ROI on yield data analysis systems through a business process management strategy
NASA Astrophysics Data System (ADS)
Rehani, Manu; Strader, Nathan; Hanson, Jeff
2005-05-01
The overriding motivation for yield engineering is profitability, which is achieved through yield management. The first application is to continually reduce waste in the form of yield loss. New products, new technologies, and the dynamic state of the process and equipment keep introducing new ways to cause yield loss; in response, yield management efforts must continually produce new solutions to minimize it. The second application of yield engineering is to aid in accurate product pricing, achieved by predicting the future results of the yield engineering effort. The more accurate the yield prediction, the more accurate the wafer start volume, and the more accurate the wafer pricing. Another aspect of yield prediction is gauging the impact of a yield problem and predicting how long it will last; the ability to predict such impacts again feeds into wafer start calculations and wafer pricing. The question, then, is: if the stakes in yield management are so high, why are most yield management efforts run like science and engineering projects rather than like manufacturing? In the eighties, manufacturing put the theory of constraints into practice and placed a premium on stability and predictability in manufacturing activities; why can't the same be done for yield management? This line of introspection led us to define and implement a business process to manage yield engineering activities. We analyzed the best known methods (BKM) and deployed a workflow tool to make them the standard operating procedure (SOP) for yield management. We present a case study in deploying a Business Process Management solution for semiconductor yield engineering in a high-mix ASIC environment, including a description of the situation prior to deployment, a window into the development process, and a valuation of the benefits.
Shackelford, S D; Wheeler, T L; Koohmaraie, M
2003-01-01
The present experiment was conducted to evaluate the ability of the U.S. Meat Animal Research Center's beef carcass image analysis system to predict calculated yield grade, longissimus muscle area, preliminary yield grade, adjusted preliminary yield grade, and marbling score under commercial beef processing conditions. In two commercial beef-processing facilities, image analysis was conducted on 800 carcasses on the beef-grading chain immediately after the conventional USDA beef quality and yield grades were applied. Carcasses were blocked by plant and observed calculated yield grade. The carcasses were then separated, with 400 carcasses assigned to a calibration data set that was used to develop regression equations, and the remaining 400 carcasses assigned to a prediction data set used to validate the regression equations. Prediction equations, which included image analysis variables and hot carcass weight, accounted for 90, 88, 90, 88, and 76% of the variation in calculated yield grade, longissimus muscle area, preliminary yield grade, adjusted preliminary yield grade, and marbling score, respectively, in the prediction data set. In comparison, the official USDA yield grade as applied by online graders accounted for 73% of the variation in calculated yield grade. The technology described herein could be used by the beef industry to more accurately determine beef yield grades; however, this system does not provide an accurate enough prediction of marbling score to be used without USDA grader interaction for USDA quality grading.
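The calibration/validation design described above (400 carcasses to fit regression equations, 400 held out to validate them) can be sketched as follows; the feature matrix and coefficients are synthetic placeholders, not the study's image-analysis variables.

```python
# Hypothetical sketch of a calibration/prediction split: fit a regression on
# half the carcasses, report variance explained on the other half.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 800
X = rng.normal(size=(n, 5))            # image-analysis variables + hot carcass weight (placeholders)
y = X @ np.array([0.8, -0.5, 0.3, 0.2, 0.6]) + rng.normal(scale=0.4, size=n)

calib, pred = slice(0, 400), slice(400, 800)     # calibration vs. prediction sets
model = LinearRegression().fit(X[calib], y[calib])
r2 = r2_score(y[pred], model.predict(X[pred]))   # fraction of variation explained
print(f"R^2 on prediction set: {r2:.2f}")
```

Reporting R² on the held-out prediction set, rather than the calibration set, is what makes the study's 76-90% figures honest estimates of predictive quality.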
Atomic Oxygen Erosion Yield Prediction for Spacecraft Polymers in Low Earth Orbit
NASA Technical Reports Server (NTRS)
Banks, Bruce A.; Backus, Jane A.; Manno, Michael V.; Waters, Deborah L.; Cameron, Kevin C.; deGroh, Kim K.
2009-01-01
The ability to predict the atomic oxygen erosion yield of polymers based on their chemistry and physical properties has been only partially successful because of a lack of reliable low Earth orbit (LEO) erosion yield data. Unfortunately, many of the early experiments did not utilize dehydrated mass loss measurements for erosion yield determination, and the resulting mass loss due to atomic oxygen exposure may have been compromised because samples were often not in consistent states of dehydration during the pre-flight and post-flight mass measurements. This is a particular problem for short-duration mission exposures or low erosion yield materials. However, as a result of the retrieval of the Polymer Erosion and Contamination Experiment (PEACE) flown as part of the Materials International Space Station Experiment 2 (MISSE 2), the erosion yields of 38 polymers and pyrolytic graphite were accurately measured. The experiment was exposed to the LEO environment for 3.95 years, from August 16, 2001 to July 30, 2005, and was successfully retrieved during a spacewalk on July 30, 2005, during Discovery's STS-114 Return to Flight mission. The 40 different materials tested (including Kapton H fluence witness samples) were selected specifically to represent a variety of polymers used in space as well as a wide variety of polymer chemical structures. The MISSE 2 PEACE Polymers experiment used carefully dehydrated mass measurements, as well as accurate density measurements, to obtain accurate erosion yield data for a high fluence (8.43 × 10^21 atoms/cm^2). The resulting data were used to develop an erosion yield predictive tool with a correlation coefficient of 0.895 and an uncertainty of ±6.3 × 10^-25 cm^3/atom. The predictive tool utilizes the chemical structures and physical properties of polymers to predict in-space atomic oxygen erosion yields. A predictive tool concept (September 2009 version) is presented which represents an improvement over an earlier (December 2008) version.
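The erosion yields behind such flight data follow the standard mass-loss relation Ey = ΔM / (A · ρ · F). A minimal sketch, with illustrative sample numbers except for the MISSE 2 fluence quoted above:

```python
# Standard erosion-yield calculation from dehydrated mass loss:
#   Ey = dM / (A * rho * F)
# Sample values below are illustrative, except the MISSE 2 fluence of
# 8.43e21 atoms/cm^2 quoted in the abstract.
def erosion_yield(mass_loss_g, area_cm2, density_g_cm3, fluence_atoms_cm2):
    """Erosion yield in cm^3/atom."""
    return mass_loss_g / (area_cm2 * density_g_cm3 * fluence_atoms_cm2)

# Illustrative Kapton-like sample: ~0.036 g lost over 1 cm^2, density 1.42 g/cm^3
Ey = erosion_yield(0.0359, 1.0, 1.42, 8.43e21)
print(f"Ey = {Ey:.2e} cm^3/atom")
```

The result lands on the order of 3 × 10^-24 cm^3/atom, the commonly cited magnitude for Kapton H, which is why it serves as the fluence witness material.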
NASA Astrophysics Data System (ADS)
Cai, Y.
2017-12-01
Accurately forecasting crop yields has broad implications for economic trading, food production monitoring, and global food security. However, the variation of environmental variables presents challenges to modeling yields accurately, especially when the lack of highly accurate measurements creates difficulties in building models that succeed across space and time. In 2016, we developed a sequence of machine-learning based models forecasting end-of-season corn yields for the US at both the county and national levels. We combined machine learning algorithms in a hierarchical way, and used an understanding of physiological processes in temporal feature selection, to achieve high precision in our intra-season forecasts, including in very anomalous seasons. During the live run, we predicted the national corn yield within 1.40% of the final USDA number as early as August. In backtesting over the 2000-2015 period, our model predicts national yield within 2.69% of the actual yield on average by mid-August. At the county level, our model predicts 77% of the variation in final yield using data through the beginning of August, improving to 80% by the beginning of October, with the percentage of counties predicted within 10% of the average yield increasing from 68% to 73%. Further, the lowest errors are in the most significant producing regions, resulting in very high precision national-level forecasts. In addition, we identify the changes in important variables throughout the season, specifically early-season land surface temperature, and mid-season land surface temperature and vegetation index. For the 2017 season, we added 2016 data to the training set, together with additional geospatial data sources, aiming to make the current model even more precise. We will show how our 2017 US corn yield forecasts converge over the season, which factors affect yield the most, and our plans for 2018 model adjustments.
Payne, Courtney E; Wolfrum, Edward J
2015-01-01
Obtaining accurate chemical composition and reactivity (measures of carbohydrate release and yield) information for biomass feedstocks in a timely manner is necessary for the commercialization of biofuels. Our objective was to use near-infrared (NIR) spectroscopy and partial least squares (PLS) multivariate analysis to develop calibration models to predict the feedstock composition and the release and yield of soluble carbohydrates generated by a bench-scale dilute acid pretreatment and enzymatic hydrolysis assay. Major feedstocks included in the calibration models are corn stover, sorghum, switchgrass, perennial cool season grasses, rice straw, and miscanthus. We present individual model statistics to demonstrate model performance and validation samples to more accurately measure predictive quality of the models. The PLS-2 model for composition predicts glucan, xylan, lignin, and ash (wt%) with uncertainties similar to primary measurement methods. A PLS-2 model was developed to predict glucose and xylose release following pretreatment and enzymatic hydrolysis. An additional PLS-2 model was developed to predict glucan and xylan yield. PLS-1 models were developed to predict the sum of glucose/glucan and xylose/xylan for release and yield (grams per gram). The release and yield models have higher uncertainties than the primary methods used to develop the models. It is possible to build effective multispecies feedstock models for composition, as well as carbohydrate release and yield. The model for composition is useful for predicting glucan, xylan, lignin, and ash with good uncertainties. The release and yield models have higher uncertainties; however, these models are useful for rapidly screening sample populations to identify unusual samples.
Minimum number of measurements for evaluating soursop (Annona muricata L.) yield.
Sánchez, C F B; Teodoro, P E; Londoño, S; Silva, L A; Peixoto, L A; Bhering, L L
2017-05-31
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of soursop (Annona muricata L.) genotypes based on fruit yield. Sixteen measurements of fruit yield from 71 soursop genotypes were carried out between 2000 and 2016. In order to estimate r with the best accuracy, four procedures were used: analysis of variance, principal component analysis based on the correlation matrix, principal component analysis based on the phenotypic variance and covariance matrix, and structural analysis based on the correlation matrix. The minimum number of measurements needed to predict the actual value of individuals was estimated. Principal component analysis using the phenotypic variance and covariance matrix provided the most accurate estimates of both r and the number of measurements required for accurate evaluation of fruit yield in soursop. Our results indicate that selection of soursop genotypes with high fruit yield can be performed based on the third and fourth measurements in the early years and/or based on the eighth and ninth measurements at more advanced stages.
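The link between the repeatability coefficient and the number of measurements rests on a standard relation: with m repeated measurements, the achieved coefficient of determination is R = mr / (1 + (m − 1)r), so the minimum m for a target R is m = R(1 − r) / (r(1 − R)). A small sketch with an illustrative r value (not the paper's estimate):

```python
# Standard repeatability arithmetic; the r value is hypothetical.
import math

def min_measurements(r, R=0.90):
    """Minimum number of measurements to reach determination R given repeatability r."""
    return math.ceil(R * (1 - r) / (r * (1 - R)))

def determination(r, m):
    """Coefficient of determination achieved with m repeated measurements."""
    return m * r / (1 + (m - 1) * r)

r = 0.55                      # hypothetical repeatability of fruit yield
m = min_measurements(r)       # measurements needed for R >= 0.90
print(m, round(determination(r, m), 3))
```

With r = 0.55, eight measurements already push the determination above 0.90, which illustrates why a moderate repeatability can still support early selection.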
Improving Seasonal Crop Monitoring and Forecasting for Soybean and Corn in Iowa
NASA Astrophysics Data System (ADS)
Togliatti, K.; Archontoulis, S.; Dietzel, R.; VanLoocke, A.
2016-12-01
Accurately forecasting crop yield in advance of harvest could greatly benefit farmers; however, few evaluations have been conducted to determine the effectiveness of forecasting methods. We tested one such method, which used short-term weather forecasts from the Weather Research and Forecasting (WRF) model to predict in-season weather variables (maximum and minimum temperature, precipitation, and radiation) at four forecast lengths (2 weeks, 1 week, 3 days, and 0 days). These forecasted weather data, along with current and historical (previous 35 years) data from the Iowa Environmental Mesonet, were combined to drive Agricultural Production Systems sIMulator (APSIM) simulations forecasting soybean and corn yields in 2015 and 2016. The goal of this study is to find the forecast length that reduces the variability of simulated yield predictions while also increasing their accuracy. APSIM simulations of crop variables were evaluated against bi-weekly field measurements of phenology, biomass, and leaf area index from early- and late-planted soybean plots located at the Agricultural Engineering and Agronomy Research Farm in central Iowa as well as the Northwest Research Farm in northwestern Iowa. WRF model predictions were evaluated against observed weather data collected at the experimental fields. Maximum temperature was the most accurately predicted variable, followed by minimum temperature and radiation; precipitation was least accurate according to RMSE values and the number of days forecasted within 20% error of the observed weather. Our analysis indicated that for the majority of months in the growing season, the 3-day forecast performed best, the 1-week forecast second best, and the 2-week forecast worst. Preliminary results for yield indicate that the 2-week forecast is the least variable of the forecast lengths but also the least accurate; the 3-day and 1-week forecasts are more accurate but more variable.
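The two verification metrics used above, RMSE and the share of days forecast within 20% of the observed value, can be sketched with placeholder data:

```python
# Illustrative sketch of the weather-verification metrics; the observed and
# forecast series below are synthetic placeholders, not study data.
import numpy as np

def rmse(forecast, observed):
    return float(np.sqrt(np.mean((forecast - observed) ** 2)))

def frac_within(forecast, observed, tol=0.20):
    """Fraction of days where |error| <= tol * |observed|."""
    return float(np.mean(np.abs(forecast - observed) <= tol * np.abs(observed)))

obs = np.array([30.1, 28.4, 31.0, 29.5, 27.8])   # e.g. daily max temperature (deg C)
fc  = np.array([29.0, 29.9, 30.2, 28.1, 29.0])   # hypothetical 3-day forecast
print(f"RMSE = {rmse(fc, obs):.2f}, within 20%: {frac_within(fc, obs):.0%}")
```

Using both metrics matters: RMSE penalizes large misses, while the within-tolerance fraction rewards consistently usable forecasts even if occasional days are badly wrong.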
Biaxial Testing of 2219-T87 Aluminum Alloy Using Cruciform Specimens
NASA Technical Reports Server (NTRS)
Dawicke, D. S.; Pollock, W. D.
1997-01-01
A cruciform biaxial test specimen was designed and seven biaxial tensile tests were conducted on 2219-T87 aluminum alloy. An elastic-plastic finite element analysis was used to simulate each test and predict the yield stresses. The elastic-plastic finite element analysis accurately simulated the measured load-strain behavior for each test. The yield stresses predicted by the finite element analyses indicated that the yield behavior of the 2219-T87 aluminum alloy agrees with the von Mises yield criterion.
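For in-plane biaxial stress with no shear, the von Mises check referenced above reduces to comparing sqrt(σ1² − σ1·σ2 + σ2²) against the uniaxial yield stress. A small sketch; the nominal yield strength below is an assumed, typical value for 2219-T87, not test data from the report:

```python
# von Mises equivalent stress for plane biaxial loading (no shear).
import math

def von_mises_biaxial(s1, s2):
    """Equivalent stress for principal stresses s1, s2 with s3 = 0."""
    return math.sqrt(s1**2 - s1 * s2 + s2**2)

sigma_y = 393.0                   # MPa, assumed nominal 2219-T87 yield strength
s1 = s2 = sigma_y                 # equibiaxial loading
print(von_mises_biaxial(s1, s2))  # equals sigma_y: von Mises predicts equibiaxial
                                  # yielding at the uniaxial yield stress
```

This is the distinguishing prediction the biaxial tests probe: Tresca, by contrast, would predict the same equibiaxial yield point but diverges from von Mises under shear-dominated ratios.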
Real-time yield estimation based on deep learning
NASA Astrophysics Data System (ADS)
Rahnemoonfar, Maryam; Sheppard, Clay
2017-05-01
Crop yield estimation is an important task in product management and marketing. Accurate yield prediction helps farmers make better decisions on cultivation practices, plant disease prevention, and the size of the harvest labor force. The current practice of yield estimation, based on manual counting of fruits, is a very time-consuming and expensive process, and it is not practical for large fields. Robotic systems, including unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs), provide an efficient, cost-effective, flexible, and scalable solution for product management and yield prediction. Recently, large volumes of data have been gathered from agricultural fields; however, efficient analysis of those data is still a challenging task. Computer vision approaches currently face different challenges in automatic counting of fruits or flowers, including occlusion caused by leaves, branches, or other fruits; variance in natural illumination; and scale. In this paper, a novel deep convolutional network algorithm was developed to facilitate accurate yield prediction and automatic counting of fruits and vegetables in images. Our method is robust to occlusion, shadow, uneven illumination, and scale. Experimental results show the effectiveness of our algorithm in comparison to the state of the art.
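The counting-by-regression idea can be sketched, very loosely, as a small convolutional network mapping an image to a single non-negative count (PyTorch assumed available; the paper's architecture is a deeper, purpose-built network, and this toy model is untrained):

```python
# Hypothetical minimal counting network: image in, non-negative count out.
import torch
import torch.nn as nn

class TinyCounter(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        # Softplus keeps the predicted count non-negative.
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(16, 1), nn.Softplus())

    def forward(self, x):            # x: (batch, 3, H, W) -> (batch, 1) count
        return self.head(self.features(x))

model = TinyCounter()
counts = model(torch.rand(4, 3, 64, 64))
print(counts.shape)
```

Regressing a count directly, rather than detecting each fruit, is what buys robustness to occlusion: partially hidden fruit still contribute to the learned feature statistics even when no clean bounding box exists.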
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyar, M. Darby; McCanta, Molly; Breves, Elly
2016-03-01
Pre-edge features in the K absorption edge of X-ray absorption spectra are commonly used to predict Fe3+ valence state in silicate glasses. However, this study shows that using the entire spectral region, from the pre-edge into the extended X-ray absorption fine-structure region, provides more accurate results when combined with multivariate analysis techniques. The least absolute shrinkage and selection operator (lasso) regression technique yields %Fe3+ values that are accurate to ±3.6% absolute when the full spectral region is employed. This method can be used across a broad range of glass compositions, is easily automated, and is demonstrated to yield accurate results from different synchrotrons. It will enable future studies involving X-ray mapping of redox gradients on standard thin sections at 1 × 1 μm pixel sizes.
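The full-spectrum lasso approach can be sketched as follows, with synthetic spectra standing in for XAS measurements; the point is that lasso can exploit the whole energy range while automatically zeroing out uninformative channels:

```python
# Hedged sketch: regress a valence-like quantity on full-region spectra with
# lasso. Spectra, informative channels, and units are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n, n_energies = 150, 400
X = rng.normal(size=(n, n_energies))             # full-region spectra (pre-edge..EXAFS)
true_coef = np.zeros(n_energies)
true_coef[[20, 85, 250, 330]] = [2.0, -1.5, 1.0, 0.8]   # few informative channels
y = X @ true_coef + rng.normal(scale=0.3, size=n)        # stand-in for %Fe3+

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
lasso = Lasso(alpha=0.05).fit(X_tr, y_tr)
print(f"R^2 = {lasso.score(X_te, y_te):.2f}, nonzero coefs = {np.sum(lasso.coef_ != 0)}")
```

The sparsity is the practical win for X-ray mapping: a model touching only a handful of energy channels is cheap to evaluate per pixel.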
An evaluation of the lamb vision system as a predictor of lamb carcass red meat yield percentage.
Brady, A S; Belk, K E; LeValley, S B; Dalsted, N L; Scanga, J A; Tatum, J D; Smith, G C
2003-06-01
An objective method for predicting red meat yield in lamb carcasses is needed to accurately assess true carcass value. This study was performed to evaluate the ability of the lamb vision system (LVS; Research Management Systems USA, Fort Collins, CO) to predict fabrication yields of lamb carcasses. Lamb carcasses (n = 246) were evaluated using LVS and hot carcass weight (HCW), as well as by USDA expert and on-line graders, before fabrication of carcass sides to either bone-in or boneless cuts. On-line whole number, expert whole-number, and expert nearest-tenth USDA yield grades and LVS + HCW estimates accounted for 53, 52, 58, and 60%, respectively, of the observed variability in boneless, saleable meat yields, and accounted for 56, 57, 62, and 62%, respectively, of the variation in bone-in, saleable meat yields. The LVS + HCW system predicted 77, 65, 70, and 87% of the variation in weights of boneless shoulders, racks, loins, and legs, respectively, and 85, 72, 75, and 86% of the variation in weights of bone-in shoulders, racks, loins, and legs, respectively. Addition of longissimus muscle area (REA), adjusted fat thickness (AFT), or both REA and AFT to LVS + HCW models resulted in improved prediction of boneless saleable meat yields by 5, 3, and 5 percentage points, respectively. Bone-in, saleable meat yield estimations were improved in predictive accuracy by 7.7, 6.6, and 10.1 percentage points, and in precision, when REA alone, AFT alone, or both REA and AFT, respectively, were added to the LVS + HCW output models. Use of LVS + HCW to predict boneless red meat yields of lamb carcasses was more accurate than use of current on-line whole-number, expert whole-number, or expert nearest-tenth USDA yield grades. Thus, LVS + HCW output, when used alone or in combination with AFT and/or REA, improved on-line estimation of boneless cut yields from lamb carcasses. 
The ability of LVS + HCW to predict yields of wholesale cuts suggests that LVS could be used as an objective means for pricing carcasses in a value-based marketing system.
Representing winter wheat in the Community Land Model (version 4.5)
NASA Astrophysics Data System (ADS)
Lu, Yaqiong; Williams, Ian N.; Bagley, Justin E.; Torn, Margaret S.; Kueppers, Lara M.
2017-05-01
Winter wheat is a staple crop for global food security, and is the dominant vegetation cover for a significant fraction of Earth's croplands. As such, it plays an important role in carbon cycling and land-atmosphere interactions in these key regions. Accurate simulation of winter wheat growth is not only crucial for future yield prediction under a changing climate, but also for accurately predicting the energy and water cycles for winter wheat dominated regions. We modified the winter wheat model in the Community Land Model (CLM) to better simulate winter wheat leaf area index, latent heat flux, net ecosystem exchange of CO2, and grain yield. These included schemes to represent vernalization as well as frost tolerance and damage. We calibrated three key parameters (minimum planting temperature, maximum crop growth days, and initial value of leaf carbon allocation coefficient) and modified the grain carbon allocation algorithm for simulations at the US Southern Great Plains ARM site (US-ARM), and validated the model performance at eight additional sites across North America. We found that the new winter wheat model improved the prediction of monthly variation in leaf area index, reduced latent heat flux, and net ecosystem exchange root mean square error (RMSE) by 41 and 35 % during the spring growing season. The model accurately simulated the interannual variation in yield at the US-ARM site, but underestimated yield at sites and in regions (northwestern and southeastern US) with historically greater yields by 35 %.
Murrell, Ebony G.; Juliano, Steven A.
2012-01-01
Resource competition theory predicts that R*, the equilibrium resource amount yielding zero growth of a consumer population, should predict species' competitive abilities for that resource. This concept has been supported for unicellular organisms, but has not been well-tested for metazoans, probably due to the difficulty of raising experimental populations to equilibrium and measuring population growth rates for species with long or complex life cycles. We developed an index (Rindex) of R* based on demography of one insect cohort, growing from egg to adult in a non-equilibrium setting, and tested whether Rindex yielded accurate predictions of competitive abilities using mosquitoes as a model system. We estimated finite rate of increase (λ′) from demographic data for cohorts of three mosquito species raised with different detritus amounts, and estimated each species' Rindex using nonlinear regressions of λ′ vs. initial detritus amount. All three species' Rindex differed significantly, and accurately predicted competitive hierarchy of the species determined in simultaneous pairwise competition experiments. Our Rindex could provide estimates and rigorous statistical comparisons of competitive ability for organisms for which typical chemostat methods and equilibrium population conditions are impractical. PMID:22970128
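Estimating the Rindex amounts to fitting λ′ against initial resource amount and solving λ′ = 1 (zero population growth). A sketch under an assumed saturating functional form, with hypothetical cohort data:

```python
# Hypothetical Rindex estimation: fit lambda' vs. detritus with an assumed
# saturating curve, then solve lambda'(x) = 1 for x. Data are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def growth(x, a, b):                  # assumed saturating response
    return a * (1 - np.exp(-b * x))

detritus = np.array([0.5, 1, 2, 4, 8, 16])              # g per container
lam = np.array([0.34, 0.61, 1.02, 1.48, 1.77, 1.85])    # cohort lambda'

(a, b), _ = curve_fit(growth, detritus, lam, p0=(2.0, 0.3))
r_index = -np.log(1 - 1 / a) / b      # x where growth(x) = 1, i.e. zero growth
print(f"Rindex = {r_index:.2f} g")
```

A lower Rindex indicates a species that sustains itself on less resource, which is what the competitive-hierarchy prediction turns on.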
Random Forests for Global and Regional Crop Yield Predictions.
Jeong, Jig Han; Resop, Jonathan P; Mueller, Nathaniel D; Fleisher, David H; Yun, Kyungdahm; Butler, Ethan E; Timlin, Dennis J; Shim, Kyo-Moon; Gerber, James S; Reddy, Vangimalla R; Kim, Soo-Hyung
2016-01-01
Accurate predictions of crop yield are critical for developing effective agricultural and food policies at the regional and global scales. We evaluated a machine-learning method, Random Forests (RF), for its ability to predict crop yield responses to climate and biophysical variables at global and regional scales in wheat, maize, and potato in comparison with multiple linear regressions (MLR) serving as a benchmark. We used crop yield data from various sources and regions for model training and testing: 1) gridded global wheat grain yield, 2) maize grain yield from US counties over thirty years, and 3) potato tuber and maize silage yield from the northeastern seaboard region. RF was found highly capable of predicting crop yields and outperformed MLR benchmarks in all performance statistics that were compared. For example, the root mean square errors (RMSE) ranged between 6 and 14% of the average observed yield with RF models in all test cases whereas these values ranged from 14% to 49% for MLR models. Our results show that RF is an effective and versatile machine-learning method for crop yield predictions at regional and global scales for its high accuracy and precision, ease of use, and utility in data analysis. RF may result in a loss of accuracy when predicting the extreme ends or responses beyond the boundaries of the training data.
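The RF-versus-MLR comparison can be sketched on a synthetic nonlinear yield response; RMSE is reported as a percentage of mean yield, echoing the abstract's 6-14% versus 14-49% contrast. Data and the response function are placeholders, not the study's data sets.

```python
# Hedged sketch: Random Forest vs. multiple linear regression on a response
# with nonlinear and interaction terms, which MLR cannot capture.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 600
climate = rng.uniform(0, 1, size=(n, 3))        # e.g. temperature, rain, radiation
yield_t = (5 + 3 * np.sin(3 * climate[:, 0]) * climate[:, 1]
           - 2 * (climate[:, 2] - 0.5) ** 2
           + rng.normal(scale=0.2, size=n))

X_tr, X_te, y_tr, y_te = train_test_split(climate, yield_t, random_state=0)
results = {}
for name, est in [("RF", RandomForestRegressor(n_estimators=300, random_state=0)),
                  ("MLR", LinearRegression())]:
    est.fit(X_tr, y_tr)
    results[name] = mean_squared_error(y_te, est.predict(X_te)) ** 0.5
    print(f"{name}: RMSE = {100 * results[name] / y_te.mean():.1f}% of mean yield")
```

The closing caveat in the abstract also shows up in sketches like this: trees average over training leaves, so RF extrapolates poorly outside the range of its training data.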
Wang, Yan-Bin; Hu, Yu-Zhong; Li, Wen-Le; Zhang, Wei-Song; Zhou, Feng; Luo, Zhi
2014-10-01
In the present paper, a method to predict the side-cut yields of an atmospheric and vacuum distillation unit was developed, based on rapid near-infrared (NIR) evaluation of crude oil combined with H/CAMS software. First, an NIR spectroscopy method for rapidly determining the true boiling point of crude oil was developed. Using a commercially available crude oil spectral database and experimental tests from Guangxi Petrochemical Company, a calibration model was established, with a topological method used for calibration; the model can be employed to predict the true boiling point of crude oil. Second, the true boiling point from the NIR rapid assay was converted into the side-cut product yields of the atmospheric/vacuum distillation unit by the H/CAMS software. The predicted and actual yields of the distillation products (naphtha, diesel, wax, and residual oil) were compared over a 7-month period. The results showed that the NIR rapid crude assay can predict side-cut product yields accurately. The NIR analytic method for predicting yield offers fast analysis, reliable results, and easy online operation, and it can provide elementary data for refinery planning optimization and crude oil blending.
Predicting maize phenology: Intercomparison of functions for developmental response to temperature
USDA-ARS's Scientific Manuscript database
Accurate prediction of phenological development in maize is fundamental to determining crop adaptation and yield potential. A number of thermal functions are used in crop models, but their relative precision in predicting maize development has not been quantified. The objectives of this study were t...
Constitutive Modeling of Piezoelectric Polymer Composites
NASA Technical Reports Server (NTRS)
Odegard, Gregory M.; Gates, Tom (Technical Monitor)
2003-01-01
A new modeling approach is proposed for predicting the bulk electromechanical properties of piezoelectric composites. The proposed model offers the same level of convenience as the well-known Mori-Tanaka method. In addition, it is shown to yield predicted properties that are, in most cases, more accurate or equally as accurate as the Mori-Tanaka scheme. In particular, the proposed method is used to determine the electromechanical properties of four piezoelectric polymer composite materials as a function of inclusion volume fraction. The predicted properties are compared to those calculated using the Mori-Tanaka and finite element methods.
Atomic Oxygen Erosion Yield Predictive Tool for Spacecraft Polymers in Low Earth Orbit
NASA Technical Reports Server (NTRS)
Banks, Bruce A.; de Groh, Kim K.; Backus, Jane A.
2008-01-01
A predictive tool was developed to estimate the low Earth orbit (LEO) atomic oxygen erosion yield of polymers based on the results of the Polymer Erosion and Contamination Experiment (PEACE) Polymers experiment flown as part of the Materials International Space Station Experiment 2 (MISSE 2). The MISSE 2 PEACE experiment accurately measured the erosion yield of a wide variety of polymers and pyrolytic graphite. The 40 different materials tested were selected specifically to represent a variety of polymers used in space as well as a wide variety of polymer chemical structures. The resulting erosion yield data were used to develop a predictive tool which utilizes chemical structure and physical properties of polymers that can be measured in ground laboratory testing to predict the in-space atomic oxygen erosion yield of a polymer. The properties include chemical structure, bonding information, density and ash content. The resulting predictive tool has a correlation coefficient of 0.914 when compared with actual MISSE 2 space data for 38 polymers and pyrolytic graphite. The intent of the predictive tool is to be able to make estimates of atomic oxygen erosion yields for new polymers without requiring expensive and time-consuming in-space testing.
A Systems Modeling Approach to Forecast Corn Economic Optimum Nitrogen Rate.
Puntel, Laila A; Sawyer, John E; Barker, Daniel W; Thorburn, Peter J; Castellano, Michael J; Moore, Kenneth J; VanLoocke, Andrew; Heaton, Emily A; Archontoulis, Sotirios V
2018-01-01
Historically crop models have been used to evaluate crop yield responses to nitrogen (N) rates after harvest when it is too late for the farmers to make in-season adjustments. We hypothesize that the use of a crop model as an in-season forecast tool will improve current N decision-making. To explore this, we used the Agricultural Production Systems sIMulator (APSIM) calibrated with long-term experimental data for central Iowa, USA (16-years in continuous corn and 15-years in soybean-corn rotation) combined with actual weather data up to a specific crop stage and historical weather data thereafter. The objectives were to: (1) evaluate the accuracy and uncertainty of corn yield and economic optimum N rate (EONR) predictions at four forecast times (planting time, 6th and 12th leaf, and silking phenological stages); (2) determine whether the use of analogous historical weather years based on precipitation and temperature patterns as opposed to using a 35-year dataset could improve the accuracy of the forecast; and (3) quantify the value added by the crop model in predicting annual EONR and yields using the site-mean EONR and the yield at the EONR to benchmark predicted values. Results indicated that the mean corn yield predictions at planting time (R2 = 0.77) using 35-years of historical weather was close to the observed and predicted yield at maturity (R2 = 0.81). Across all forecasting times, the EONR predictions were more accurate in corn-corn than soybean-corn rotation (relative root mean square error, RRMSE, of 25 vs. 45%, respectively). At planting time, the APSIM model predicted the direction of optimum N rates (above, below or at average site-mean EONR) in 62% of the cases examined (n = 31) with an average error range of ±38 kg N ha−1 (22% of the average N rate). Across all forecast times, prediction error of EONR was about three times higher than yield predictions.
The use of the 35-year weather record was better than using selected historical weather years to forecast (RRMSE was on average 3% lower). Overall, the proposed approach of using the crop model as a forecasting tool could improve year-to-year predictability of corn yields and optimum N rates. Further improvements in modeling and set-up protocols are needed toward more accurate forecast, especially for extreme weather years with the most significant economic and environmental cost.
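The EONR in the study above comes from simulated yield response curves; as an illustration of the underlying economics only (not the APSIM procedure), a quadratic yield response admits a closed-form optimum where the marginal value of extra grain equals the marginal cost of N. All coefficients and prices below are hypothetical:

```python
def eonr_quadratic(b, c, grain_price, n_price):
    """Economic optimum N rate for a quadratic yield response
    y(N) = a + b*N + c*N**2 with c < 0 (yields and N in kg/ha).
    Profit grain_price*y(N) - n_price*N is maximized where
    grain_price*(b + 2*c*N) = n_price, i.e. where the marginal
    grain value equals the marginal fertilizer cost."""
    return (n_price / grain_price - b) / (2.0 * c)

# Hypothetical response y = 4000 + 30*N - 0.07*N**2 (kg/ha),
# at $0.15/kg grain and $0.90/kg N:
print(round(eonr_quadratic(30.0, -0.07, 0.15, 0.90), 1))  # → 171.4 kg N/ha
```

The price ratio shifts the optimum below the yield-maximizing rate (here 214 kg N/ha, where b + 2cN = 0), which is why EONR rather than maximum-yield N is the quantity forecast.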
Residual Strength Prediction of Fuselage Structures with Multiple Site Damage
NASA Technical Reports Server (NTRS)
Chen, Chuin-Shan; Wawrzynek, Paul A.; Ingraffea, Anthony R.
1999-01-01
This paper summarizes recent results on simulating full-scale pressure tests of wide body, lap-jointed fuselage panels with multiple site damage (MSD). The crack tip opening angle (CTOA) fracture criterion and the FRANC3D/STAGS software program were used to analyze stable crack growth under conditions of general yielding. The link-up of multiple cracks and residual strength of damaged structures were predicted. Elastic-plastic finite element analysis based on the von Mises yield criterion and incremental flow theory with small strain assumption was used. A global-local modeling procedure was employed in the numerical analyses. Stress distributions from the numerical simulations are compared with strain gage measurements. Analysis results show that accurate representation of the load transfer through the rivets is crucial for the model to predict the stress distribution accurately. Predicted crack growth and residual strength are compared with test data. Observed and predicted results both indicate that the occurrence of small MSD cracks substantially reduces the residual strength. Modeling fatigue closure is essential to capture the fracture behavior during the early stable crack growth. Breakage of a tear strap can have a major influence on residual strength prediction.
Comparison of wheat yield simulated using three N cycling options in the SWAT model
USDA-ARS's Scientific Manuscript database
The Soil and Water Assessment Tool (SWAT) model has been successfully used to predict alterations in streamflow, evapotranspiration and soil water; however, it is not clear how effective or accurate SWAT is at predicting crop growth. Previous research suggests that while the hydrologic balance in e...
Ren, Jingzheng
2018-01-01
The anaerobic digestion process has been recognized as a promising way to treat waste and recover energy sustainably. Modelling the anaerobic digestion system is important for effectively and accurately controlling, adjusting, and predicting the system for higher methane yield. The GM(1,N) approach, which requires neither a mechanistic description nor a large number of samples, was employed to model the anaerobic digestion system and predict methane yield. To illustrate the proposed model, a case study of anaerobic digestion of municipal solid waste for methane yield was conducted, and the results demonstrate that the GM(1,N) model can effectively simulate the anaerobic digestion system in cases of poor information and with little computational expense. Copyright © 2017 Elsevier Ltd. All rights reserved.
Invited review: A commentary on predictive cheese yield formulas.
Emmons, D B; Modler, H W
2010-12-01
Predictive cheese yield formulas have evolved from one based only on casein and fat in 1895. Refinements have included moisture and salt in cheese and whey solids as separate factors, paracasein instead of casein, and exclusion of whey solids from moisture associated with cheese protein. The General, Barbano, and Van Slyke formulas were tested critically using yield and composition of milk, whey, and cheese from 22 vats of Cheddar cheese. The General formula is based on the sum of cheese components: fat, protein, moisture, salt, whey solids free of fat and protein, as well as milk salts associated with paracasein. The testing yielded unexpected revelations. It was startling that the sum of components in cheese was <100%; the mean was 99.51% (N × 6.31). The mean predicted yield was only 99.17% as a percentage of actual yields (PY%AY); PY%AY is a useful term for comparisons of yields among vats. The PY%AY correlated positively with the sum of components (SofC) in cheese. The apparent low estimation of SofC led to the idea of adjusting upwards, for each vat, the 5 measured components in the formula by the observed SofC, as a fraction. The mean of the adjusted predicted yields as percentages of actual yields was 99.99%. The adjusted forms of the General, Barbano, and Van Slyke formulas gave predicted yields equal to the actual yields. It was apparent that unadjusted yield formulas did not accurately predict yield; however, unadjusted PY%AY can be useful as a control tool for analyses of cheese and milk. It was unexpected that total milk protein in the adjusted General formula gave the same predicted yields as casein and paracasein, indicating that casein or paracasein may not always be necessary for successful yield prediction. The use of constants for recovery of fat and protein in the adjusted General formula gave adjusted predicted yields equal to actual yields, indicating that analyses of cheese for protein and fat may not always be necessary for yield prediction. 
Composition of cheese was estimated using a predictive formula; actual yield was needed for estimation of composition. Adjusted formulas are recommended for estimating target yields and cheese yield efficiency. Constants for solute exclusion, protein-associated milk salts, and whey solids could be used and reduced the complexity of the General formula. Normalization of fat recovery increased variability of predicted yields. Copyright © 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Bernard R. Parresol; Steven C. Stedman
2004-01-01
The accuracy of forest growth and yield forecasts affects the quality of forest management decisions (Rauscher et al. 2000). Users of growth and yield models want assurance that model outputs are reasonable and mimic local/regional forest structure and composition and accurately reflect the influences of stand dynamics such as competition and disturbance. As such,...
Comparison of statistical models for analyzing wheat yield time series.
Michel, Lucie; Makowski, David
2013-01-01
The world's population is predicted to exceed nine billion by 2050 and there is increasing concern about the capability of agriculture to feed such a large population. Foresight studies on food security are frequently based on crop yield trends estimated from yield time series provided by national and regional statistical agencies. Various types of statistical models have been proposed for the analysis of yield time series, but the predictive performances of these models have not yet been evaluated in detail. In this study, we present eight statistical models for analyzing yield time series and compare their ability to predict wheat yield at the national and regional scales, using data provided by the Food and Agriculture Organization of the United Nations and by the French Ministry of Agriculture. The Holt-Winters and dynamic linear models performed equally well, giving the most accurate predictions of wheat yield. However, dynamic linear models have two advantages over Holt-Winters models: they can be used to reconstruct past yield trends retrospectively and to analyze uncertainty. The results obtained with dynamic linear models indicated a stagnation of wheat yields in many countries, but the estimated rate of increase of wheat yield remained above 0.06 t ha⁻¹ year⁻¹ in several countries in Europe, Asia, Africa and America, and the estimated values were highly uncertain for several major wheat producing countries. The rate of yield increase differed considerably between French regions, suggesting that efforts to identify the main causes of yield stagnation should focus on a subnational scale.
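For concreteness, without a seasonal component the Holt-Winters family mentioned above reduces to Holt's linear-trend exponential smoothing, which suits annual yield series. A minimal pure-Python sketch; the yield series and smoothing parameters are illustrative, not the FAO or French Ministry data:

```python
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=1):
    """Holt's linear-trend exponential smoothing: maintains a
    smoothed level and trend, and extrapolates them 'horizon'
    steps ahead. Initialized from the first two observations."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend

# Hypothetical national wheat yields (t/ha), slowly rising:
print(round(holt_forecast([6.0, 6.1, 6.3, 6.2, 6.4]), 2))  # → 6.48
```

The dynamic linear models favored in the study generalize this recursion with explicit observation and state noise, which is what makes retrospective trend reconstruction and uncertainty analysis possible.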
Hydrostatic Stress Effect On the Yield Behavior of Inconel 100
NASA Technical Reports Server (NTRS)
Allen, Phillip A.; Wilson, Christopher D.
2002-01-01
Classical metal plasticity theory assumes that hydrostatic stress has no effect on the yield and postyield behavior of metals. Recent reexaminations of classical theory have revealed a significant effect of hydrostatic stress on the yield behavior of notched geometries. New experiments and nonlinear finite element analyses (FEA) of Inconel 100 (IN 100) equal-arm bend and double-edge notch tension (DENT) test specimens have revealed the effect of internal hydrostatic tensile stresses on yielding. Nonlinear FEA using the von Mises (yielding is independent of hydrostatic stress) and the Drucker-Prager (yielding is linearly dependent on hydrostatic stress) yield functions was performed. In all test cases, the von Mises constitutive model, which is independent of hydrostatic pressure, overestimated the load for a given displacement or strain. Considering the failure displacements or strains, the Drucker-Prager FEMs predicted loads that were 3% to 5% lower than the von Mises values. For the failure loads, the Drucker-Prager FEMs predicted strains that were 20% to 35% greater than the von Mises values. The Drucker-Prager yield function seems to more accurately predict the overall specimen response of geometries with significant internal hydrostatic stress influence.
Jaime-Pérez, José Carlos; Jiménez-Castillo, Raúl Alberto; Vázquez-Hernández, Karina Elizabeth; Salazar-Riojas, Rosario; Méndez-Ramírez, Nereida; Gómez-Almaguer, David
2017-10-01
Advances in automated cell separators have improved the efficiency of plateletpheresis and the possibility of obtaining double products (DP). We assessed cell processor accuracy of predicted platelet (PLT) yields with the goal of a better prediction of DP collections. This retrospective proof-of-concept study included 302 plateletpheresis procedures performed on a Trima Accel v6.0 at the apheresis unit of a hematology department. Donor variables, software predicted yield and actual PLT yield were statistically evaluated. Software prediction was optimized by linear regression analysis and its optimal cut-off to obtain a DP assessed by receiver operating characteristic curve (ROC) modeling. Three hundred and two plateletpheresis procedures were performed; on 271 (89.7%) occasions, donors were men and on 31 (10.3%) women. Pre-donation PLT count had the best direct correlation with actual PLT yield (r = 0.486, P < .001). Means of software machine-derived values differed significantly from actual PLT yield, 4.72 × 10¹¹ vs. 6.12 × 10¹¹, respectively (P < .001). The following equation was developed to adjust these values: actual PLT yield = 0.221 + (1.254 × theoretical platelet yield). The ROC curve model showed an optimal apheresis device software prediction cut-off of 4.65 × 10¹¹ to obtain a DP, with a sensitivity of 82.2%, specificity of 93.3%, and an area under the curve (AUC) of 0.909. Trima Accel v6.0 software consistently underestimated PLT yields. A simple correction derived from linear regression analysis accurately corrected this underestimation and ROC analysis identified a precise cut-off to reliably predict a DP. © 2016 Wiley Periodicals, Inc.
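The study's correction and cut-off reduce to two one-liners; a sketch using the regression equation and ROC threshold reported above (platelet yields in units of 10¹¹ PLT; function names are ours, not the study's):

```python
def corrected_platelet_yield(machine_pred):
    """Linear correction from the study above for the Trima Accel
    v6.0 software's systematic underestimation of platelet yield:
    actual ≈ 0.221 + 1.254 * predicted (units of 1e11 PLT)."""
    return 0.221 + 1.254 * machine_pred

def qualifies_for_double_product(machine_pred, cutoff=4.65):
    """ROC-derived software-prediction cut-off (4.65e11 PLT) for
    attempting a double-product collection."""
    return machine_pred >= cutoff

# The study's mean machine prediction, corrected:
print(round(corrected_platelet_yield(4.72), 2))  # → 6.14
```

Applied to the mean machine prediction of 4.72 × 10¹¹, the correction gives ≈6.14 × 10¹¹, close to the observed mean actual yield of 6.12 × 10¹¹ reported above.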
A Battery Health Monitoring Framework for Planetary Rovers
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Kulkarni, Chetan Shrikant
2014-01-01
Batteries have seen an increased use in electric ground and air vehicles for commercial, military, and space applications as the primary energy source. An important aspect of using batteries in such contexts is battery health monitoring. Batteries must be carefully monitored such that the battery health can be determined, and end of discharge and end of usable life events may be accurately predicted. For planetary rovers, battery health estimation and prediction is critical to mission planning and decision-making. We develop a model-based approach utilizing computationally efficient and accurate electrochemistry models of batteries. An unscented Kalman filter yields state estimates, which are then used to predict the future behavior of the batteries and, specifically, end of discharge. The prediction algorithm accounts for possible future power demands on the rover batteries in order to provide meaningful results and an accurate representation of prediction uncertainty. The framework is demonstrated on a set of lithium-ion batteries powering a rover at NASA.
Sankey, Joel B.; McVay, Jason C.; Kreitler, Jason R.; Hawbaker, Todd J.; Vaillant, Nicole; Lowe, Scott
2015-01-01
Increased sedimentation following wildland fire can negatively impact water supply and water quality. Understanding how changing fire frequency, extent, and location will affect watersheds and the ecosystem services they supply to communities is of great societal importance in the western USA and throughout the world. In this work we assess the utility of the InVEST (Integrated Valuation of Ecosystem Services and Tradeoffs) Sediment Retention Model to accurately characterize erosion and sedimentation of burned watersheds. InVEST was developed by the Natural Capital Project at Stanford University (Tallis et al., 2014) and is a suite of GIS-based implementations of common process models, engineered for high-end computing to allow the faster simulation of larger landscapes and incorporation into decision-making. The InVEST Sediment Retention Model is based on common soil erosion models (e.g., USLE, the Universal Soil Loss Equation), determines which areas of the landscape contribute the greatest sediment loads to a hydrological network, and, conversely, evaluates the ecosystem service of sediment retention on a watershed basis. In this study, we evaluate the accuracy and uncertainties of InVEST predictions of increased sedimentation after fire, using measured post-fire sediment yields available for many watersheds throughout the western USA from an existing, published large database. We show that the model can be parameterized in a relatively simple fashion to predict post-fire sediment yield accurately. Our ultimate goal is to use the model to accurately predict variability in post-fire sediment yield at a watershed scale as a function of future wildfire conditions.
Fu, Yong-Bi; Yang, Mo-Hua; Zeng, Fangqin; Biligetu, Bill
2017-01-01
Molecular plant breeding with the aid of molecular markers has played an important role in modern plant breeding over the last two decades. Many marker-based predictions for quantitative traits have been made to enhance parental selection, but trait prediction accuracy remains generally low, even with the aid of dense, genome-wide SNP markers. To search for more accurate trait-specific prediction with informative SNP markers, we conducted a literature review on the prediction issues in molecular plant breeding and on the applicability of an RNA-Seq technique for developing function-associated specific trait (FAST) SNP markers. To understand whether and how FAST SNP markers could enhance trait prediction, we also performed theoretical reasoning on the effectiveness of these markers in a trait-specific prediction, and verified the reasoning through computer simulation. In the end, the search yielded an alternative to regular genomic selection with FAST SNP markers that could be explored to achieve more accurate trait-specific prediction. Continuous search for better alternatives is encouraged to enhance marker-based predictions for an individual quantitative trait in molecular plant breeding. PMID:28729875
Application of JAERI quantum molecular dynamics model for collisions of heavy nuclei
NASA Astrophysics Data System (ADS)
Ogawa, Tatsuhiko; Hashimoto, Shintaro; Sato, Tatsuhiko; Niita, Koji
2016-06-01
The quantum molecular dynamics (QMD) model incorporated into the general-purpose radiation transport code PHITS was revised for accurate prediction of fragment yields in peripheral collisions. For more accurate simulation of peripheral collisions, the stability of the nuclei in their ground state was improved and the algorithm to reject invalid events was modified. In-medium correction on nucleon-nucleon cross sections was also considered. To clarify the effect of this improvement on fragmentation of heavy nuclei, the new QMD model coupled with a statistical decay model was used to calculate fragment production cross sections of Ag and Au targets and compared with the data of earlier measurements. It is shown that the revised version can predict cross sections more accurately.
DOE Office of Scientific and Technical Information (OSTI.GOV)
WANG,YIFENG; XU,HUIFANG
Correctly identifying the possible alteration products and accurately predicting their occurrence in a repository-relevant environment are key for the source-term calculation in a repository performance assessment. Uraninite in uranium deposits has long been used as a natural analog to spent fuel in a repository because of their chemical and structural similarity. In this paper, a SEM/AEM investigation has been conducted on a partially altered uraninite sample from the Shinkolobwe uranium ore deposit of Congo. The mineral formation sequences were identified: uraninite → uranyl hydrates → uranyl silicates → Ca-uranyl silicates, or uraninite → uranyl silicates → Ca-uranyl silicates. Reaction-path calculations were conducted for the oxidative dissolution of spent fuel in a representative Yucca Mountain groundwater. The predicted sequence is in general consistent with the SEM observations. The calculations also show that uranium carbonate minerals are unlikely to become major solubility-controlling mineral phases in a Yucca Mountain environment. Some discrepancies between model predictions and field observations are observed. Those discrepancies may result from poorly constrained thermodynamic data for uranyl silicate minerals.
Using artificial neural network and satellite data to predict rice yield in Bangladesh
NASA Astrophysics Data System (ADS)
Akhand, Kawsar; Nizamuddin, Mohammad; Roytman, Leonid; Kogan, Felix; Goldberg, Mitch
2015-09-01
Rice production in Bangladesh is a crucial part of the national economy and provides about 70 percent of an average citizen's total calorie intake. The demand for rice is constantly rising as Bangladesh's population grows every year, while the growing population also shrinks the available cultivation land. In addition, Bangladesh faces production constraints such as drought, flooding, salinity, lack of irrigation facilities and lack of modern technology. To maintain self-sufficiency in rice, Bangladesh will have to continue to expand rice production by increasing yield at a rate at least equal to population growth until the demand for rice has stabilized. Accurate rice yield prediction is one of the most important challenges in managing the supply and demand of rice as well as in decision-making processes. An Artificial Neural Network (ANN) is used to construct a model to predict Aus rice yield in Bangladesh. Advanced Very High Resolution Radiometer (AVHRR)-based remote sensing vegetation health (VH) indices (Vegetation Condition Index (VCI) and Temperature Condition Index (TCI)) are used as input variables, and official statistics of Aus rice yield are used as the target variable for the ANN prediction model. The result obtained with the ANN method is encouraging, with a prediction error of less than 10%. Such predictions can therefore play an important role in planning and storing sufficient rice to face any future uncertainty.
Comparison of forward flight effects theory of A. Michalke and U. Michel with measured data
NASA Technical Reports Server (NTRS)
Rawls, J. W., Jr.
1983-01-01
The scaling laws of A. Michalke and U. Michel predict flyover noise of a single-stream, shock-free circular jet from static data or static predictions. The theory is based on a far-field solution to Lighthill's equation and includes density terms which are important for heated jets. This theory is compared with measured data using two static jet noise prediction methods. The comparisons indicate the theory yields good results when the static noise levels are accurately predicted.
Cannell, R C; Belk, K E; Tatum, J D; Wise, J W; Chapman, P L; Scanga, J A; Smith, G C
2002-05-01
Objective quantification of differences in wholesale cut yields of beef carcasses at plant chain speeds is important for the application of value-based marketing. This study was conducted to evaluate the ability of a commercial video image analysis system, the Computer Vision System (CVS) to 1) predict commercially fabricated beef subprimal yield and 2) augment USDA yield grading, in order to improve accuracy of grade assessment. The CVS was evaluated as a fully installed production system, operating on a full-time basis at chain speeds. Steer and heifer carcasses (n = 296) were evaluated using CVS, as well as by USDA expert and online graders, before the fabrication of carcasses into industry-standard subprimal cuts. Expert yield grade (YG), online YG, CVS estimated carcass yield, and CVS measured ribeye area in conjunction with expert grader estimates of the remaining YG factors (adjusted fat thickness, percentage of kidney-pelvic-heart fat, hot carcass weight) accounted for 67, 39, 64, and 65% of the observed variation in fabricated yields of closely trimmed subprimals. The dual component CVS predicted wholesale cut yields more accurately than current online yield grading, and, in an augmentation system, CVS ribeye measurement replaced estimated ribeye area in determination of USDA yield grade, and the accuracy of cutability prediction was improved, under packing plant conditions and speeds, to a level close to that of expert graders applying grades at a comfortable rate of speed offline.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyar, M. Darby; McCanta, Molly; Breves, Elly
2016-03-01
Pre-edge features in the K absorption edge of X-ray absorption spectra are commonly used to predict Fe³⁺ valence state in silicate glasses. However, this study shows that using the entire spectral region from the pre-edge into the extended X-ray absorption fine-structure region provides more accurate results when combined with multivariate analysis techniques. The least absolute shrinkage and selection operator (lasso) regression technique yields %Fe³⁺ values that are accurate to ±3.6% absolute when the full spectral region is employed. This method can be used across a broad range of glass compositions, is easily automated, and is demonstrated to yield accurate results from different synchrotrons. It will enable future studies involving X-ray mapping of redox gradients on standard thin sections at 1 × 1 μm pixel sizes.
Molecular determinants of blood-brain barrier permeation.
Geldenhuys, Werner J; Mohammad, Afroz S; Adkins, Chris E; Lockman, Paul R
2015-01-01
The blood-brain barrier (BBB) is a microvascular unit which selectively regulates the permeability of drugs to the brain. With the rise in CNS drug targets and diseases, there is a need to accurately predict a priori which compounds in a company database should be pursued for favorable properties. In this review, we explore the different computational tools available today and relate them to the experimental methods used to determine BBB permeability, including the in vitro and in vivo models that yield the datasets used to generate predictive models. Understanding how these models were experimentally derived determines how accurately they can be used to find a balance between activity and BBB distribution.
Modeling central metabolism and energy biosynthesis across microbial life
Edirisinghe, Janaka N.; Weisenhorn, Pamela; Conrad, Neal; Xia, Fangfang; Overbeek, Ross; Stevens, Rick L.; Henry, Christopher S.
2016-08-08
Automatically generated bacterial metabolic models, and even some curated models, lack accuracy in predicting energy yields due to poor representation of key pathways in energy biosynthesis and the electron transport chain (ETC). Further compounding the problem, complex interlinking pathways in genome-scale metabolic models, and the need for extensive gapfilling to support complex biomass reactions, often result in unrealistic yield predictions or unrealistic physiological flux profiles. To overcome this challenge, we developed methods and tools ( http://coremodels.mcs.anl.gov ) to build high-quality core metabolic models (CMMs) representing accurate energy biosynthesis based on a well-studied, phylogenetically diverse set of model organisms. We compare these models to explore the variability of core pathways across all microbial life, and by analyzing the ability of our core models to synthesize ATP and essential biomass precursors, we evaluate the extent to which the core metabolic pathways and functional ETCs are known for all microbes. 6,600 (80%) of our models were found to have some type of aerobic ETC, whereas 5,100 (62%) have an anaerobic ETC and 1,279 (15%) have no ETC at all. Using our manually curated ETC and energy biosynthesis pathways with no gapfilling at all, we predict accurate ATP yields for 5,586 (70%) of the models under aerobic and anaerobic growth conditions. This study revealed gaps in our knowledge of the central pathways that leave 2,495 (30%) of the CMMs unable to produce ATP under any of the tested conditions. We then established a methodology for the systematic identification and correction of inconsistent annotations using core metabolic models coupled with phylogenetic analysis. In conclusion, we predict accurate energy yields based on our improved annotations in energy biosynthesis pathways and the implementation of diverse ETC reactions across the microbial tree of life.
We highlight missing annotations that were essential to energy biosynthesis in our models, examine the diversity of these pathways across all microbial life, and enable the scientific community to explore the analyses generated from this large-scale study of over 8,000 microbial genomes.
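The ATP-yield check described above can be illustrated with a toy flux-balance problem; the three-reaction network below is hypothetical and far simpler than the authors' core models, but the linear-programming structure is the same:

```python
import numpy as np
from scipy.optimize import linprog

# Toy core network (hypothetical, for illustration only):
#   v1: glucose uptake           (-> G),  bounded at 10
#   v2: lumped fermentation      (G -> 2 ATP)
#   v3: ATP drain / objective    (ATP ->)
# Rows: metabolites G, ATP; columns: v1, v2, v3.
S = np.array([[ 1.0, -1.0,  0.0],
              [ 0.0,  2.0, -1.0]])

# Maximize v3 (ATP production) subject to steady state S v = 0.
res = linprog(c=[0.0, 0.0, -1.0],
              A_eq=S, b_eq=[0.0, 0.0],
              bounds=[(0, 10), (0, None), (0, None)])

atp_flux = -res.fun
atp_yield = atp_flux / 10.0   # ATP per unit glucose taken up
print(atp_yield)              # 2.0 for this lumped pathway
```

A model "unable to produce ATP" in the study's sense corresponds to this optimum being zero for every tested uptake condition.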
Remaining Useful Life Prediction for Lithium-Ion Batteries Based on Gaussian Processes Mixture
Li, Lingling; Wang, Pengchong; Chao, Kuei-Hsiang; Zhou, Yatong; Xie, Yang
2016-01-01
The remaining useful life (RUL) prediction of Lithium-ion batteries is closely related to the capacity degeneration trajectories. Due to self-charging and capacity regeneration, the trajectories have the property of multimodality. Traditional prediction models such as support vector machines (SVM) or Gaussian Process regression (GPR) cannot accurately characterize this multimodality. This paper proposes a novel RUL prediction method based on the Gaussian Process Mixture (GPM). It handles multimodality by fitting different segments of a trajectory with different GPR models separately, such that the subtle differences among these segments can be revealed. The method is demonstrated to be effective by experiments on two commercial rechargeable 18650 Lithium-ion batteries provided by NASA. The performance comparison among the models illustrates that the GPM is more accurate than the SVM and the GPR. In addition, the GPM can yield a predictive confidence interval, which makes the prediction more reliable than that of traditional models. PMID:27632176
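A miniature of the GPM idea can be sketched with scikit-learn: fit one GP per trajectory segment rather than a single GP over the whole multimodal curve. The capacity trajectory, regeneration jump, and segment boundaries below are synthetic assumptions (the GPM learns its segmentation; here it is fixed by hand):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic capacity trajectory with one regeneration jump at cycle 60
# (illustrative numbers, not the NASA data).
cycles = np.arange(100.0)
capacity = 2.0 - 0.005 * cycles
capacity[60:] += 0.08            # capacity-regeneration jump
capacity += np.random.default_rng(1).normal(scale=0.005, size=100)

# One GP per segment instead of a single GP over the whole trajectory.
kernel = RBF(length_scale=20.0) + WhiteKernel(noise_level=1e-4)
segments = [(0, 60), (60, 100)]
models = []
for lo, hi in segments:
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(cycles[lo:hi, None], capacity[lo:hi])
    models.append((lo, hi, gp))

# Predict (with a confidence interval) at cycle 80 using its segment's GP.
_, _, gp = next(m for m in models if m[0] <= 80 < m[1])
mean, std = gp.predict(np.array([[80.0]]), return_std=True)
print(mean[0], 1.96 * std[0])    # point prediction and 95% half-width
```

The `return_std` output is what gives the predictive confidence interval the abstract highlights as an advantage over plain SVM regression.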
Lu, Xiaowei; Berge, Nicole D
2014-08-01
As the exploration of the carbonization of mixed feedstocks continues, there is a distinct need to understand how feedstock chemical composition and structural complexity influence the composition of generated products. Laboratory experiments were conducted to evaluate the carbonization of pure compounds, mixtures of the pure compounds, and complex feedstocks comprised of the pure compounds (e.g., paper, wood). Results indicate that feedstock properties do influence carbonization product properties. Carbonization product characteristics were predicted using results from the carbonization of the pure compounds; recovered solids energy contents are more accurately predicted than solid yields and the carbon mass in each phase, while solids surface functional groups are more difficult to predict using this linear approach. To more accurately predict carbonization products, it may be necessary to account for feedstock structure and/or additional feedstock properties.
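The "linear approach" tested here can be sketched as a mass-fraction-weighted average of pure-compound results; the compound names, fractions, and property values below are hypothetical, not from the study:

```python
# Linear mixing rule sketch: predict a carbonization product property of a
# mixed feedstock from the pure-compound results, weighted by mass
# fraction.  All values are hypothetical placeholders.
pure_energy = {"cellulose": 24.0, "lignin": 29.0, "xylan": 22.0}  # MJ/kg hydrochar
mass_fraction = {"cellulose": 0.5, "lignin": 0.3, "xylan": 0.2}

predicted = sum(mass_fraction[c] * pure_energy[c] for c in pure_energy)
print(predicted)   # 25.1 MJ/kg under the additivity assumption
```

The study's finding is precisely that this additivity holds reasonably well for energy content but breaks down for yields and surface chemistry, where feedstock structure matters.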
Prediction of beef carcass salable yield and trimmable fat using bioelectrical impedance analysis.
Zollinger, B L; Farrow, R L; Lawrence, T E; Latman, N S
2010-03-01
Bioelectrical impedance technology (BIA) is capable of providing an objective method of beef carcass yield estimation with the rapidity of yield grading. Electrical resistance (Rs), reactance (Xc), impedance (I), hot carcass weight (HCW), fat thickness between the 12th and 13th ribs (FT), estimated percentage kidney, pelvic, and heart fat (KPH%), longissimus muscle area (LMA), length between electrodes (LGE) as well as three derived carcass values that included electrical volume (EVOL), reactive density (XcD), and resistive density (RsD) were determined for the carcasses of 41 commercially fed cattle. Carcasses were subsequently fabricated into salable beef products reflective of industry standards. Equations were developed to predict percentage salable carcass yield (SY%) and percentage trimmable fat (FT%). Resulting equations accounted for 81% and 84% of variation in SY% and FT%, respectively. These results indicate that BIA technology is an accurate predictor of beef carcass composition.
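The equation-development step is, at its core, a least-squares regression. A sketch with synthetic carcass data follows; the EVOL = LGE²/Rs volume term is an assumption based on standard impedance-analysis practice, not quoted from the paper, and all numbers are invented:

```python
import numpy as np

# Synthetic stand-ins for the 41 measured carcasses.
rng = np.random.default_rng(2)
n = 41
LGE = rng.uniform(90, 120, n)        # electrode length, cm
Rs  = rng.uniform(300, 500, n)       # resistance, ohm
HCW = rng.uniform(280, 420, n)       # hot carcass weight, kg
EVOL = LGE**2 / Rs                   # assumed BIA volume term

# Hypothetical "true" relationship used to generate the targets.
SY = 55.0 + 0.4 * EVOL - 0.02 * HCW + rng.normal(scale=0.5, size=n)

# Ordinary least squares, as in classical prediction-equation building.
X = np.column_stack([np.ones(n), EVOL, HCW])
coef, *_ = np.linalg.lstsq(X, SY, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((SY - pred)**2) / np.sum((SY - np.mean(SY))**2)
print("R^2:", r2)
```

The reported 81% and 84% figures are the analogous R² values for the study's actual SY% and FT% equations.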
A new hydrodynamic prediction of the peak heat flux from horizontal cylinders in low speed upflow
NASA Technical Reports Server (NTRS)
Ungar, E. K.; Eichhorn, R.
1988-01-01
Flow-boiling data have been obtained for horizontal cylinders in saturated acetone, isopropanol, and water, yielding heat flux vs. wall superheat boiling curves for the organic liquids. A region of low speed upflow is identified in which long cylindrical bubbles break off from the wake with regular frequency. The Strouhal number of bubble breakoff is a function only of the Froude number in any liquid, and the effective wake thickness in all liquids is a function of the density ratio and the Froude number. A low speed flow boiling burnout prediction procedure is presented which yields accurate results in widely dissimilar liquids.
Melon yield prediction using small unmanned aerial vehicles
NASA Astrophysics Data System (ADS)
Zhao, Tiebiao; Wang, Zhongdao; Yang, Qi; Chen, YangQuan
2017-05-01
Thanks to the development of camera technologies and small unmanned aerial systems (sUAS), it is possible to collect aerial images of fields with more flexible revisit schedules, higher resolution, and much lower cost. Furthermore, the performance of object detection based on deeply trained convolutional neural networks (CNNs) has improved significantly. In this study, we applied these technologies to melon production, using high-resolution aerial images to count melons in the field and predict the yield. The CNN-based object detection framework Faster R-CNN was applied to melon detection. Our results showed that sUAS plus CNNs were able to detect melons accurately in the late harvest season.
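The counting step can be sketched without the detector itself: given hypothetical Faster R-CNN outputs (boxes and confidence scores), a score threshold plus simple non-maximum suppression yields the melon count. All boxes and thresholds below are illustrative:

```python
import numpy as np

# Intersection-over-union of two [x1, y1, x2, y2] boxes.
def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def count_melons(boxes, scores, score_thr=0.5, iou_thr=0.3):
    """Keep confident boxes, suppress duplicates, return the count."""
    order = np.argsort(scores)[::-1]
    kept = []
    for i in order:
        if scores[i] < score_thr:
            continue
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in kept):
            kept.append(i)
    return len(kept)

# Toy detections: two true melons, one duplicate box, one low-confidence box.
boxes = np.array([[10, 10, 50, 50], [12, 11, 52, 49],
                  [100, 100, 140, 140], [200, 200, 220, 220]], float)
scores = np.array([0.95, 0.90, 0.85, 0.30])
print(count_melons(boxes, scores))   # 2
```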
Quantitative self-assembly prediction yields targeted nanomedicines
NASA Astrophysics Data System (ADS)
Shamay, Yosi; Shah, Janki; Işık, Mehtap; Mizrachi, Aviram; Leibold, Josef; Tschaharganeh, Darjus F.; Roxbury, Daniel; Budhathoki-Uprety, Januka; Nawaly, Karla; Sugarman, James L.; Baut, Emily; Neiman, Michelle R.; Dacek, Megan; Ganesh, Kripa S.; Johnson, Darren C.; Sridharan, Ramya; Chu, Karen L.; Rajasekhar, Vinagolu K.; Lowe, Scott W.; Chodera, John D.; Heller, Daniel A.
2018-02-01
Development of targeted nanoparticle drug carriers often requires complex synthetic schemes involving both supramolecular self-assembly and chemical modification. These processes are generally difficult to predict, execute, and control. We describe herein a targeted drug delivery system that is accurately and quantitatively predicted to self-assemble into nanoparticles based on the molecular structures of precursor molecules, which are the drugs themselves. The drugs assemble with the aid of sulfated indocyanines into particles with ultrahigh drug loadings of up to 90%. We devised quantitative structure-nanoparticle assembly prediction (QSNAP) models to identify and validate electrotopological molecular descriptors as highly predictive indicators of nano-assembly and nanoparticle size. The resulting nanoparticles selectively targeted kinase inhibitors to caveolin-1-expressing human colon cancer and autochthonous liver cancer models to yield striking therapeutic effects while avoiding pERK inhibition in healthy skin. This finding enables the computational design of nanomedicines based on quantitative models for drug payload selection.
Water and wastewater infrastructure systems represent a major capital investment; utilities must ensure they are getting the highest yield possible on their investment, both in terms of dollars and water quality. Accurate information related to equipment, pipe characteristics, l...
De Buck, Stefan S; Sinha, Vikash K; Fenu, Luca A; Gilissen, Ron A; Mackie, Claire E; Nijsen, Marjoleen J
2007-04-01
The aim of this study was to assess a physiologically based modeling approach for predicting drug metabolism, tissue distribution, and bioavailability in rat for a structurally diverse set of neutral and moderate-to-strong basic compounds (n = 50). Hepatic blood clearance (CL(h)) was projected using microsomal data and shown to be well predicted, irrespective of the type of hepatic extraction model (80% within 2-fold). Best predictions of CL(h) were obtained disregarding both plasma and microsomal protein binding, whereas strong bias was seen using either blood binding only or both plasma and microsomal protein binding. Two mechanistic tissue composition-based equations were evaluated for predicting volume of distribution (V(dss)) and tissue-to-plasma partitioning (P(tp)). A first approach, which accounted for ionic interactions with acidic phospholipids, resulted in accurate predictions of V(dss) (80% within 2-fold). In contrast, a second approach, which disregarded ionic interactions, was a poor predictor of V(dss) (60% within 2-fold). The first approach also yielded accurate predictions of P(tp) in muscle, heart, and kidney (80% within 3-fold), whereas in lung, liver, and brain, predictions ranged from 47% to 62% within 3-fold. Using the second approach, P(tp) prediction accuracy in muscle, heart, and kidney was on average 70% within 3-fold, and ranged from 24% to 54% in all other tissues. Combining all methods for predicting V(dss) and CL(h) resulted in accurate predictions of the in vivo half-life (70% within 2-fold). Oral bioavailability was well predicted using CL(h) data and Gastroplus Software (80% within 2-fold). These results illustrate that physiologically based prediction tools can provide accurate predictions of rat pharmacokinetics.
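Projecting hepatic blood clearance from microsomal data conventionally uses the well-stirred liver model; a minimal sketch under that assumption follows, with hypothetical parameter values (the study does not quote these numbers):

```python
# Well-stirred hepatic extraction model, the standard scale-up from
# microsomal intrinsic clearance to hepatic blood clearance.
def well_stirred_clh(Q_h, fu, cl_int):
    """Hepatic blood clearance from liver blood flow Q_h, unbound
    fraction fu, and scaled intrinsic clearance cl_int (same units)."""
    return Q_h * fu * cl_int / (Q_h + fu * cl_int)

Q_h = 55.2        # rat liver blood flow, mL/min/kg (literature-typical value)
fu = 1.0          # "disregarding binding", as found best in the study
cl_int = 120.0    # scaled intrinsic clearance, mL/min/kg (hypothetical)

clh = well_stirred_clh(Q_h, fu, cl_int)
print(round(clh, 1))   # 37.8, i.e. an extraction ratio of about 0.69
```

Setting fu = 1 mirrors the study's observation that disregarding both plasma and microsomal binding gave the least-biased CL(h) predictions.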
Rising temperatures reduce global wheat production
NASA Astrophysics Data System (ADS)
Asseng, S.; Ewert, F.; Martre, P.; Rötter, R. P.; Lobell, D. B.; Cammarano, D.; Kimball, B. A.; Ottman, M. J.; Wall, G. W.; White, J. W.; Reynolds, M. P.; Alderman, P. D.; Prasad, P. V. V.; Aggarwal, P. K.; Anothai, J.; Basso, B.; Biernath, C.; Challinor, A. J.; de Sanctis, G.; Doltra, J.; Fereres, E.; Garcia-Vila, M.; Gayler, S.; Hoogenboom, G.; Hunt, L. A.; Izaurralde, R. C.; Jabloun, M.; Jones, C. D.; Kersebaum, K. C.; Koehler, A.-K.; Müller, C.; Naresh Kumar, S.; Nendel, C.; O'Leary, G.; Olesen, J. E.; Palosuo, T.; Priesack, E.; Eyshi Rezaei, E.; Ruane, A. C.; Semenov, M. A.; Shcherbak, I.; Stöckle, C.; Stratonovitch, P.; Streck, T.; Supit, I.; Tao, F.; Thorburn, P. J.; Waha, K.; Wang, E.; Wallach, D.; Wolf, J.; Zhao, Z.; Zhu, Y.
2015-02-01
Crop models are essential tools for assessing the threat of climate change to local and global food production. Present models used to predict wheat grain yield are highly uncertain when simulating how crops respond to temperature. Here we systematically tested 30 different wheat crop models of the Agricultural Model Intercomparison and Improvement Project against field experiments in which growing season mean temperatures ranged from 15 °C to 32 °C, including experiments with artificial heating. Many models simulated yields well, but were less accurate at higher temperatures. The model ensemble median was consistently more accurate in simulating the crop temperature response than any single model, regardless of the input information used. Extrapolating the model ensemble temperature response indicates that warming is already slowing yield gains at a majority of wheat-growing locations. Global wheat production is estimated to fall by 6% for each °C of further temperature increase and become more variable over space and time.
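The ensemble-median effect can be illustrated numerically; the synthetic "models" below, each with its own bias and noise around an assumed yield response, are illustrative stand-ins for the 30-model ensemble:

```python
import numpy as np

rng = np.random.default_rng(3)
true_yield = np.linspace(8.0, 4.0, 50)            # t/ha, falling with warming
# 30 synthetic models, each with an individual bias plus noise.
models = np.stack([true_yield + rng.normal(b, 0.6, 50)
                   for b in rng.normal(0.0, 0.5, 30)])

rmse = lambda pred: np.sqrt(np.mean((pred - true_yield) ** 2))
single = np.array([rmse(m) for m in models])
ensemble = rmse(np.median(models, axis=0))
print(ensemble, single.min())
```

Because independent model errors partly cancel, the per-point median tracks the true response more closely than even the best individual model, which is the behavior the intercomparison reports.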
A Comparison of Classical Force-Fields for Molecular Dynamics Simulations of Lubricants
Ewen, James P.; Gattinoni, Chiara; Thakkar, Foram M.; Morgan, Neal; Spikes, Hugh A.; Dini, Daniele
2016-01-01
For the successful development and application of lubricants, a full understanding of their complex nanoscale behavior under a wide range of external conditions is required, but this is difficult to obtain experimentally. Nonequilibrium molecular dynamics (NEMD) simulations can be used to yield unique insights into the atomic-scale structure and friction of lubricants and additives; however, the accuracy of the results depends on the chosen force-field. In this study, we demonstrate that the use of an accurate, all-atom force-field is critical in order to (i) accurately predict important properties of long-chain, linear molecules and (ii) reproduce the experimental friction behavior of multi-component tribological systems. In particular, we focus on n-hexadecane, an important model lubricant with a wide range of industrial applications. Moreover, simulating conditions common in tribological systems, i.e., high temperatures and pressures (HTHP), allows the limits of the selected force-fields to be tested. In the first section, a large number of united-atom and all-atom force-fields are benchmarked in terms of their density and viscosity prediction accuracy for n-hexadecane using equilibrium molecular dynamics (EMD) simulations at ambient and HTHP conditions. Whilst united-atom force-fields accurately reproduce experimental density, the viscosity is significantly under-predicted compared to all-atom force-fields and experiments. Moreover, some all-atom force-fields yield elevated melting points, leading to significant overestimation of both the density and viscosity. In the second section, the most accurate united-atom and all-atom force-fields are compared in confined NEMD simulations which probe the structure and friction of stearic acid adsorbed on iron oxide and separated by a thin layer of n-hexadecane.
The united-atom force-field provides an accurate representation of the structure of the confined stearic acid film; however, friction coefficients are consistently under-predicted, and the friction-coverage and friction-velocity behavior deviates from that observed using all-atom force-fields and in experiments. This has important implications regarding force-field selection for NEMD simulations of systems containing long-chain, linear molecules; specifically, it is recommended that accurate all-atom potentials, such as L-OPLS-AA, are employed. PMID:28773773
NASA Astrophysics Data System (ADS)
Davenport, F., IV; Harrison, L.; Shukla, S.; Husak, G. J.; Funk, C. C.
2017-12-01
We evaluate the predictive accuracy of an ensemble of empirical model specifications that use earth observation data to predict sub-national grain yields in Mexico and East Africa. Products that are actively used for seasonal drought monitoring are tested as yield predictors. Our research is driven by the fact that East Africa is a region where decisions regarding agricultural production are critical to preventing the loss of economic livelihoods and human life. Regional grain yield forecasts can be used to anticipate the availability and prices of key staples, which in turn can inform decisions about targeting humanitarian response such as food aid. Our objective is to identify, for a given region, grain, and time of year, what type of model and/or earth observation product can most accurately predict end-of-season yields. We fit a set of models to county-level panel data from Mexico, Kenya, Sudan, South Sudan, and Somalia. We then examine out-of-sample predictive accuracy using various linear and non-linear models that incorporate spatially and time-varying coefficients. We compare accuracy within and across models that use predictor variables from remotely sensed measures of precipitation, temperature, soil moisture, and other land surface processes. We also examine at what point in the season a given model or product is most useful for predictive accuracy. Finally, we compare predictive accuracy across a variety of agricultural regimes, including high-intensity irrigated commercial agriculture and rain-fed subsistence-level farms.
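The out-of-sample comparison described above can be sketched in miniature: score two hypothetical earth-observation predictors (an early-season and a late-season signal) by hold-out RMSE against synthetic county yield anomalies. All data and relationships below are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 120
late = rng.normal(size=n)                           # e.g. peak-season NDVI anomaly
early = 0.4 * late + rng.normal(scale=0.9, size=n)  # weaker early-season signal
yield_anom = 1.2 * late + rng.normal(scale=0.5, size=n)

def holdout_rmse(x, y, n_train=80):
    """Fit a simple linear model on the first n_train counties,
    score on the held-out remainder."""
    a, b = np.polyfit(x[:n_train], y[:n_train], 1)
    resid = y[n_train:] - (a * x[n_train:] + b)
    return np.sqrt(np.mean(resid**2))

print("early-season RMSE:", holdout_rmse(early, yield_anom))
print("late-season  RMSE:", holdout_rmse(late, yield_anom))
```

Running the same scoring at successive points in the season is what identifies when a given product becomes useful, as the abstract describes.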
Yield estimation of corn with multispectral data and the potential of using imaging spectrometers
NASA Astrophysics Data System (ADS)
Bach, Heike
1997-05-01
In the frame of the special yield estimation, a regular procedure conducted for the European Union to more accurately estimate agricultural yield, a project was conducted for the State Ministry for Rural Environment, Food and Forestry of Baden-Wuerttemberg, Germany, to test remote sensing data with advanced yield formation models for accuracy and timeliness of yield estimation of corn. The methodology employed uses field-based plant parameter estimation from atmospherically corrected multitemporal/multispectral LANDSAT-TM data. An agrometeorological plant-production model is used for yield prediction. Based solely on four LANDSAT-derived estimates and daily meteorological data, the grain yield of corn stands was determined for 1995. The modeled yield was compared with results independently gathered within the special yield estimation for 23 test fields in the Upper Rhine Valley. The agreement between LANDSAT-based estimates and the special yield estimation shows a relative error of 2.3 percent. The comparison of results for single fields shows that, six weeks before harvest, the grain yield of single corn fields was estimated with a mean relative accuracy of 13 percent using satellite information. The presented methodology can be transferred to other crops and geographical regions. For future applications, hyperspectral sensors show great potential to further improve the results of yield prediction with remote sensing.
USDA-ARS's Scientific Manuscript database
Reliable, precise and accurate estimates of disease severity are important for predicting yield loss, monitoring and forecasting epidemics, for assessing crop germplasm for disease resistance, and for understanding fundamental biological processes including co-evolution. In some situations poor qual...
Cannell, R C; Tatum, J D; Belk, K E; Wise, J W; Clayton, R P; Smith, G C
1999-11-01
An improved ability to quantify differences in the fabrication yields of beef carcasses would facilitate the application of value-based marketing. This study was conducted to evaluate the ability of the Dual-Component Australian VIASCAN to 1) predict fabricated beef subprimal yields as a percentage of carcass weight at each of three fat-trim levels and 2) augment USDA yield grading, thereby improving accuracy of grade placement. Steer and heifer carcasses (n = 240) were evaluated using VIASCAN, as well as by USDA expert and online graders, before fabrication of carcasses to each of three fat-trim levels. Expert yield grade (YG), online YG, VIASCAN estimates, and VIASCAN estimated ribeye area used to augment actual and expert grader estimates of the remaining YG factors (adjusted fat thickness, percentage of kidney-pelvic-heart fat, and hot carcass weight), respectively, 1) accounted for 51, 37, 46, and 55% of the variation in fabricated yields of commodity-trimmed subprimals, 2) accounted for 74, 54, 66, and 75% of the variation in fabricated yields of closely trimmed subprimals, and 3) accounted for 74, 54, 71, and 75% of the variation in fabricated yields of very closely trimmed subprimals. The VIASCAN system predicted fabrication yields more accurately than current online yield grading and, when certain VIASCAN-measured traits were combined with some USDA yield grade factors in an augmentation system, the accuracy of cutability prediction was improved, at packing plant line speeds, to a level matching that of expert graders applying grades at a comfortable rate.
Accurate interatomic force fields via machine learning with covariant kernels
NASA Astrophysics Data System (ADS)
Glielmo, Aldo; Sollich, Peter; De Vita, Alessandro
2017-06-01
We present a novel scheme to accurately predict atomic forces as vector quantities, rather than sets of scalar components, by Gaussian process (GP) regression. This is based on matrix-valued kernel functions, on which we impose the requirements that the predicted force rotates with the target configuration and is independent of any rotations applied to the configuration database entries. We show that such covariant GP kernels can be obtained by integration over the elements of the rotation group SO(d) for the relevant dimensionality d. Remarkably, in specific cases the integration can be carried out analytically and yields a conservative force field that can be recast into a pair interaction form. Finally, we show that restricting the integration to a summation over the elements of a finite point group relevant to the target system is sufficient to recover an accurate GP. The accuracy of our kernels in predicting quantum-mechanical forces in real materials is investigated by tests on pure and defective Ni, Fe, and Si crystalline systems.
Gowda, Dhananjaya; Airaksinen, Manu; Alku, Paavo
2017-09-01
Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, the QCP analysis, which belongs to the family of temporally weighted linear prediction (WLP) methods, uses the conventional forward type of sample prediction. This may not be the best choice, especially in computing WLP models with a hard-limiting weighting function. A sample-selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted based on its past as well as future samples, thereby utilizing the available samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach, as well as on natural speech utterances, show that the proposed QCP-FB method yields statistically significant improvements over the conventional linear prediction and QCP methods.
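The forward vs. forward-backward contrast above can be sketched in a few lines. This is a simplified illustration of the general idea (ordinary least-squares linear prediction on a synthetic AR(2) resonance), not the QCP-FB weighting scheme itself; all signal parameters below are invented for the sketch.

```python
import numpy as np

def forward_lp(x, order):
    """Conventional forward LP: predict x[t] from the `order` preceding samples."""
    X = np.array([x[t - order:t][::-1] for t in range(order, len(x))])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return a

def forward_backward_lp(x, order):
    """Stack forward and backward prediction equations so that interior
    samples contribute twice, using both past and future neighbours."""
    rows, targets = [], []
    for t in range(order, len(x)):                       # forward equations
        rows.append(x[t - order:t][::-1]); targets.append(x[t])
    for t in range(len(x) - order):                      # backward equations
        rows.append(x[t + 1:t + order + 1]); targets.append(x[t])
    a, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return a

# Synthetic "vowel-like" resonance: a stable AR(2) process.
rng = np.random.default_rng(4)
n = 400
e = rng.normal(size=n)
x = np.zeros(n)
for t in range(2, n):
    x[t] = 1.6 * x[t - 1] - 0.9 * x[t - 2] + e[t]

a_f = forward_lp(x, 2)
a_fb = forward_backward_lp(x, 2)
print(a_f, a_fb)
```

For a stationary real-valued AR process the backward predictor shares the forward coefficients, so both estimates should land near the true (1.6, -0.9); the forward-backward system simply uses more equations per frame.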
Annual Corn Yield Estimation through Multi-temporal MODIS Data
NASA Astrophysics Data System (ADS)
Shao, Y.; Zheng, B.; Campbell, J. B.
2013-12-01
This research employed 13 years of Moderate Resolution Imaging Spectroradiometer (MODIS) data to estimate annual corn yield for the Midwest of the United States. The overall objective of this study was to examine whether annual corn yield could be accurately predicted using MODIS time-series NDVI (Normalized Difference Vegetation Index) and ancillary data such as monthly precipitation and temperature. MODIS-NDVI 16-day composite images were acquired from the USGS EROS Data Center for calendar years 2000 to 2012. For the same time period, county-level corn yield statistics were obtained from the National Agricultural Statistics Service (NASS). The monthly precipitation and temperature measures were derived from Precipitation-Elevation Regressions on Independent Slopes Model (PRISM) climate data. A cropland mask was derived using the 2006 National Land Cover Database. For each county and within the cropland mask, the MODIS-NDVI time-series data and PRISM climate data were spatially averaged at their respective time steps. We developed a random forest predictive model with the MODIS-NDVI and climate data as predictors and corn yield as the response. To assess model accuracy, we used twelve years of data for training and the remaining year as a hold-out testing set. The training and testing procedures were repeated 13 times. The R2 ranged from 0.72 to 0.83 for testing years. It was also found that the inclusion of climate data did not improve the model's predictive performance. MODIS-NDVI time-series data alone might provide sufficient information for county-level corn yield prediction.
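The leave-one-year-out protocol described above can be sketched as follows. The paper's random forest and its MODIS/NASS/PRISM inputs are not reproduced here; a plain least-squares model and synthetic county-year data stand in, so the cross-validation loop itself is the focus.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2013)            # 13 calendar years, as in the study
n_counties = 50                          # invented county count

# Synthetic stand-ins: 23 NDVI composites per county-year, yield response.
X = rng.normal(size=(len(years) * n_counties, 23))
y = X[:, 5:15].mean(axis=1) * 30 + 150 + rng.normal(scale=5, size=len(X))
year_of_row = np.repeat(years, n_counties)

def r2(obs, pred):
    ss_res = ((obs - pred) ** 2).sum()
    ss_tot = ((obs - obs.mean()) ** 2).sum()
    return 1 - ss_res / ss_tot

scores = {}
for hold_out in years:                   # repeat 13 times, one year held out
    tr = year_of_row != hold_out
    te = year_of_row == hold_out
    A = np.column_stack([X[tr], np.ones(tr.sum())])
    coef, *_ = np.linalg.lstsq(A, y[tr], rcond=None)
    pred = np.column_stack([X[te], np.ones(te.sum())]) @ coef
    scores[int(hold_out)] = r2(y[te], pred)

print(len(scores), min(scores.values()))
```

Each hold-out year plays the role of an unseen season; the per-year R2 values are the analogue of the 0.72-0.83 range reported in the abstract.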
The ability of video image analysis to predict lean meat yield and EUROP score of lamb carcasses.
Einarsson, E; Eythórsdóttir, E; Smith, C R; Jónmundsson, J V
2014-07-01
A total of 862 lamb carcasses that were evaluated by both the VIAscan® and the current EUROP classification system were deboned and the actual yield was measured. Models were derived for predicting lean meat yield of the legs (Leg%), loin (Loin%) and shoulder (Shldr%) using the best VIAscan® variables selected by stepwise regression analysis of a calibration data set (n=603). The equations were tested on a validation data set (n=259). The results showed that the VIAscan® predicted lean meat yield in the leg, loin and shoulder with an R2 of 0.60, 0.31 and 0.47, respectively, whereas the current EUROP system predicted lean yield with an R2 of 0.57, 0.32 and 0.37, respectively, for the three carcass parts. The VIAscan® also predicted the EUROP score of the trial carcasses, using a model derived from an earlier trial. The EUROP classification from the VIAscan® and the current system were compared for their ability to explain the variation in lean yield of the whole carcass (LMY%) and trimmed fat (FAT%). The predicted EUROP scores from the VIAscan® explained 36% of the variation in LMY% and 60% of the variation in FAT%, compared with the current EUROP system, which explained 49% and 72%, respectively. The EUROP classification obtained by the VIAscan® was tested against a panel of three expert classifiers (n=696). The VIAscan® classification agreed with 82% of the conformation classes and 73% of the fat classes assigned by the panel. It was concluded that the VIAscan® provides a technology that can directly predict LMY% of lamb carcasses with more accuracy than the current EUROP classification system. The VIAscan® is also capable of classifying lamb carcasses into EUROP classes with an accuracy that fulfils minimum demands for the Icelandic sheep industry.
Although the VIAscan® prediction of the Loin% is low, it is comparable to the current EUROP system, and should not hinder the adoption of the technology to estimate the yield of Icelandic lambs as it delivered a more accurate prediction for the Leg%, Shldr% and overall LMY% with negligible prediction bias.
NASA Astrophysics Data System (ADS)
Noor, M. J. Md; Ibrahim, A.; Rahman, A. S. A.
2018-04-01
Small-strain local measurement in the triaxial test is considered significantly more accurate than conventional external strain measurement, which is subject to the systematic errors normally associated with the test. Three submersible miniature linear variable differential transducers (LVDTs) were mounted on yokes clamped directly onto the soil sample, spaced equally at 120° from one another. The device setup, using a 0.4 N resolution load cell and a 16-bit AD converter, was capable of consistently resolving displacements of less than 1 µm and measuring axial strains ranging from less than 0.001% to 2.5%. Further analysis of the small-strain local measurement data was performed using the new Normalized Rotational Multiple Yield Surface Framework (NRMYSF) method and compared with the existing Rotational Multiple Yield Surface Framework (RMYSF) prediction method. The prediction of shear strength based on the combined intrinsic curvilinear shear strength envelope using small-strain triaxial test data confirmed the significant improvement and reliability of the measurement and analysis methods. Moreover, the NRMYSF method shows excellent predictive ability and a significant improvement toward more reliable prediction of soil strength, which can reduce the cost and time of laboratory testing.
A link prediction approach to cancer drug sensitivity prediction.
Turki, Turki; Wei, Zhi
2017-10-03
Predicting the response to a drug for cancer patients based on genomic information is an important problem in modern clinical oncology. The problem is difficult in part because many available drug sensitivity prediction algorithms neither exploit higher-quality cancer cell lines nor adopt new feature representations, both of which can lead to more accurate prediction of drug responses. By predicting drug responses accurately, oncologists gain a more complete understanding of the effective treatments for each patient, which is a core goal in precision medicine. In this paper, we model cancer drug sensitivity as a link prediction problem, which is shown to be an effective technique. We evaluate our proposed link prediction algorithms and compare them with an existing drug sensitivity prediction approach based on clinical trial data. The experimental results based on the clinical trial data show the stability of our link prediction algorithms, which yield the highest area under the ROC curve (AUC) with statistically significant improvements. We also propose a link prediction approach to obtain a new feature representation. Compared with an existing approach, the results show that incorporating the new feature representation into the link prediction algorithms significantly improves performance.
NASA Astrophysics Data System (ADS)
Bach, Heike
1998-07-01
In order to test remote sensing data with advanced yield formation models for accuracy and timeliness of corn yield estimation, a project was conducted for the State Ministry for Rural Environment, Food, and Forestry of Baden-Württemberg (Germany). This project was carried out during the course of the `Special Yield Estimation', a regular procedure conducted for the European Union to more accurately estimate agricultural yield. The methodology employed uses field-based plant parameter estimation from atmospherically corrected multitemporal/multispectral LANDSAT-TM data. An agrometeorological plant-production model is used for yield prediction. Based solely on four LANDSAT-derived estimates (between May and August) and daily meteorological data, the grain yield of corn fields was determined for 1995. The modelled yields were compared with results gathered independently within the Special Yield Estimation for 23 test fields in the upper Rhine valley. The agreement between LANDSAT-based estimates (six weeks before harvest) and the Special Yield Estimation (at harvest) shows a relative error of 2.3%. The comparison of the results for single fields shows that, six weeks before harvest, the grain yield of corn was estimated with a mean relative accuracy of 13% using satellite information. The presented methodology can be transferred to other crops and geographical regions. For future applications, hyperspectral sensors show great potential to further enhance the results of yield prediction with remote sensing.
Prediction of County-Level Corn Yields Using an Energy-Crop Growth Index.
NASA Astrophysics Data System (ADS)
Andresen, Jeffrey A.; Dale, Robert F.; Fletcher, Jerald J.; Preckel, Paul V.
1989-01-01
Weather conditions significantly affect corn yields. While weather remains the major uncontrolled variable in crop production, an understanding of the influence of weather on yields can aid in early and accurate assessment of the impact of weather and climate on crop yields and allow for timely agricultural extension advisories to help reduce farm management costs and improve marketing decisions. Based on data for four representative counties in Indiana from 1960 to 1984 (excluding 1970 because of the disastrous southern corn leaf blight), a model was developed to estimate corn (Zea mays L.) yields as a function of several composite soil-crop-weather variables and a technology-trend marker, applied nitrogen fertilizer (N). The model was tested by predicting corn yields for 15 other counties. A daily energy-crop growth (ECG) variable in which different weights were used for the three crop-weather variables that make up the daily ECG (solar radiation intercepted by the canopy, a temperature function, and the ratio of actual to potential evapotranspiration) performed better than when the ECG components were weighted equally. The summation of the weighted daily ECG over a relatively short period (36 days spanning silking) was found to provide the best index for predicting county average corn yield. Numerical estimation results indicate that the ratio of actual to potential evapotranspiration (ET/PET) is much more important than the other two ECG factors in estimating county average corn yield in Indiana.
Specific energy yield comparison between crystalline silicon and amorphous silicon based PV modules
NASA Astrophysics Data System (ADS)
Ferenczi, Toby; Stern, Omar; Hartung, Marianne; Mueggenburg, Eike; Lynass, Mark; Bernal, Eva; Mayer, Oliver; Zettl, Marcus
2009-08-01
As emerging thin-film PV technologies continue to penetrate the market and the number of utility-scale installations substantially increases, a detailed understanding of the performance of the various PV technologies becomes more important. An accurate database for each technology is essential for precise project planning, energy yield prediction and project financing. However, recent publications have shown that it is very difficult to obtain accurate and reliable performance data for these technologies. This paper evaluates previously reported claims that amorphous silicon based PV modules have a higher annual energy yield, relative to their rated performance, than crystalline silicon modules. In order to acquire a detailed understanding of this effect, outdoor module tests were performed at the GE Global Research Center in Munich. In this study we closely examine two of the five reported factors that contribute to the enhanced energy yield of amorphous silicon modules. We find evidence to support each of these factors and evaluate their relative significance. We discuss aspects for improvement in how PV modules are sold and identify areas for further study.
Predictive Monitoring for Improved Management of Glucose Levels
Reifman, Jaques; Rajaraman, Srinivasan; Gribok, Andrei; Ward, W. Kenneth
2007-01-01
Background Recent developments and expected near-future improvements in continuous glucose monitoring (CGM) devices provide opportunities to couple them with mathematical forecasting models to produce predictive monitoring systems for early, proactive glycemia management of diabetes mellitus patients before glucose levels drift to undesirable levels. This article assesses the feasibility of data-driven models to serve as the forecasting engine of predictive monitoring systems. Methods We investigated the capabilities of data-driven autoregressive (AR) models to (1) capture the correlations in glucose time-series data, (2) make accurate predictions as a function of prediction horizon, and (3) be made portable from individual to individual without any need for model tuning. The investigation is performed by employing CGM data from nine type 1 diabetic subjects collected over a continuous 5-day period. Results With CGM data serving as the gold standard, AR model-based predictions of glucose levels assessed over nine subjects with Clarke error grid analysis indicated that, for a 30-minute prediction horizon, individually tuned models yield 97.6 to 100.0% of data in the clinically acceptable zones A and B, whereas cross-subject, portable models yield 95.8 to 99.7% of data in zones A and B. Conclusions This study shows that, for a 30-minute prediction horizon, data-driven AR models provide sufficiently accurate and clinically acceptable estimates of glucose levels for timely, proactive therapy and should be considered as the modeling engine for predictive monitoring of patients with type 1 diabetes mellitus. It also suggests that AR models can be made portable from individual to individual with minor performance penalties, while greatly reducing the burden associated with model tuning and data collection for model development. PMID:19885110
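The AR forecasting idea above can be sketched as follows: fit AR coefficients by least squares on a synthetic CGM-like series sampled every 5 minutes, then iterate the model 6 steps ahead to reach a 30-minute horizon. The model order and the synthetic signal are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def fit_ar(series, order):
    """Least-squares AR fit: x[t] ~ sum_k a_k * x[t-k]."""
    rows = [series[t - order:t][::-1] for t in range(order, len(series))]
    coeffs, *_ = np.linalg.lstsq(np.array(rows), series[order:], rcond=None)
    return coeffs

def forecast(history, coeffs, steps):
    """Iterate the fitted AR model forward `steps` samples."""
    buf = list(history[-len(coeffs):])
    for _ in range(steps):
        recent_first = buf[::-1][:len(coeffs)]
        buf.append(float(np.dot(coeffs, recent_first)))
    return buf[-steps:]

# Synthetic stand-in for a CGM trace: a daily cycle plus sensor noise,
# 5 days at 5-minute sampling (288 samples per day).
rng = np.random.default_rng(1)
t = np.arange(1440)
glucose = 120 + 30 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 2, len(t))

coeffs = fit_ar(glucose[:1200], order=30)
pred = forecast(glucose[:1200], coeffs, steps=6)   # 30-minute horizon
err = abs(pred[-1] - glucose[1205])                # vs. the held-out sample
print(err)
```

In the paper the same machinery is applied per subject (individually tuned) or with one subject's coefficients reused on another (portable); here only the fit-then-iterate mechanics are shown.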
Simulation-Based Height of Burst Map for Asteroid Airburst Damage Prediction
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.; Mathias, Donovan L.; Tarano, Ana M.
2017-01-01
Entry and breakup models predict that airburst in the Earth's atmosphere is likely for asteroids up to approximately 200 meters in diameter. Objects of this size can deposit over 250 megatons of energy into the atmosphere. Fast-running ground damage prediction codes for such events rely heavily upon methods developed from nuclear weapons research to estimate the damage potential for an airburst at altitude. (Collins, 2005; Mathias, 2017; Hills and Goda, 1993). In particular, these tools rely upon the powerful yield scaling laws developed for point-source blasts that are used in conjunction with a Height of Burst (HOB) map to predict ground damage for an airburst of a specific energy at a given altitude. While this approach works extremely well for yields as large as tens of megatons, it becomes less accurate as yields increase to the hundreds of megatons potentially released by larger airburst events. This study revisits the assumptions underlying this approach and shows how atmospheric buoyancy becomes important as yield increases beyond a few megatons. We then use large-scale three-dimensional simulations to construct numerically generated height of burst maps that are appropriate at the higher energy levels associated with the entry of asteroids with diameters of hundreds of meters. These numerically generated HOB maps can then be incorporated into engineering methods for damage prediction, significantly improving their accuracy for asteroids with diameters greater than 80-100 m.
NASA Astrophysics Data System (ADS)
Sankey, J. B.; Kreitler, J.; McVay, J.; Hawbaker, T. J.; Vaillant, N.; Lowe, S. E.
2014-12-01
Wildland fire is a primary threat to watersheds: it can impact water supply through increased sedimentation, declining water quality, and changes in the timing and amount of runoff, leading to increased risk from flood and sediment hazards. It is of great societal importance in the western USA and throughout the world to improve understanding of how changing fire frequency, extent, and location, in conjunction with fuel treatments, will affect watersheds and the ecosystem services they supply to communities. In this work we assess the utility of the InVEST Sediment Retention Model to accurately characterize the vulnerability of burned watersheds to erosion and sedimentation. The InVEST tools are GIS-based implementations of common process models, engineered for high-end computing to allow faster simulation of larger landscapes and incorporation into decision-making. The InVEST Sediment Retention Model is based on common soil erosion models (e.g., RUSLE, the Revised Universal Soil Loss Equation); it determines which areas of the landscape contribute the greatest sediment loads to a hydrological network and, conversely, evaluates the ecosystem service of sediment retention on a watershed basis. We evaluate the accuracy and uncertainties of InVEST predictions of increased sedimentation after fire, using measured post-fire sedimentation rates available for many watersheds in different rainfall regimes throughout the western USA from an existing, large USGS database of post-fire sediment yield [synthesized in Moody J, Martin D (2009) Synthesis of sediment yields after wildland fire in different rainfall regimes in the western United States. International Journal of Wildland Fire 18: 96-115]. The ultimate goal of this work is to calibrate and implement the model to accurately predict variability in post-fire sediment yield as a function of future landscape heterogeneity predicted by wildfire simulations, and future landscape fuel treatment scenarios, within watersheds.
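The InVEST model mentioned above builds on RUSLE-style soil-loss estimates, A = R * K * LS * C * P. A toy per-pixel computation (all factor values invented and uncalibrated) shows how a post-fire increase enters the estimate through the cover-management factor C:

```python
import numpy as np

# RUSLE factors on a small 3x3 "raster"; values are purely illustrative.
R  = np.full((3, 3), 120.0)   # rainfall erosivity
K  = np.full((3, 3), 0.3)     # soil erodibility
LS = np.full((3, 3), 1.5)     # slope length-steepness
P  = np.full((3, 3), 1.0)     # support practice

C_prefire  = np.full((3, 3), 0.05)  # vegetated cover
C_postfire = np.full((3, 3), 0.45)  # cover consumed by fire (assumed value)

loss_pre  = R * K * LS * C_prefire * P    # A = R*K*LS*C*P, per pixel
loss_post = R * K * LS * C_postfire * P

ratio = loss_post.sum() / loss_pre.sum()  # ninefold increase here
print(ratio)
```

With all other factors unchanged, the post-fire/pre-fire loss ratio reduces to the ratio of the C factors, which is the mechanism by which burned-cover maps drive the model's sedimentation increase.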
Kelly, Nicola; McGarry, J Patrick
2012-05-01
The inelastic pressure dependent compressive behaviour of bovine trabecular bone is investigated through experimental and computational analysis. Two loading configurations are implemented, uniaxial and confined compression, providing two distinct loading paths in the von Mises-pressure stress plane. Experimental results reveal distinctive yielding followed by a constant nominal stress plateau for both uniaxial and confined compression. Computational simulation of the experimental tests using the Drucker-Prager and Mohr-Coulomb plasticity models fails to capture the confined compression behaviour of trabecular bone. The high pressure developed during confined compression does not result in plastic deformation using these formulations, and a near elastic response is computed. In contrast, the crushable foam plasticity models provide accurate simulation of the confined compression tests, with distinctive yield and plateau behaviour being predicted. The elliptical yield surfaces of the crushable foam formulations in the von Mises-pressure stress plane accurately characterise the plastic behaviour of trabecular bone. Results reveal that the hydrostatic yield stress is equal to the uniaxial yield stress for trabecular bone, demonstrating the importance of accurate characterisation and simulation of the pressure dependent plasticity. It is also demonstrated in this study that a commercially available trabecular bone analogue material, cellular rigid polyurethane foam, exhibits similar pressure dependent yield behaviour, despite having a lower stiffness and strength than trabecular bone. This study provides a novel insight into the pressure dependent yield behaviour of trabecular bone, demonstrating the inadequacy of uniaxial testing alone. For the first time, crushable foam plasticity formulations are implemented for trabecular bone. 
The enhanced understanding of the inelastic behaviour of trabecular bone established in this study will allow for more realistic simulation of orthopaedic device implantation and failure. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Jha, Prakash K.; Athanasiadis, Panos; Gualdi, Silvio; Trabucco, Antonio; Mereu, Valentina; Shelia, Vakhtang; Hoogenboom, Gerrit
2018-03-01
Ensemble forecasts from dynamic seasonal prediction systems (SPSs) have the potential to improve decision-making for crop management to help cope with interannual weather variability. Because the reliability of crop yield predictions based on seasonal weather forecasts depends on the quality of the forecasts, it is essential to evaluate forecasts prior to agricultural applications. This study analyses the potential of Climate Forecast System version 2 (CFSv2) in predicting the Indian summer monsoon (ISM) for producing meteorological variables relevant to crop modeling. The focus area was Nepal's Terai region, and the local hindcasts were compared with weather station and reanalysis data. The results showed that the CFSv2 model accurately predicts monthly anomalies of daily maximum and minimum air temperature (Tmax and Tmin) as well as incoming total surface solar radiation (Srad). However, the daily climatologies of the respective CFSv2 hindcasts exhibit significant systematic biases compared to weather station data. The CFSv2 is less capable of predicting monthly precipitation anomalies and simulating the respective intra-seasonal variability over the growing season. Nevertheless, the observed daily climatologies of precipitation fall within the ensemble spread of the respective daily climatologies of CFSv2 hindcasts. These limitations in the CFSv2 seasonal forecasts, primarily in precipitation, restrict the potential application for predicting the interannual variability of crop yield associated with weather variability. Despite these limitations, ensemble averaging of the simulated yield using all CFSv2 members after applying bias correction may lead to satisfactory yield predictions.
Absolute dimensions and masses of eclipsing binaries. V. IQ Persei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lacy, C.H.; Frueh, M.L.
1985-08-01
New photometric and spectroscopic observations of the 1.7 day eclipsing binary IQ Persei (B8 + A6) have been analyzed to yield very accurate fundamental properties of the system. Reticon spectroscopic observations obtained at McDonald Observatory were used to determine accurate radial velocities of both stars in this slightly eccentric large light-ratio binary. A new set of VR light curves obtained at McDonald Observatory were analyzed by synthesis techniques, and previously published UBV light curves were reanalyzed to yield accurate photometric orbits. Orbital parameters derived from both sets of photometric observations are in excellent agreement. The absolute dimensions, masses, luminosities, and apsidal motion period (140 yr) derived from these observations agree well with the predictions of theoretical stellar evolution models. The A6 secondary is still very close to the zero-age main sequence. The B8 primary is about one-third of the way through its main-sequence evolution.
Diameter growth of subtropical trees in Puerto Rico
Thomas J. Brandeis
2009-01-01
Puerto Rico's forests consist of young, secondary stands still recovering from a long history of island-wide deforestation that largely abated in the mid-20th century. Limited knowledge about growth rates of subtropical tree species in these forests makes it difficult to accurately predict forest yield, biomass accumulation, and carbon...
An Anisotropic Hardening Model for Springback Prediction
NASA Astrophysics Data System (ADS)
Zeng, Danielle; Xia, Z. Cedric
2005-08-01
As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture the realistic Bauschinger effect at reverse loading, such as when the material passes through die radii or a drawbead during the sheet metal forming process. This model accounts for an anisotropic material yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent the Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test.
Predicting cotton yield of small field plots in a cotton breeding program using UAV imagery data
NASA Astrophysics Data System (ADS)
Maja, Joe Mari J.; Campbell, Todd; Camargo Neto, Joao; Astillo, Philip
2016-05-01
One of the major criteria used for advancing experimental lines in a breeding program is yield performance. Obtaining yield performance data requires machine picking each plot with a cotton picker modified to weigh individual plots. Harvesting thousands of small field plots requires a great deal of time and resources. The efficiency of cotton breeding could be increased significantly, and its cost decreased, with the availability of accurate methods to predict yield performance. This work investigates the feasibility of an image processing technique, using a commercial off-the-shelf (COTS) camera mounted on a small unmanned aerial vehicle (sUAV) to collect standard RGB images, for predicting cotton yield on small plots. An orthomosaic image was generated from multiple images and used to process multiple segmented plots. A Gaussian blur was used to eliminate the high-frequency component of the images, which corresponds to the cotton pixels, and an image subtraction technique was used to generate high-frequency pixel images. The cotton pixels were then separated using k-means clustering with five classes. Based on the current work, the percentage cotton area was computed as the generated high-frequency image (cotton pixels) divided by the total area of the plot. Preliminary results (five flights, three altitudes) showed that cotton cover on multiple pre-selected 227 sq. m plots averaged 8%, which translates to approximately 22.3 kg of cotton. The yield prediction equation generated from the test site was then used on a separate validation site and produced a prediction error of less than 10%. In summary, the results indicate that a COTS camera with an appropriate image processing technique can produce results that are comparable to those of expensive sensors.
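The blur-subtract-cluster pipeline above can be sketched end to end with a dependency-light implementation: a separable Gaussian blur and a tiny 1-D k-means, since the abstract clusters a single high-frequency channel. The synthetic image, blur sigma, and cotton density are illustrative assumptions; only the subtraction step and the five-class k-means follow the abstract.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with edge-truncation correction."""
    r = int(3 * sigma)
    xs = np.arange(-r, r + 1)
    k = np.exp(-xs ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    def conv(m, axis):
        return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), axis, m)
    blurred = conv(conv(img, 0), 1)
    norm = conv(conv(np.ones_like(img), 0), 1)  # compensate zero padding at edges
    return blurred / norm

def kmeans_1d(v, k, iters=50, seed=0):
    """Minimal 1-D k-means on a flat array of values."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(v, size=k, replace=False)
    labels = np.zeros(len(v), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(v[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = v[labels == j].mean()
    return labels, centers

# Synthetic plot image: dark canopy background with ~8% bright cotton pixels.
rng = np.random.default_rng(2)
img = rng.uniform(0.2, 0.4, size=(64, 64))
cotton = rng.random((64, 64)) < 0.08
img[cotton] = 0.95

high_freq = img - gaussian_blur(img, sigma=3)        # subtraction step
labels, centers = kmeans_1d(high_freq.reshape(-1), k=5)

# Treat the cluster with the highest mean high-frequency response as cotton.
cotton_frac = float((labels == int(np.argmax(centers))).mean())
print(cotton_frac)
```

The recovered fraction plays the role of the "percentage cotton area" that the study regresses against weighed plot yield.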
Spectral estimates of intercepted solar radiation by corn and soybean canopies
NASA Technical Reports Server (NTRS)
Gallo, K. P.; Brooks, C. C.; Daughtry, C. S. T.; Bauer, M. E.; Vanderbilt, V. C.
1982-01-01
Attention is given to the development of methods for combining spectral and meteorological data in crop yield models which are capable of providing accurate estimates of crop condition and yields throughout the growing season. The present investigation is concerned with initial tests of these concepts using spectral and agronomic data acquired in controlled experiments. The data were acquired at the Purdue University Agronomy Farm, 10 km northwest of West Lafayette, Indiana. Data were obtained throughout several growing seasons for corn and soybeans. Five methods or models for predicting yields were examined. On the basis of the obtained results, it is concluded that estimating intercepted solar radiation using spectral data is a viable approach for merging spectral and meteorological data in crop yield models.
Quantitation of Staphylococcus aureus in Seawater Using CHROMagar™ SA
Pombo, David; Hui, Jennifer; Kurano, Michelle; Bankowski, Matthew J; Seifried, Steven E
2010-01-01
A microbiological algorithm has been developed to analyze beach water samples for the determination of viable colony forming units (CFU) of Staphylococcus aureus (S. aureus). Membrane filtration enumeration of S. aureus from recreational beach waters using the chromogenic media CHROMagar™SA alone yields a positive predictive value (PPV) of 70%. Presumptive CHROMagar™SA colonies were confirmed as S. aureus by 24-hour tube coagulase test. Combined, these two tests yield a PPV of 100%. This algorithm enables accurate quantitation of S. aureus in seawater in 72 hours and could support risk-prediction processes for recreational waters. A more rapid protocol, utilizing a 4-hour tube coagulase confirmatory test, enables a 48-hour turnaround time with a modest false negative rate of less than 10%. PMID:20222490
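The two-stage confirmation logic above reduces to a simple positive-predictive-value calculation. The counts below are invented for illustration; only the 70% single-test and 100% combined PPV figures mirror the abstract.

```python
def ppv(true_pos, false_pos):
    """Positive predictive value: TP / (TP + FP)."""
    return true_pos / (true_pos + false_pos)

# Stage 1: chromogenic media alone. Of 100 presumptive colonies,
# suppose 70 are true S. aureus (PPV = 70%, as reported).
stage1 = ppv(70, 30)

# Stage 2: the 24-hour tube coagulase test confirms only true
# S. aureus, removing the 30 false positives (PPV = 100%).
stage2 = ppv(70, 0)

print(stage1, stage2)
```

The rapid 48-hour variant trades this perfect confirmation for a small false-negative rate, but the PPV arithmetic is unchanged.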
Hyperspectral sensing to detect the impact of herbicide drift on cotton growth and yield
NASA Astrophysics Data System (ADS)
Suarez, L. A.; Apan, A.; Werth, J.
2016-10-01
Yield loss in crops is often associated with plant disease or external factors such as environment, water supply and nutrient availability. Improper agricultural practices can also introduce risks into the equation. Herbicide drift can be a combination of improper practices and environmental conditions which can create a potential yield loss. As traditional assessment of plant damage is often imprecise and time consuming, the ability of remote and proximal sensing techniques to monitor various bio-chemical alterations in the plant may offer a faster, non-destructive and reliable approach to predict yield loss caused by herbicide drift. This paper examines the prediction capabilities of partial least squares regression (PLS-R) models for estimating yield. Models were constructed with hyperspectral data of a cotton crop sprayed with three simulated doses of the phenoxy herbicide 2,4-D at three different growth stages. Fibre quality, photosynthesis, conductance, and two main hormones, indole acetic acid (IAA) and abscisic acid (ABA) were also analysed. Except for fibre quality and ABA, Spearman correlations have shown that these variables were highly affected by the chemical. Four PLS-R models for predicting yield were developed according to four timings of data collection: 2, 7, 14 and 28 days after the exposure (DAE). As indicated by the model performance, the analysis revealed that 7 DAE was the best time for data collection purposes (RMSEP = 2.6 and R2 = 0.88), followed by 28 DAE (RMSEP = 3.2 and R2 = 0.84). In summary, the results of this study show that it is possible to accurately predict yield after a simulated herbicide drift of 2,4-D on a cotton crop, through the analysis of hyperspectral data, thereby providing a reliable, effective and non-destructive alternative based on the internal response of the cotton leaves.
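The PLS-R modelling above can be illustrated with a compact single-response (PLS1) implementation via the standard NIPALS recursion. The synthetic "spectra", band count, and component count are assumptions for the sketch, not the study's settings.

```python
import numpy as np

def pls1(X, y, n_components):
    """NIPALS PLS1. Returns (x_mean, y_mean, B) so that
    yhat = y_mean + (X - x_mean) @ B."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc                 # covariance-weight direction
        w = w / np.linalg.norm(w)
        t = Xc @ w                    # scores
        tt = t @ t
        p = Xc.T @ t / tt             # X loadings
        qk = (yc @ t) / tt            # y loading
        Xc = Xc - np.outer(t, p)      # deflate
        yc = yc - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)
    return x_mean, y_mean, B

# Synthetic stand-in for hyperspectral plots: yield depends on a few bands.
rng = np.random.default_rng(3)
n, bands = 150, 50
X = rng.normal(size=(n, bands))
y = X[:, :10].sum(axis=1) + rng.normal(scale=0.5, size=n)

x_mean, y_mean, B = pls1(X[:120], y[:120], n_components=3)
pred = y_mean + (X[120:] - x_mean) @ B

r2 = 1 - ((y[120:] - pred) ** 2).sum() / ((y[120:] - y[120:].mean()) ** 2).sum()
rmsep = float(np.sqrt(((y[120:] - pred) ** 2).mean()))
print(round(r2, 3), round(rmsep, 3))
```

The held-out R2 and RMSEP computed here correspond to the validation statistics the study reports for each data-collection timing (e.g., R2 = 0.88 at 7 DAE).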
New NIR Calibration Models Speed Biomass Composition and Reactivity Characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-09-01
Obtaining accurate chemical composition and reactivity (measures of carbohydrate release and yield) information for biomass feedstocks in a timely manner is necessary for the commercialization of biofuels. This highlight describes NREL's work to use near-infrared (NIR) spectroscopy and partial least squares multivariate analysis to develop calibration models to predict the feedstock composition and the release and yield of soluble carbohydrates generated by a bench-scale dilute acid pretreatment and enzymatic hydrolysis assay. This highlight is being developed for the September 2015 Alliance S&T Board meeting.
NASA Astrophysics Data System (ADS)
Ogawa, Tatsuhiko; Hashimoto, Shintaro; Sato, Tatsuhiko; Niita, Koji
2014-06-01
A new nuclear de-excitation model, intended for accurate simulation of isomeric transition of excited nuclei, was incorporated into PHITS and applied to various situations to clarify the impact of the model. The case studies show that precise treatment of gamma de-excitation and consideration for isomer production are important for various applications such as detector performance prediction, radiation shielding calculations and the estimation of radioactive inventory including isomers.
Yue, Zheng-Bo; Zhang, Meng-Lin; Sheng, Guo-Ping; Liu, Rong-Hua; Long, Ying; Xiang, Bing-Ren; Wang, Jin; Yu, Han-Qing
2010-04-01
A near-infrared-reflectance (NIR) spectroscopy-based method is established to determine the main components of aquatic plants as well as their anaerobic rumen biodegradability. The developed method is more rapid and accurate compared to the conventional chemical analysis and biodegradability tests. Moisture, volatile solid, Klason lignin and ash in entire aquatic plants could be accurately predicted using this method with coefficient of determination (r(2)) values of 0.952, 0.916, 0.939 and 0.950, respectively. In addition, the anaerobic rumen biodegradability of aquatic plants, represented as biogas and methane yields, could also be predicted well. The algorithm of continuous wavelet transform for the NIR spectral data pretreatment is able to greatly enhance the robustness and predictive ability of the NIR spectral analysis. These results indicate that NIR spectroscopy could be used to predict the main components of aquatic plants and their anaerobic biodegradability. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
Multiple-Instance Regression with Structured Data
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Lane, Terran; Roper, Alex
2008-01-01
We present a multiple-instance regression algorithm that models internal bag structure to identify the items most relevant to the bag labels. Multiple-instance regression (MIR) operates on a set of bags with real-valued labels, each containing a set of unlabeled items, in which the relevance of each item to its bag label is unknown. The goal is to predict the labels of new bags from their contents. Unlike previous MIR methods, MI-ClusterRegress can operate on bags that are structured in that they contain items drawn from a number of distinct (but unknown) distributions. MI-ClusterRegress simultaneously learns a model of the bag's internal structure, the relevance of each item, and a regression model that accurately predicts labels for new bags. We evaluated this approach on the challenging MIR problem of crop yield prediction from remote sensing data. MI-ClusterRegress provided predictions that were more accurate than those obtained with non-multiple-instance approaches or MIR methods that do not model the bag structure.
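The core idea — cluster the items, summarise each bag per cluster, then regress on the summaries — can be sketched as follows. This is a simplified stand-in for MI-ClusterRegress, not the authors' algorithm; the tiny k-means and the synthetic bags are purely illustrative.

```python
import numpy as np

def kmeans(pts, k, iters=50, seed=0):
    """Tiny k-means (enough for a sketch; use a library in practice)."""
    rng = np.random.default_rng(seed)
    centers = pts[rng.choice(len(pts), size=k, replace=False)].copy()
    for _ in range(iters):
        lab = ((pts[:, None] - centers) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (lab == j).any():
                centers[j] = pts[lab == j].mean(0)
    return centers

def bag_features(bag, centers):
    """Represent a bag by the mean of its items in each cluster."""
    lab = ((bag[:, None] - centers) ** 2).sum(-1).argmin(1)
    parts = [bag[lab == j].mean(0) if (lab == j).any() else centers[j]
             for j in range(len(centers))]
    return np.concatenate(parts)

# synthetic bags: only the first item cluster determines the label
rng = np.random.default_rng(1)
bags, labels = [], []
for _ in range(60):
    relevant = rng.normal(0.0, 1.0, size=(5, 2))   # items that set the label
    nuisance = rng.normal(6.0, 1.0, size=(5, 2))   # items that do not
    bags.append(np.vstack([relevant, nuisance]))
    labels.append(3.0 * relevant[:, 0].mean() + 1.0)
labels = np.array(labels)

centers = kmeans(np.vstack(bags), k=2)
Phi = np.array([bag_features(b, centers) for b in bags])
Phi1 = np.column_stack([Phi, np.ones(len(Phi))])   # add intercept
w = np.linalg.solve(Phi1[:40].T @ Phi1[:40] + 1e-6 * np.eye(Phi1.shape[1]),
                    Phi1[:40].T @ labels[:40])     # ridge fit on 40 bags
pred = Phi1[40:] @ w                               # predict held-out bags
resid = labels[40:] - pred
r2 = 1 - (resid @ resid) / ((labels[40:] - labels[40:].mean()) ** 2).sum()
```

Because the label depends only on one cluster's items, the per-cluster summaries let a plain linear model recover it, which is the intuition behind modeling bag structure.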
Development of a recursion RNG-based turbulence model
NASA Technical Reports Server (NTRS)
Zhou, YE; Vahala, George; Thangam, S.
1993-01-01
Reynolds stress closure models based on the recursion renormalization group theory are developed for the prediction of turbulent separated flows. The proposed model uses a finite wavenumber truncation scheme to account for the spectral distribution of energy. In particular, the model incorporates effects of both local and nonlocal interactions. The nonlocal interactions are shown to yield a contribution identical to that from the epsilon-renormalization group (RNG), while the local interactions introduce higher order dispersive effects. A formal analysis of the model is presented and its ability to accurately predict separated flows is analyzed from a combined theoretical and computational standpoint. Turbulent flow past a backward facing step is chosen as a test case and the results obtained based on detailed computations demonstrate that the proposed recursion-RNG model with finite cut-off wavenumber can yield very good predictions for the backstep problem.
Modeling Actual Evapotranspiration From Forested Watersheds Across the Southeastern United States
Jianbiao Lu; Ge Sun; Steven G. McNulty; Devendra M. Amatya
2003-01-01
About 50 to 80 percent of precipitation in the southeastern United States returns to the atmosphere by evapotranspiration. As evapotranspiration is a major component of forest water balances, quantifying it accurately is critical to predicting the effects of forest management and global change on water, sediment, and nutrient yield from forested watersheds. However...
Diameter growth of subtropical trees in Puerto Rico
Thomas J. Brandeis
2009-01-01
Puerto Rico's forests consist of young, secondary stands still recovering from a long history of island-wide deforestation that largely abated in the mid-20th century. Limited knowledge about growth rates of subtropical tree species in these forests makes it difficult to accurately predict forest yield, biomass accumulation, and carbon sequestration. This study...
Ren, Jianqiang; Chen, Zhongxin; Tang, Huajun
2006-12-01
Taking Jining City of Shandong Province, one of the most important winter wheat production regions in the Huanghuaihai Plain, as an example, winter wheat yield was estimated by using 250 m MODIS-NDVI data smoothed by a Savitzky-Golay filter. The NDVI values between 0.20 and 0.80 were selected, and the sum of NDVI values for each county was calculated to build its relationship with winter wheat yield. By using the stepwise regression method, a linear regression model between NDVI and winter wheat yield was established, with the precision validated by ground survey data. The results showed that the relative error of predicted yield was between -3.6% and 3.9%, suggesting that the method was relatively accurate and feasible.
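The pipeline described — Savitzky-Golay smoothing of an NDVI time series, thresholding to the 0.20-0.80 window, summing, and regressing the sum against yield — can be sketched with synthetic data. The seasonal curve shape, yield coefficients, and county count below are all invented for illustration.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(2)

def season_ndvi_sum(peak, noise_sd=0.04):
    """Simulate a noisy seasonal NDVI profile, smooth it, and sum the
    values falling in the 0.20-0.80 window used by the study."""
    t = np.arange(46)                                 # composite periods
    curve = 0.18 + peak * np.exp(-0.5 * ((t - 22) / 7.0) ** 2)
    noisy = curve + noise_sd * rng.normal(size=t.size)
    smooth = savgol_filter(noisy, window_length=11, polyorder=3)
    keep = (smooth >= 0.20) & (smooth <= 0.80)
    return smooth[keep].sum()

# hypothetical counties: yield rises linearly with the seasonal NDVI sum
peaks = np.linspace(0.35, 0.60, 10)
sums = np.array([season_ndvi_sum(p) for p in peaks])
yields = 0.30 * sums + 1.2 + rng.normal(0, 0.05, peaks.size)  # t/ha, made up

slope, intercept = np.polyfit(sums, yields, 1)        # the linear model
pred = slope * sums + intercept
rel_err = (pred - yields) / yields                    # paper's error metric
```

The relative errors stay within a few percent here because the synthetic noise is small; the study reports -3.6% to 3.9% on real survey data.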
Chan, Chung-Hung; Yusoff, Rozita; Ngoh, Gek-Cheng
2013-09-01
A modeling technique based on absorbed microwave energy was proposed to model microwave-assisted extraction (MAE) of antioxidant compounds from cocoa (Theobroma cacao L.) leaves. By adapting a suitable extraction model on the basis of the microwave energy absorbed during extraction, the model can predict the extraction profile of MAE at various microwave irradiation powers (100-600 W) and solvent loadings (100-300 ml). Verification with experimental data confirmed that the prediction was accurate in capturing the extraction profile of MAE (R-square value greater than 0.87). In addition, the predicted yields from the model showed good agreement with the experimental results, with less than 10% deviation observed. Furthermore, suitable extraction times to ensure high extraction yield at various MAE conditions can be estimated based on absorbed microwave energy. The estimation is feasible as more than 85% of active compounds can be extracted when compared with the conventional extraction technique. Copyright © 2013 Elsevier Ltd. All rights reserved.
Forecasting volcanic air pollution in Hawaii: Tests of time series models
NASA Astrophysics Data System (ADS)
Reikard, Gordon
2012-12-01
Volcanic air pollution, known as vog (volcanic smog) has recently become a major issue in the Hawaiian islands. Vog is caused when volcanic gases react with oxygen and water vapor. It consists of a mixture of gases and aerosols, which include sulfur dioxide and other sulfates. The source of the volcanic gases is the continuing eruption of Mount Kilauea. This paper studies predicting vog using statistical methods. The data sets include time series for SO2 and SO4, over locations spanning the west, south and southeast coasts of Hawaii, and the city of Hilo. The forecasting models include regressions and neural networks, and a frequency domain algorithm. The most typical pattern for the SO2 data is for the frequency domain method to yield the most accurate forecasts over the first few hours, and at the 24 h horizon. The neural net places second. For the SO4 data, the results are less consistent. At two sites, the neural net generally yields the most accurate forecasts, except at the 1 and 24 h horizons, where the frequency domain technique wins narrowly. At one site, the neural net and the frequency domain algorithm yield comparable errors over the first 5 h, after which the neural net dominates. At the remaining site, the frequency domain method is more accurate over the first 4 h, after which the neural net achieves smaller errors. For all the series, the average errors are well within one standard deviation of the actual data at all the horizons. However, the errors also show irregular outliers. In essence, the models capture the central tendency of the data, but are less effective in predicting the extreme events.
Ramstein, Guillaume P.; Evans, Joseph; Kaeppler, Shawn M.; Mitchell, Robert B.; Vogel, Kenneth P.; Buell, C. Robin; Casler, Michael D.
2016-01-01
Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass, and meet the goals of a substantial displacement of petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families’ parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, the account of linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs. PMID:26869619
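A minimal numpy sketch of genomic prediction in the spirit described here — ridge-regression BLUP of marker effects on simulated genotypes — is shown below. It is not the study's procedure (no exome-capture data, no marker-correlation transformation), and the shrinkage parameter is a rough guess rather than an estimate.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, n_qtl = 400, 500, 20
freq = rng.uniform(0.1, 0.9, size=m)
geno = rng.binomial(2, freq, size=(n, m)).astype(float)   # 0/1/2 allele counts
Z = geno - 2.0 * freq                                     # centred marker matrix

effects = np.zeros(m)
qtl = rng.choice(m, size=n_qtl, replace=False)
effects[qtl] = rng.normal(0.0, 0.3, size=n_qtl)
g = Z @ effects                                           # true genetic values
y = g + rng.normal(0.0, 0.5 * g.std(), size=n)            # phenotypes, h2 ~ 0.8

# ridge-regression BLUP of marker effects via the dual (n x n) system;
# lam is a guessed ratio of residual to per-marker genetic variance
lam = 50.0
train, test = np.arange(300), np.arange(300, 400)
Zt = Z[train]
beta_hat = Zt.T @ np.linalg.solve(Zt @ Zt.T + lam * np.eye(len(train)), y[train])
pred = Z[test] @ beta_hat
acc = np.corrcoef(pred, g[test])[0, 1]   # accuracy against true genetic values
```

The dual form `Z'(ZZ' + lam*I)^-1 y` is used because the number of markers far exceeds the number of records, the usual situation in genomic selection.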
Can the electronegativity equalization method predict spectroscopic properties?
Verstraelen, T; Bultinck, P
2015-02-05
The electronegativity equalization method is classically used as a method allowing the fast generation of atomic charges using a set of calibrated parameters and provided knowledge of the molecular structure. Recently, it has started being used for the calculation of other reactivity descriptors and for the development of polarizable and reactive force fields. For such applications, it is of interest to know whether the method, through the inclusion of the molecular geometry in the Taylor expansion of the energy, would also allow sufficiently accurate predictions of spectroscopic data. In this work, relevant quantities for IR spectroscopy are considered, namely the dipole derivatives and the Cartesian Hessian. Despite careful calibration of parameters for this specific task, it is shown that the current models yield insufficiently accurate results. Copyright © 2013 Elsevier B.V. All rights reserved.
Ehrhardt, Fiona; Soussana, Jean-François; Bellocchi, Gianni; Grace, Peter; McAuliffe, Russel; Recous, Sylvie; Sándor, Renáta; Smith, Pete; Snow, Val; de Antoni Migliorati, Massimiliano; Basso, Bruno; Bhatia, Arti; Brilli, Lorenzo; Doltra, Jordi; Dorich, Christopher D; Doro, Luca; Fitton, Nuala; Giacomini, Sandro J; Grant, Brian; Harrison, Matthew T; Jones, Stephanie K; Kirschbaum, Miko U F; Klumpp, Katja; Laville, Patricia; Léonard, Joël; Liebig, Mark; Lieffering, Mark; Martin, Raphaël; Massad, Raia S; Meier, Elizabeth; Merbold, Lutz; Moore, Andrew D; Myrgiotis, Vasileios; Newton, Paul; Pattey, Elizabeth; Rolinski, Susanne; Sharp, Joanna; Smith, Ward N; Wu, Lianhai; Zhang, Qing
2018-02-01
Simulation models are extensively used to predict agricultural productivity and greenhouse gas emissions. However, the uncertainties of (reduced) model ensemble simulations have not been assessed systematically for variables affecting food security and climate change mitigation, within multi-species agricultural contexts. We report an international model comparison and benchmarking exercise, showing the potential of multi-model ensembles to predict productivity and nitrous oxide (N2O) emissions for wheat, maize, rice and temperate grasslands. Using a multi-stage modelling protocol, from blind simulations (stage 1) to partial (stages 2-4) and full calibration (stage 5), 24 process-based biogeochemical models were assessed individually or as an ensemble against long-term experimental data from four temperate grassland and five arable crop rotation sites spanning four continents. Comparisons were performed by reference to the experimental uncertainties of observed yields and N2O emissions. Results showed that across sites and crop/grassland types, 23%-40% of the uncalibrated individual models were within two standard deviations (SD) of observed yields, while 42% (rice) to 96% (grasslands) of the models were within 1 SD of observed N2O emissions. At stage 1, ensembles formed from the three models with the lowest prediction errors predicted both yields and N2O emissions within experimental uncertainties for 44% and 33% of the crop and grassland growth cycles, respectively. Partial model calibration (stages 2-4) markedly reduced prediction errors of the full model ensemble E-median for crop grain yields (from 36% at stage 1 down to 4% on average) and grassland productivity (from 44% to 27%), and to a lesser and more variable extent for N2O emissions. Yield-scaled N2O emissions (N2O emissions divided by crop yields) were ranked accurately by three-model ensembles across crop species and field sites.
The potential of using process-based model ensembles to jointly predict productivity and N2O emissions at field scale is discussed. © 2017 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Lian, Junhe; Shen, Fuhui; Liu, Wenqi; Münstermann, Sebastian
2018-05-01
Constitutive model development has been driven towards very accurate, fine-resolution descriptions of material behaviour in response to changes in various environmental variables. The evolving features of anisotropic behaviour during deformation have therefore drawn particular attention due to their possible impact on the sheet metal forming industry. An evolving non-associated Hill48 (enHill48) model was recently proposed and applied to forming limit prediction by coupling with the modified maximum force criterion. On the one hand, that study showed the importance of including anisotropic evolution for accurate forming limit prediction. On the other hand, it also showed that the enHill48 model introduced an instability region that suddenly decreases the formability. Therefore, in this study, an alternative model that is based on the associated flow rule and provides similar anisotropic predictive capability is extended to capture the evolving effects and further applied to forming limit prediction. The final results are compared with experimental data as well as with the results of the enHill48 model.
Pai, Priyadarshini P; Mondal, Sukanta
2016-10-01
Proteins interact with carbohydrates to perform various cellular interactions. Of the many carbohydrate ligands that proteins bind, mannose constitutes an important class, playing important roles in host defense mechanisms. Accurate identification of mannose-interacting residues (MIR) may provide important clues to decipher the underlying mechanisms of protein-mannose interactions during infections. This study proposes an approach using an ensemble of base classifiers for prediction of MIR using their evolutionary information in the form of a position-specific scoring matrix. The base classifiers are random forests trained on different subsets of the training data set Dset128 using 10-fold cross-validation. The optimized ensemble of base classifiers, MOWGLI, is then used to predict MIR on protein chains of the test data set Dtestset29, where it showed a promising performance with 92.0% accurate prediction. An overall improvement of 26.6% in precision was observed upon comparison with the state of the art. It is hoped that this approach, yielding enhanced predictions, could eventually be used for applications in drug design and vaccine development.
Measurement and prediction of model-rotor flow fields
NASA Technical Reports Server (NTRS)
Owen, F. K.; Tauber, M. E.
1985-01-01
This paper shows that a laser velocimeter can be used to measure accurately the three-component velocities induced by a model rotor at transonic tip speeds. The measurements, which were made at Mach numbers from 0.85 to 0.95 and at zero advance ratio, yielded high-resolution, orthogonal velocity values. The measured velocities were used to check the ability of the ROT22 full-potential rotor code to predict accurately the transonic flow field in the crucial region around and beyond the tip of a high-speed rotor blade. The good agreement between the calculated and measured velocities established the code's ability to predict the off-blade flow field at transonic tip speeds. This supplements previous comparisons in which surface pressures were shown to be well predicted on two different tips at advance ratios to 0.45, especially at the critical 90 deg azimuthal blade position. These results demonstrate that the ROT22 code can be used with confidence to predict the important tip-region flow field, including the occurrence, strength, and location of shock waves causing high drag and noise.
Ferragina, A; Cipolat-Gotet, C; Cecchinato, A; Bittante, G
2013-01-01
Cheese yield is an important technological trait in the dairy industry in many countries. The aim of this study was to evaluate the effectiveness of Fourier-transform infrared (FTIR) spectral analysis of fresh unprocessed milk samples for predicting cheese yield and nutrient recovery traits. A total of 1,264 model cheeses were obtained from 1,500-mL milk samples collected from individual Brown Swiss cows. Individual measurements of 7 new cheese yield-related traits were obtained from the laboratory cheese-making procedure, including the fresh cheese yield, total solid cheese yield, and the water retained in curd, all as a percentage of the processed milk, and nutrient recovery (fat, protein, total solids, and energy) in the curd as a percentage of the same nutrient contained in the milk. All individual milk samples were analyzed using a MilkoScan FT6000 over the spectral range from 5,000 to 900 cm(-1). Two spectral acquisitions were carried out for each sample and the results were averaged before data analysis. Different chemometric models were fitted and compared with the aim of improving the accuracy of the calibration equations for predicting these traits. The most accurate predictions were obtained for total solid cheese yield and fresh cheese yield, which exhibited coefficients of determination between the predicted and measured values in cross-validation (1-VR) of 0.95 and 0.83, respectively. A less favorable result was obtained for water retained in curd (1-VR=0.65). Promising results were obtained for recovered protein (1-VR=0.81), total solids (1-VR=0.86), and energy (1-VR=0.76), whereas recovered fat exhibited a low accuracy (1-VR=0.41).
As FTIR spectroscopy is a rapid, cheap, high-throughput technique that is already used to collect standard milk recording data, these FTIR calibrations for cheese yield and nutrient recovery highlight additional potential applications of the technique in the dairy industry, especially for monitoring cheese-making processes and milk payment systems. In addition, the prediction models can be used to provide breeding organizations with information on new phenotypes for cheese yield and milk nutrient recovery, potentially allowing these traits to be enhanced through selection. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Optimising the Encapsulation of an Aqueous Bitter Melon Extract by Spray-Drying
Tan, Sing Pei; Kha, Tuyen Chan; Parks, Sophie; Stathopoulos, Costas; Roach, Paul D.
2015-01-01
Our aim was to optimise the encapsulation of an aqueous bitter melon extract by spray-drying with maltodextrin (MD) and gum Arabic (GA). The response surface methodology models accurately predicted the process yield and retentions of bioactive concentrations and activity (R2 > 0.87). The optimal formulation was predicted and validated as 35% (w/w) stock solution (MD:GA, 1:1) and a ratio of 1.5:1 g/g of the extract to the stock solution. The spray-dried powder had a high process yield (66.2% ± 9.4%) and high retention (>79.5% ± 8.4%) and the quality of the powder was high. Therefore, the bitter melon extract was well encapsulated into a powder using MD/GA and spray-drying. PMID:28231214
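A response-surface fit of the kind used here can be sketched as a full second-order least-squares model whose stationary point gives the predicted optimum. The two factors, their ranges, and the "true" surface below are invented for illustration (the synthetic optimum is placed near the paper's reported 35% and 1.5:1 values); this is not the study's data or design.

```python
import numpy as np

rng = np.random.default_rng(4)
# hypothetical design region; the synthetic surface peaks at 35 % w/w, 1.5 g/g
conc = rng.uniform(20.0, 50.0, size=30)       # carrier concentration, % w/w
ratio = rng.uniform(0.5, 2.5, size=30)        # extract-to-carrier ratio, g/g
y = (70.0 - 0.02 * (conc - 35.0) ** 2 - 8.0 * (ratio - 1.5) ** 2
     + rng.normal(0.0, 0.5, size=30))         # process yield, %

# full second-order model: y = b0 + b1*c + b2*r + b3*c^2 + b4*r^2 + b5*c*r
X = np.column_stack([np.ones_like(conc), conc, ratio,
                     conc ** 2, ratio ** 2, conc * ratio])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# stationary point: set both partial derivatives to zero and solve
H = np.array([[2.0 * b[3], b[5]],
              [b[5], 2.0 * b[4]]])
opt_conc, opt_ratio = np.linalg.solve(H, -b[1:3])
```

Checking that `H` is negative definite confirms the stationary point is a maximum rather than a saddle, which is a standard RSM diagnostic.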
Predicting β-Turns in Protein Using Kernel Logistic Regression
Elbashir, Murtada Khalafallah; Sheng, Yu; Wang, Jianxin; Wu, FangXiang; Li, Min
2013-01-01
A β-turn is a secondary protein structure type that plays a significant role in protein configuration and function. On average, 25% of amino acids in protein structures are located in β-turns. It is very important to develop an accurate and efficient method for β-turn prediction. Most of the current successful β-turn prediction methods use support vector machines (SVMs) or neural networks (NNs). Kernel logistic regression (KLR) is a powerful classification technique that has been applied successfully in many classification problems. However, it is often not used for β-turn classification, mainly because it is computationally expensive. In this paper, we used KLR to obtain sparse β-turn predictions in a short computation time. Secondary structure information and position-specific scoring matrices (PSSMs) are utilized as input features. We achieved a Qtotal of 80.7% and an MCC of 50% on the BT426 dataset. These results show that the KLR method with the right algorithm can yield performance equivalent to or even better than NNs and SVMs in β-turn prediction. In addition, KLR yields probabilistic outcomes and has a well-defined extension to the multiclass case. PMID:23509793
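A minimal kernel logistic regression can be written as a penalised negative log-likelihood in the dual coefficients and handed to a quasi-Newton optimiser. The sketch below uses an RBF kernel on a toy nonlinear problem in place of PSSM and secondary-structure features; it is illustrative, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_klr(X, y, gamma=1.0, lam=1e-2):
    """Kernel logistic regression: penalised NLL over dual coefficients a,
    where f = K @ a and the penalty is (lam/2) * a' K a."""
    K = rbf(X, X, gamma)
    def obj(a):
        f = K @ a
        p = 1.0 / (1.0 + np.exp(-f))
        nll = np.logaddexp(0.0, f).sum() - y @ f     # labels y in {0, 1}
        grad = K @ (p - y) + lam * (K @ a)
        return nll + 0.5 * lam * (a @ K @ a), grad
    res = minimize(obj, np.zeros(len(y)), jac=True, method="L-BFGS-B")
    return res.x

def predict_proba(alpha, Xtrain, Xnew, gamma=1.0):
    return 1.0 / (1.0 + np.exp(-rbf(Xnew, Xtrain, gamma) @ alpha))

# toy nonlinear problem standing in for real feature vectors
rng = np.random.default_rng(5)
X = rng.uniform(-2.0, 2.0, size=(300, 2))
y = ((X ** 2).sum(1) < 1.5).astype(float)     # positive iff inside a disc

alpha = fit_klr(X[:200], y[:200])
acc = float(((predict_proba(alpha, X[:200], X[200:]) > 0.5) == y[200:]).mean())
```

As the abstract notes, KLR returns probabilities directly, so `predict_proba` needs no extra calibration step, unlike a raw SVM decision value.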
Baskar, Gurunathan; Sathya, Shree Rajesh K
2011-01-01
Statistical and evolutionary optimization of media composition was employed for the production of medicinal exopolysaccharide (EPS) by the Lingzhi or Reishi medicinal mushroom Ganoderma lucidum MTCC 1039 using soya bean meal flour as a low-cost substrate. Soya bean meal flour, ammonium chloride, glucose, and pH were identified as the most important variables for EPS yield using the two-level Plackett-Burman design and further optimized using the central composite design (CCD) and the artificial neural network (ANN)-linked genetic algorithm (GA). The high coefficient of determination of the ANN model (R² = 0.982) indicates that it was more accurate than the second-order polynomial model of the CCD (R² = 0.91) in representing the effect of media composition on EPS yield. The predicted optimum media composition using the ANN-linked GA was soybean meal flour 2.98%, glucose 3.26%, ammonium chloride 0.25%, and initial pH 7.5, for a maximum predicted EPS yield of 1005.55 mg/L. The experimental EPS yield obtained using the predicted optimum media composition was 1012.36 mg/L, which validates the high degree of accuracy of evolutionary optimization for enhanced production of EPS by submerged fermentation of G. lucidum.
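The GA half of an ANN-linked GA can be sketched as a simple real-coded genetic algorithm maximising a surrogate response surface. The quadratic surrogate below merely stands in for a trained ANN, with its optimum placed near the composition reported above; the bounds, operators, and population settings are illustrative guesses, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(6)
LO = np.array([1.0, 1.0, 0.05, 5.0])   # soy %, glucose %, NH4Cl %, initial pH
HI = np.array([5.0, 5.0, 0.50, 9.0])

def surrogate_yield(x):
    """Quadratic stand-in for the trained ANN (values invented);
    peak placed near the composition reported in the abstract."""
    soy, glc, nh4, ph = x
    return (1000.0 - 40.0 * (soy - 3.0) ** 2 - 30.0 * (glc - 3.3) ** 2
            - 5000.0 * (nh4 - 0.25) ** 2 - 20.0 * (ph - 7.5) ** 2)

def ga_maximise(f, pop_size=60, gens=80):
    pop = rng.uniform(LO, HI, size=(pop_size, 4))
    best_x, best_f = None, -np.inf
    for gen in range(gens):
        fit = np.array([f(ind) for ind in pop])
        if fit.max() > best_f:
            best_f, best_x = fit.max(), pop[fit.argmax()].copy()
        # tournament selection
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = pop[np.where(fit[i] > fit[j], i, j)]
        # blend crossover with a shifted copy of the parent pool
        w = rng.uniform(size=(pop_size, 4))
        children = w * parents + (1.0 - w) * np.roll(parents, 1, axis=0)
        # gaussian mutation that shrinks as generations pass
        sigma = 0.1 * (HI - LO) * (1.0 - gen / gens)
        children += rng.normal(size=children.shape) * sigma
        pop = np.clip(children, LO, HI)
        pop[0] = best_x                    # elitism: keep the best found
    return best_x, best_f

best, best_yield = ga_maximise(surrogate_yield)
```

Elitism guarantees the best fitness never regresses, which is why the GA reliably homes in on the surrogate's optimum despite the random operators.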
Growth and yield model application in tropical rain forest management
James Atta-Boateng; John W., Jr. Moser
2000-01-01
Analytical tools are needed to evaluate the impact of management policies on the sustainable use of rain forest. Optimal decisions concerning the level of management inputs require accurate predictions of output at all relevant input levels. Using growth data from 40 1-hectare permanent plots obtained from the semi-deciduous forest of Ghana, a system of 77 differential...
Accurate FRET Measurements within Single Diffusing Biomolecules Using Alternating-Laser Excitation
Lee, Nam Ki; Kapanidis, Achillefs N.; Wang, You; Michalet, Xavier; Mukhopadhyay, Jayanta; Ebright, Richard H.; Weiss, Shimon
2005-01-01
Fluorescence resonance energy transfer (FRET) between a donor (D) and an acceptor (A) at the single-molecule level currently provides qualitative information about distance, and quantitative information about kinetics of distance changes. Here, we used the sorting ability of confocal microscopy equipped with alternating-laser excitation (ALEX) to measure accurate FRET efficiencies and distances from single molecules, using corrections that account for cross-talk terms that contaminate the FRET-induced signal, and for differences in the detection efficiency and quantum yield of the probes. ALEX yields accurate FRET independent of instrumental factors, such as excitation intensity or detector alignment. Using DNA fragments, we showed that ALEX-based distances agree well with predictions from a cylindrical model of DNA; ALEX-based distances fit better to theory than distances obtained at the ensemble level. Distance measurements within transcription complexes agreed well with ensemble-FRET measurements, and with structural models based on ensemble-FRET and x-ray crystallography. ALEX can benefit structural analysis of biomolecules, especially when such molecules are inaccessible to conventional structural methods due to heterogeneity or transient nature. PMID:15653725
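The corrections sketched below follow the general ALEX logic — subtract leakage and direct-excitation cross-talk from the FRET channel, then apply a gamma factor for detection-efficiency and quantum-yield differences — with distance recovered from the Förster relation E = 1/(1 + (R/R0)^6). The channel names, correction factors, and counts are illustrative, not the paper's calibration.

```python
import numpy as np

def corrected_fret(i_aem_dexc, i_dem_dexc, i_aem_aexc,
                   leakage=0.0, direct=0.0, gamma=1.0):
    """Cross-talk- and gamma-corrected FRET efficiency (ALEX-style sketch).
    i_aem_dexc: acceptor emission under donor excitation (FRET channel)
    i_dem_dexc: donor emission under donor excitation
    i_aem_aexc: acceptor emission under acceptor excitation"""
    f_fret = i_aem_dexc - leakage * i_dem_dexc - direct * i_aem_aexc
    return f_fret / (f_fret + gamma * i_dem_dexc)

def fret_distance(E, R0=65.0):
    """Distance from efficiency via E = 1 / (1 + (R/R0)**6), R0 in angstrom."""
    return R0 * (1.0 / E - 1.0) ** (1.0 / 6.0)

# no cross-talk, gamma = 1: equal FRET and donor counts give E = 0.5,
# i.e. a separation equal to the Foerster radius R0
E = corrected_fret(500.0, 500.0, 800.0)
R = fret_distance(E)

# with 8% leakage and 2.5% direct excitation, the raw FRET channel is
# inflated; the corrections recover the same underlying efficiency
E_corr = corrected_fret(560.0, 500.0, 800.0, leakage=0.08, direct=0.025)
```

Without the correction terms, the second measurement would read E = 560/1060 ≈ 0.53 instead of 0.50, which is exactly the kind of bias the abstract says ALEX removes.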
NASA Astrophysics Data System (ADS)
Nayak, Kapileswar; Das, Sushanta; Nanavati, Hemant
2008-01-01
We present a framework for the development of elasticity and photoelasticity relationships for polyethylene terephthalate fiber networks, incorporating aspects of the primary molecular structure. Semicrystalline polymeric fiber networks are modeled as sequentially arranged crystalline and amorphous regions. Rotational isomeric states-Monte Carlo simulations of amorphous chains of up to 360 bonds (degree of polymerization, DP =60), confined between and bridging infinite impenetrable crystalline walls, have been characterized by Ω, the probability density of the intercrystal separation h, and Δβ, the polarizability anisotropy. lnΩ and Δβ have been modeled as functions of h, yielding the chain deformation relationships. The development has been extended to the fiber network to yield the photoelasticity relationships. We execute our framework by fitting to experimental stress-elongation data and employing the single fitted parameter to directly predict the birefringence-elongation behavior, without any further fitting. Incorporating the effect of strain-induced crystallization into the framework makes it physically more meaningful and yields accurate predictions of the birefringence-elongation behavior.
On the distance of genetic relationships and the accuracy of genomic prediction in pig breeding.
Meuwissen, Theo H E; Odegard, Jorgen; Andersen-Ranberg, Ina; Grindflek, Eli
2014-08-01
With the advent of genomic selection, alternative relationship matrices are used in animal breeding, which vary in their coverage of distant relationships due to old common ancestors. Relationships based on pedigree (A) and linkage analysis (GLA) cover only recent relationships because of the limited depth of the known pedigree. Relationships based on identity-by-state (G) include relationships up to the age of the SNP (single nucleotide polymorphism) mutations. We hypothesised that the latter relationships were too old, since QTL (quantitative trait locus) mutations for traits under selection were probably more recent than the SNPs on a chip, which are typically selected for high minor allele frequency. In addition, A and GLA relationships are too recent to cover genetic differences accurately. Thus, we devised a relationship matrix that considered intermediate-aged relationships and compared all these relationship matrices for their accuracy of genomic prediction in a pig breeding situation. Haplotypes were constructed and used to build a haplotype-based relationship matrix (GH), which considers more intermediate-aged relationships, since haplotypes recombine more quickly than SNPs mutate. Dense genotypes (38 453 SNPs) on 3250 elite breeding pigs were combined with phenotypes for growth rate (2668 records), lean meat percentage (2618), weight at three weeks of age (7387) and number of teats (5851) to estimate breeding values for all animals in the pedigree (8187 animals) using the aforementioned relationship matrices. Phenotypes on the youngest 424 to 486 animals were masked and predicted in order to assess the accuracy of the alternative genomic predictions. Correlations between the relationships and regressions of older on younger relationships revealed that the age of the relationships increased in the order A, GLA, GH and G. Use of genomic relationship matrices yielded significantly higher prediction accuracies than A. 
GH and G did not differ significantly, but both were significantly more accurate than GLA. Our hypothesis that intermediate-aged relationships yield more accurate genomic predictions than G was confirmed for two of four traits, but these results were not statistically significant. Use of estimated genotype probabilities for ungenotyped animals proved to be an efficient method to include the phenotypes of ungenotyped animals.
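An identity-by-state relationship matrix of the kind compared here (G) is commonly built with VanRaden's first method, and masked animals can then be predicted by GBLUP from their genomic relationships to phenotyped animals. The sketch below uses simulated genotypes and a guessed variance ratio; it illustrates the G-matrix route only, not the study's GLA or haplotype-based GH matrices.

```python
import numpy as np

def vanraden_g(geno):
    """Identity-by-state genomic relationship matrix (VanRaden method 1)
    from an n x m matrix of 0/1/2 allele counts."""
    p = geno.mean(axis=0) / 2.0            # observed allele frequencies
    Z = geno - 2.0 * p                     # centred genotypes
    return (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))

rng = np.random.default_rng(7)
n, m = 400, 300
freq = rng.uniform(0.1, 0.9, size=m)
geno = rng.binomial(2, freq, size=(n, m)).astype(float)
G = vanraden_g(geno)                       # diagonal averages ~1 under HWE

# GBLUP: mask the last 70 animals' phenotypes and predict their genetic
# values from genomic relationships; lam is a guessed variance ratio
effects = rng.normal(0.0, 0.1, size=m)
g_true = (geno - geno.mean(axis=0)) @ effects
y = g_true + rng.normal(0.0, 0.5 * g_true.std(), size=n)
tr, te = np.arange(330), np.arange(330, 400)
lam = 0.25                                 # sigma_e^2 / sigma_g^2, assumed
g_hat = G[np.ix_(te, tr)] @ np.linalg.solve(
    G[np.ix_(tr, tr)] + lam * np.eye(len(tr)), y[tr])
acc = np.corrcoef(g_hat, g_true[te])[0, 1]
```

Because G is built from all shared marker alleles, it captures the old, distant relationships the abstract discusses; pedigree-based A would contain only the recent ones.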
Development of estrogen receptor beta binding prediction model using large sets of chemicals.
Sakkiah, Sugunadevi; Selvaraj, Chandrabose; Gong, Ping; Zhang, Chaoyang; Tong, Weida; Hong, Huixiao
2017-11-03
We developed an ERβ binding prediction model that, together with our previously developed ERα binding model, facilitates identification of chemicals that specifically bind ERβ or ERα. Decision Forest was used to train the ERβ binding prediction model on a large set of compounds obtained from EADB. Model performance was estimated through 1000 iterations of 5-fold cross-validation. Prediction confidence was analyzed using predictions from the cross-validations. Informative chemical features for ERβ binding were identified by analyzing the frequency with which chemical descriptors were used in the models across the 5-fold cross-validations. 1000 permutations were conducted to assess chance correlation. The average accuracy of the 5-fold cross-validations was 93.14% with a standard deviation of 0.64%. Prediction confidence analysis indicated that the higher the prediction confidence, the more accurate the predictions. Permutation testing revealed that the prediction model is unlikely to have been generated by chance. Eighteen informative descriptors were identified as important to ERβ binding prediction. Application of the prediction model to data from the ToxCast project yielded a very high sensitivity of 90-92%. Our results demonstrated that ERβ binding of chemicals can be accurately predicted using the developed model. Coupled with our previously developed ERα prediction model, this model is expected to facilitate drug development through identification of chemicals that specifically bind ERβ or ERα.
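The validation scheme described above — repeated k-fold cross-validation plus a label-permutation test — can be sketched as follows. A trivial nearest-centroid classifier on synthetic two-class data stands in for Decision Forest and the EADB compounds, and a single permutation is shown where the study ran 1000.

```python
# Sketch: repeated 5-fold cross-validation with a permutation check.
# Nearest-centroid is a stand-in classifier; data are synthetic.
import random

def nc_fit(X, y):
    # one centroid per class
    cents = {}
    for cls in set(y):
        pts = [x for x, c in zip(X, y) if c == cls]
        cents[cls] = [sum(col) / len(pts) for col in zip(*pts)]
    return cents

def nc_predict(cents, x):
    # assign to the closest centroid (squared Euclidean distance)
    return min(cents, key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(x, cents[c])))

def cv_accuracy(X, y, k=5, reps=10, seed=0):
    rng, accs = random.Random(seed), []
    for _ in range(reps):
        idx = list(range(len(X)))
        rng.shuffle(idx)
        for fold in (idx[i::k] for i in range(k)):
            tr = [i for i in idx if i not in fold]
            cents = nc_fit([X[i] for i in tr], [y[i] for i in tr])
            hits = sum(nc_predict(cents, X[i]) == y[i] for i in fold)
            accs.append(hits / len(fold))
    return sum(accs) / len(accs)

# Synthetic data: class 0 near (0,0), class 1 near (3,3)
rng = random.Random(1)
X = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(40)] + \
    [[rng.gauss(3, 1), rng.gauss(3, 1)] for _ in range(40)]
y = [0] * 40 + [1] * 40

acc = cv_accuracy(X, y)
# Permutation check: with shuffled labels, accuracy should fall to ~chance
y_perm = y[:]
random.Random(2).shuffle(y_perm)
acc_perm = cv_accuracy(X, y_perm)
```

A real chance-correlation assessment repeats the shuffle many times and compares the true accuracy against the resulting null distribution.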
2013-01-01
Background A major hindrance to the development of high-yielding biofuel feedstocks is the inability to rapidly assess large populations for fermentable sugar yields. Whilst recent advances have outlined methods for the rapid assessment of biomass saccharification efficiency, none take into account the total biomass or the soluble sugar fraction of the plant. Here we present a holistic high-throughput methodology for assessing sweet Sorghum bicolor feedstocks at 10 days post-anthesis for total fermentable sugar yields, including stalk biomass, soluble sugar concentrations, and cell wall saccharification efficiency. Results A mathematical method for assessing whole S. bicolor stalks using the fourth internode from the base of the plant proved to be an effective high-throughput strategy for assessing stalk biomass, soluble sugar concentrations, and cell wall composition, and allowed calculation of total stalk fermentable sugars. A high-throughput method for measuring soluble sucrose, glucose, and fructose using partial least squares (PLS) modelling of juice Fourier transform infrared (FTIR) spectra was developed. The PLS prediction was shown to be highly accurate, with each sugar attaining a coefficient of determination (R2) of 0.99 and a root mean squared error of prediction (RMSEP) of 11.93, 5.52, and 3.23 mM for sucrose, glucose, and fructose, respectively, which constitutes an error of <4% in each case. The sugar PLS model correlated well with gas chromatography-mass spectrometry (GC-MS) and Brix measures. Similarly, a high-throughput method for predicting enzymatic cell wall digestibility using PLS modelling of FTIR spectra obtained from S. bicolor bagasse was developed. The PLS prediction was shown to be accurate, with an R2 of 0.94 and an RMSEP of 0.64 μg.mgDW-1.h-1.
Conclusions This methodology has been demonstrated as an efficient and effective way to screen large biofuel feedstock populations for biomass, soluble sugar concentrations, and cell wall digestibility simultaneously allowing a total fermentable yield calculation. It unifies and simplifies previous screening methodologies to produce a holistic assessment of biofuel feedstock potential. PMID:24365407
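The accuracy metric used above, the root mean squared error of prediction (RMSEP), can be computed as follows. The sugar concentrations are invented, chosen only to give an error of a few mM relative to the mean reference value, the magnitude reported above.

```python
# Sketch: RMSEP (root mean squared error of prediction), the metric used
# to grade the FTIR/PLS sugar calibration. Toy predicted vs. reference
# sucrose concentrations in mM; not the study's data.
import math

def rmsep(y_ref, y_pred):
    return math.sqrt(sum((r - p) ** 2
                         for r, p in zip(y_ref, y_pred)) / len(y_ref))

y_ref  = [300.0, 450.0, 520.0, 610.0]   # reference (e.g. GC-MS) values
y_pred = [310.0, 442.0, 531.0, 601.0]   # calibration-model predictions
err = rmsep(y_ref, y_pred)
rel = err / (sum(y_ref) / len(y_ref))   # error relative to mean reference
```

Here `err` comes out near 9.6 mM, about 2% of the mean reference concentration, comfortably inside the <4% band the abstract reports.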
Earthquake prediction; new studies yield promising results
Robinson, R.
1974-01-01
On August 3, 1973, a small earthquake (magnitude 2.5) occurred near Blue Mountain Lake in the Adirondack region of northern New York State. This seemingly unimportant event was of great significance, however, because it was predicted. Seismologists at the Lamont-Doherty Geological Observatory of Columbia University accurately foretold the time, place, and magnitude of the event. Their prediction was based on certain pre-earthquake processes that are best explained by a hypothesis known as "dilatancy," a concept that has injected new life and direction into the science of earthquake prediction. Although much more research must be accomplished before we can expect to predict potentially damaging earthquakes with any degree of consistency, results such as this indicate that we are on a promising road.
Ágreda, Teresa; Águeda, Beatriz; Olano, José M; Vicente-Serrano, Sergio M; Fernández-Toirán, Marina
2015-09-01
Wild fungi play a critical role in forest ecosystems, and their collection is an economically important activity. Understanding the fungal response to climate is necessary in order to predict future fungal production in Mediterranean forests under climate change scenarios. We used a 15-year data set to model the relationship between climate and epigeous fungal abundance and productivity, for mycorrhizal and saprotrophic guilds, in a Mediterranean pine forest. The resulting models were used to predict fungal productivity for the 2021-2080 period by means of regional climate change models. Simple models based on early spring temperature and summer-autumn rainfall could provide accurate estimates of fungal abundance and productivity. Models including rainfall and climatic water balance showed similar results and explanatory power for the analyzed 15-year period. However, their predictions for the 2021-2080 period diverged. Rainfall-based models predicted maintenance of fungal yield, whereas water balance-based models predicted a steady decrease in fungal productivity under a global warming scenario. Under Mediterranean conditions, fungi responded to weather conditions in two distinct periods, early spring and late summer-autumn, suggesting a bimodal pattern of growth. Saprotrophic and mycorrhizal fungi showed differences in their climatic controls. Increased atmospheric evaporative demand due to global warming might lead to a drop in fungal yields during the 21st century. © 2015 John Wiley & Sons Ltd.
Cunha, B C N; Belk, K E; Scanga, J A; LeValley, S B; Tatum, J D; Smith, G C
2004-07-01
This study was performed to validate previous equations and to develop and evaluate new regression equations for predicting lamb carcass fabrication yields using outputs from a lamb vision system-hot carcass component (LVS-HCC) and the lamb vision system-chilled carcass LM imaging component (LVS-CCC). Lamb carcasses (n = 149) were selected after slaughter, imaged hot using the LVS-HCC, and chilled for 24 to 48 h at -3 to 1 degrees C. Chilled carcass yield grades (YG) were assigned on-line by USDA graders and by expert USDA grading supervisors with unlimited time and access to the carcasses. Before fabrication, carcasses were ribbed between the 12th and 13th ribs and imaged using the LVS-CCC. Carcasses were fabricated into bone-in subprimal/primal cuts. Yields calculated included 1) saleable meat yield (SMY); 2) subprimal yield (SPY); and 3) fat yield (FY). On-line (whole-number) USDA YG accounted for 59, 58, and 64%; expert (whole-number) USDA YG explained 59, 59, and 65%; and expert (nearest-tenth) USDA YG accounted for 60, 60, and 67% of the observed variation in SMY, SPY, and FY, respectively. The best prediction equation developed in this trial using LVS-HCC output and hot carcass weight as independent variables explained 68, 62, and 74% of the variation in SMY, SPY, and FY, respectively. Addition of output from LVS-CCC improved the predictive accuracy of the equations; the combined-output equations explained 72 and 66% of the variability in SMY and SPY, respectively. Accuracy and repeatability of measurement of LM area made with the LVS-CCC were also assessed, and results suggested that the LVS-CCC provided reasonably accurate (R2 = 0.59) and highly repeatable (repeatability = 0.98) measurements of LM area.
Compared with USDA YG, use of the dual-component lamb vision system to predict cut yields of lamb carcasses improved accuracy and precision, suggesting that this system could have an application as an objective means for pricing carcasses in a value-based marketing system.
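A sketch of the kind of regression used above: fitting a least-squares prediction equation and reporting R2, the fraction of yield variation explained. The single predictor (a hypothetical image-derived fat-depth measure) and all data values are invented for illustration; they are not LVS outputs.

```python
# Sketch: simple least-squares prediction equation for saleable meat
# yield (SMY) and its coefficient of determination R^2. Toy data only.

def fit_ols(x, y):
    # slope/intercept of the least-squares line y = a + b*x
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def r_squared(x, y, a, b):
    # 1 - SS_residual / SS_total
    my = sum(y) / len(y)
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

fat_depth = [4.0, 5.5, 6.0, 7.2, 8.1, 9.0]        # hypothetical, mm
smy       = [54.0, 52.5, 51.8, 50.1, 49.0, 47.9]  # saleable meat yield, %
a, b = fit_ols(fat_depth, smy)
r2 = r_squared(fat_depth, smy, a, b)
```

The study's equations use several predictors (hot carcass weight plus multiple image outputs), but the R2 values it reports are interpreted exactly as here.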
Prediction of microalgae hydrothermal liquefaction products from feedstock biochemical composition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leow, Shijie; Witter, John R.; Vardon, Derek R.
2015-05-11
Hydrothermal liquefaction (HTL) uses water at elevated temperatures and pressures (200–350 °C, 5–20 MPa) to convert biomass into liquid “biocrude” oil. Despite extensive reports on factors influencing microalgae cell composition during cultivation, and separate reports on HTL products linked to cell composition, the field still lacks a quantitative model to predict HTL conversion product yields and qualities from feedstock biochemical composition; the tailoring of microalgae feedstock for downstream conversion is a unique and critical aspect of microalgae biofuels that must be leveraged to optimize the whole process. This study developed predictive relationships for HTL biocrude yield and other conversion product characteristics based on HTL of Nannochloropsis oculata batches harvested with a wide range of compositions (23–59% dw lipids, 58–17% dw proteins, 12–22% dw carbohydrates) and a defatted batch (0% dw lipids, 75% dw proteins, 19% dw carbohydrates). HTL biocrude yield (33–68% dw) and carbon distribution (49–83%) increased in proportion to the fatty acid (FA) content. A component additivity model (predicting biocrude yield from lipid, protein, and carbohydrate content) was more accurate in predicting literature yields for diverse microalgae species than previous additivity models derived from model compounds. FA profiling of the biocrude product showed strong links to the initial feedstock FA profile of the lipid component, demonstrating that HTL acts as a water-based extraction process for FAs; the remaining non-FA structural components could be represented using the defatted batch. These findings were used to introduce a new FA-based model that predicts biocrude oil yields along with other critical parameters, and is capable of adjusting for the wide variations in HTL methodology and microalgae species through the defatted batch.
Lastly, the FA model was linked to an upstream cultivation model (Phototrophic Process Model), providing for the first time an integrated modeling framework to overcome a critical barrier to microalgae-derived HTL biofuels and enable predictive analysis of the overall microalgal-to-biofuel process.
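The component additivity idea above can be sketched as a weighted sum of the biochemical fractions. The coefficients below are illustrative placeholders, not the fitted values from the study; they are chosen only so that lipids contribute most to biocrude yield, as the abstract describes.

```python
# Sketch: a component-additivity estimate of HTL biocrude yield from
# feedstock biochemical composition. Coefficients are placeholders.

def biocrude_yield(lipid, protein, carb, coef=(0.95, 0.40, 0.15)):
    """Yield (fraction dw) = a*lipid + b*protein + c*carb,
    with all inputs as fractions of dry weight."""
    a, b, c = coef
    return a * lipid + b * protein + c * carb

# Lipid-rich vs. defatted feedstock (fractions of dry weight, from the
# composition ranges quoted above)
y_rich  = biocrude_yield(0.59, 0.17, 0.22)   # high-lipid batch
y_defat = biocrude_yield(0.00, 0.75, 0.19)   # defatted batch
```

With these placeholder coefficients the high-lipid batch yields roughly twice the biocrude of the defatted batch, mirroring the 33-68% dw span reported above.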
Predicting elastic properties of β-HMX from first-principles calculations.
Peng, Qing; Rahul; Wang, Guangyu; Liu, Gui-Rong; Grimme, Stefan; De, Suvranu
2015-05-07
We investigate the performance of van der Waals (vdW) functions in predicting the elastic constants of β cyclotetramethylene tetranitramine (HMX) energetic molecular crystals using density functional theory (DFT) calculations. We confirm that the accuracy of the elastic constants is significantly improved using the vdW corrections with environment-dependent C6 together with PBE and revised PBE exchange-correlation functionals. The elastic constants obtained using PBE-D3(0) calculations yield the most accurate mechanical response of β-HMX when compared with experimental stress-strain data. Our results suggest that PBE-D3 calculations are reliable in predicting the elastic constants of this material.
Ramstein, Guillaume P.; Evans, Joseph; Kaeppler, Shawn M.; ...
2016-02-11
Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass and meet the goals of a substantial displacement of petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families’ parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, accounting for linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Furthermore, some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs.
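One plausible reading of the marker-correlation transformation mentioned above, sketched in pure Python on a toy genotype matrix: compute the Pearson correlation matrix R between marker columns, then project each individual's genotype vector through R. The study's exact preprocessing may differ; this only illustrates the kind of operation involved.

```python
# Sketch: marker correlation matrix of a toy genotype matrix (rows =
# individuals, columns = markers, 0/1/2 coding), and a transformation
# of the marker data through it. Illustrative only.

def column_corr(M):
    n, m = len(M), len(M[0])
    cols = [[row[j] for row in M] for j in range(m)]
    means = [sum(c) / n for c in cols]
    sds = [(sum((v - mu) ** 2 for v in c) / n) ** 0.5
           for c, mu in zip(cols, means)]
    return [[sum((cols[i][k] - means[i]) * (cols[j][k] - means[j])
                 for k in range(n)) / (n * sds[i] * sds[j])
             for j in range(m)] for i in range(m)]

def transform(M, R):
    # project each genotype vector through R (matrix product M @ R)
    return [[sum(row[i] * R[i][j] for i in range(len(row)))
             for j in range(len(R))] for row in M]

M = [[0, 1, 2],
     [1, 1, 2],
     [2, 1, 0],
     [1, 0, 1],
     [0, 2, 2]]
R = column_corr(M)
X = transform(M, R)   # correlation-transformed marker data
```

In practice such transformations are applied to tens of thousands of markers with linear-algebra libraries; the point here is only the structure of the operation.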
Modeling evaporation from spent nuclear fuel storage pools: A diffusion approach
NASA Astrophysics Data System (ADS)
Hugo, Bruce Robert
Accurate prediction of evaporative losses from light water reactor nuclear power plant (NPP) spent fuel storage pools (SFPs) is important for activities ranging from sizing of water makeup systems during NPP design to predicting the time available to supply emergency makeup water following severe accidents. Existing correlations for predicting evaporation from water surfaces are optimized only for conditions typical of swimming pools. This new approach, which models evaporation as a diffusion process, yielded an evaporation rate model that fits published high-temperature evaporation data and measurements from two SFPs better than other published evaporation correlations. Insights from treating evaporation as a diffusion process include corrections for the effects of air flow and solutes on the evaporation rate. Accurate modeling of the effect of air flow on evaporation rate is required to explain the observed temperature data from the Fukushima Daiichi Unit 4 SFP during the 2011 loss-of-cooling event; the diffusion model of evaporation provides a significantly better fit to these data than existing evaporation models.
Prediction of apparent extinction for optical transmission through rain
NASA Astrophysics Data System (ADS)
Vasseur, H.; Gibbins, C. J.
1996-12-01
At optical wavelengths, geometrical optics holds that the extinction efficiency of raindrops is equal to two. This approximation yields a wavelength-independent extinction coefficient that, however, cannot accurately predict the rain extinction actually measured on optical transmission links. In practice, in addition to the attenuated direct light, a significant part of the power scattered by the rain particles reaches the receiver. This leads to a reduced apparent extinction that depends on both rain characteristics and link parameters. A simple method is proposed to evaluate this apparent extinction. It accounts for the additional scattered power that enters the receiver by considering the forward-scattering pattern of the raindrops as well as multiple-scattering effects, using the Fraunhofer diffraction and Twersky theory, respectively. The result is a direct analytical formula that enables a quick and accurate estimation of the apparent rain extinction and highlights the influence of the link parameters. Predictions of apparent extinction through rain are found to be in excellent agreement with measurements in the visible and IR regions.
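The geometrical-optics baseline described above (extinction efficiency Q = 2 for every drop) can be turned into a specific attenuation by integrating the drop cross-sections over a drop-size distribution. The sketch below uses textbook Marshall-Palmer parameters (N0 = 8000 m⁻³ mm⁻¹, Λ = 4.1 R⁻⁰·²¹ mm⁻¹), which the paper does not specify; it gives the full extinction, not the reduced apparent extinction the paper derives.

```python
# Sketch: specific attenuation of rain at optical wavelengths under the
# Q = 2 geometrical-optics approximation, integrated over a
# Marshall-Palmer drop-size distribution. Textbook DSD parameters.
import math

def rain_extinction_db_per_km(rain_rate_mm_h, q_ext=2.0):
    n0 = 8000.0                              # m^-3 mm^-1
    lam = 4.1 * rain_rate_mm_h ** -0.21      # mm^-1
    dD, beta = 0.01, 0.0                     # step (mm), coefficient (m^-1)
    for i in range(1000):                    # drop diameters up to 10 mm
        D = (i + 0.5) * dD                   # midpoint diameter, mm
        area_m2 = math.pi / 4 * (D * 1e-3) ** 2   # geometric cross-section
        beta += q_ext * area_m2 * n0 * math.exp(-lam * D) * dD
    return 10 * beta * 1000 / math.log(10)   # convert m^-1 to dB/km

att = rain_extinction_db_per_km(10.0)   # heavy rain, ~7 dB/km
```

Because the distribution's slope Λ decreases with rain rate, the attenuation grows with R, and the wavelength never enters: that wavelength independence is exactly the approximation whose shortfall motivates the apparent-extinction correction above.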
NASA Technical Reports Server (NTRS)
Price, Kevin P.; Nellis, M. Duane
1996-01-01
The purpose of this project was to develop a practical protocol that employs multitemporal remotely sensed imagery, integrated with environmental parameters, to model and monitor agricultural and natural resources in the High Plains Region of the United States. The value of this project would be extended throughout the region via workshops targeted at carefully selected audiences and designed to transfer remote sensing technology and the methods and applications developed. Implementation of such a protocol using remotely sensed satellite imagery is critical for addressing many issues of regional importance, including: (1) Prediction of rural land use/land cover (LULC) categories within a region; (2) Use of rural LULC maps for successive years to monitor change; (3) Crop types derived from LULC maps as important inputs to water consumption models; (4) Early prediction of crop yields; (5) Multi-date maps of crop types to monitor patterns related to crop change; (6) Knowledge of crop types to monitor condition and improve prediction of crop yield; (7) More precise models of crop types and conditions to improve agricultural economic forecasts; (8) Prediction of biomass for estimating vegetation production, soil protection from erosion forces, nonpoint source pollution, wildlife habitat quality and other related factors; (9) Crop type and condition information to more accurately predict production of biogeochemicals such as CO2, CH4, and other greenhouse gases that are inputs to global climate models; (10) Provide information regarding limiting factors (i.e., economic constraints of pumping, fertilizing, etc.) used in conjunction with other factors, such as changes in climate for predicting changes in rural LULC; (11) Accurate prediction of rural LULC used to assess the effectiveness of government programs such as the U.S.
Soil Conservation Service (SCS) Conservation Reserve Program; and (12) Prediction of water demand based on rural LULC that can be related to rates of draw-down of underground water supplies.
Benchmarks and Reliable DFT Results for Spin Gaps of Small Ligand Fe(II) Complexes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Suhwan; Kim, Min-Cheol; Sim, Eunji
2017-05-01
All-electron fixed-node diffusion Monte Carlo provides benchmark spin gaps for four Fe(II) octahedral complexes. Standard quantum chemical methods (semilocal DFT and CCSD(T)) fail badly for the energy difference between their high- and low-spin states. Density-corrected DFT is both significantly more accurate and more reliable, and yields a consistent prediction for the Fe-Porphyrin complex.
Lombardo, Franco; Berellini, Giuliano; Labonte, Laura R; Liang, Guiqing; Kim, Sean
2016-03-01
We present a systematic evaluation of the Wajima superpositioning method to estimate the human intravenous (i.v.) pharmacokinetic (PK) profile, based on a set of 54 marketed drugs with diverse structures and a range of physicochemical properties. We illustrate the use of averages of the "best methods" for the prediction of clearance (CL) and volume of distribution at steady state (VDss) as described in our earlier work (Lombardo F, Waters NJ, Argikar UA, et al. J Clin Pharmacol. 2013;53(2):178-191; Lombardo F, Waters NJ, Argikar UA, et al. J Clin Pharmacol. 2013;53(2):167-177). These methods provided much more accurate predictions of human PK parameters, yielding 88% and 70% of predictions within 2-fold error for VDss and CL, respectively. The prediction of the human i.v. profile using Wajima superpositioning of rat, dog, and monkey time-concentration profiles was tested against the observed human i.v. PK using fold-error statistics. The results showed that 63% of the compounds yielded a geometric mean fold error below 2-fold, and an additional 19% yielded a geometric mean fold error between 2- and 3-fold, leaving only 18% of the compounds with a relatively poor prediction. Our results showed that good superposition was observed in nearly every case, demonstrating the predictive value of the Wajima approach, and that poor prediction of the human i.v. profile was mainly due to a poorly predicted CL value, while VDss prediction had a minor impact on the accuracy of the human i.v. profile prediction. Copyright © 2016. Published by Elsevier Inc.
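The fold-error statistics above — geometric mean fold error and the fraction of compounds within 2-fold — can be computed as follows. The predicted/observed pairs are toy values, not the 54-drug set.

```python
# Sketch: fold-error statistics for PK predictions. A fold error is the
# ratio of predicted to observed taken >= 1; the geometric mean fold
# error (GMFE) averages these on a log scale. Toy data only.
import math

def fold_error(pred, obs):
    return max(pred / obs, obs / pred)   # >= 1 by construction

def gmfe(pairs):
    logs = [math.log10(fold_error(p, o)) for p, o in pairs]
    return 10 ** (sum(logs) / len(logs))

# (predicted, observed) clearance values for four hypothetical compounds
pairs = [(1.2, 1.0), (4.0, 2.5), (0.3, 1.1), (7.0, 6.5)]
per_compound = [fold_error(p, o) for p, o in pairs]
within_2fold = sum(fe < 2 for fe in per_compound) / len(pairs)
overall = gmfe(pairs)
```

Here three of four compounds fall within 2-fold (75%), the same style of summary as the 63% / 19% / 18% breakdown reported above.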
Gradient Augmented Level Set Method for Two Phase Flow Simulations with Phase Change
NASA Astrophysics Data System (ADS)
Anumolu, C. R. Lakshman; Trujillo, Mario F.
2016-11-01
A sharp interface capturing approach is presented for two-phase flow simulations with phase change. The Gradient Augmented Levelset method is coupled with the two-phase momentum and energy equations to advect the liquid-gas interface and predict heat transfer with phase change. The Ghost Fluid Method (GFM) is adopted for the velocity field to discretize the advection and diffusion terms in the interfacial region. Furthermore, the GFM is employed to treat the discontinuities in the stress tensor, velocity, and temperature gradient, yielding an accurate treatment of the jump conditions. Thermal convection and diffusion terms are approximated by explicitly identifying the interface location, resulting in a sharp treatment for the energy solution. This sharp treatment is extended to estimate the interfacial mass transfer rate. Within each computational cell, a d-cubic Hermite interpolating polynomial is employed to describe the interface location, which is locally fourth-order accurate. This level of subgrid description provides an accurate methodology for treating various interfacial processes with a high degree of sharpness. The ability to predict the interface and temperature evolutions accurately is illustrated by comparing numerical results with existing 1D to 3D analytical solutions.
(18)F-FDG uptake predicts diagnostic yield of transbronchial biopsy in peripheral lung cancer.
Umeda, Yukihiro; Demura, Yoshiki; Anzai, Masaki; Matsuoka, Hiroki; Araya, Tomoyuki; Nishitsuji, Masaru; Nishi, Koichi; Tsuchida, Tatsuro; Sumida, Yasuyuki; Morikawa, Miwa; Ameshima, Shingo; Ishizaki, Takeshi; Kasahara, Kazuo; Ishizuka, Tamotsu
2014-07-01
Recent advances in endobronchial ultrasonography with a guide sheath (EBUS-GS) have enabled better visualization of distal airways, while virtual bronchoscopic navigation (VBN) has been shown useful as a guide to navigate the bronchoscope. However, indications for utilizing VBN and EBUS-GS are not always clear. To clarify indications for a bronchoscopic examination using VBN and EBUS-GS, we evaluated factors that predict the diagnostic yield of a transbronchial biopsy (TBB) procedure for peripheral lung cancer (PLC) lesions. We retrospectively reviewed the charts of 194 patients with 201 PLC lesions (≤3cm mean diameter), and analyzed the association of the diagnostic yield of TBB with [(18)F]-fluoro-2-deoxy-d-glucose ((18)F-FDG) positron emission tomography and chest computed tomography (CT) findings. The diagnostic yield of TBB using VBN and EBUS-GS was 66.7%. High maximum standardized uptake value (SUVmax), positive bronchus sign, and ground-glass opacity component shown on CT were all significant predictors of diagnostic yield, while multivariate analysis showed only high (18)F-FDG uptake (SUVmax ≥2.8) and positive bronchus sign as significant predictors. Diagnostic yield was higher for PLC lesions with high (18)F-FDG uptake (SUVmax ≥2.8) and positive bronchus sign (84.6%) than for those with SUVmax <2.8 and negative bronchus sign (33.3%). High (18)F-FDG uptake was also correlated with tumor invasiveness. High (18)F-FDG uptake predicted the diagnostic yield of TBB using VBN and EBUS-GS for PLC lesions. (18)F-FDG uptake and the bronchus sign may indicate the appropriate application of bronchoscopy with these modalities for diagnosing PLC. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Chandran, S; Parker, F; Lontos, S; Vaughan, R; Efthymiou, M
2015-12-01
Polyps identified at colonoscopy are predominantly diminutive (<5 mm) with a small risk (<1%) of high-grade dysplasia or carcinoma; however, the cost of histological assessment is substantial. The aim of this study was to determine whether prediction of colonoscopy surveillance intervals based on real-time endoscopic assessment of polyp histology is accurate and cost effective. A prospective cohort study was conducted across a tertiary care and private community hospital. Ninety-four patients underwent colonoscopy and polypectomy of diminutive (≤5 mm) polyps from October 2012 to July 2013, yielding a total of 159 polyps. Polyps were examined and classified according to the Sano-Emura classification system. The endoscopic assessment (optical diagnosis) of polyp histology was used to predict appropriate colonoscopy surveillance intervals. The main outcome measure was the accuracy of optical diagnosis of diminutive colonic polyps against the gold standard of histological assessment. Optical diagnosis was correct in 105/108 (97.2%) adenomas. This yielded a sensitivity, specificity and positive and negative predictive values (with 95% CI) of 97.2% (92.1-99.4%), 78.4% (64.7-88.7%), 90.5% (83.7-95.2%) and 93% (80.9-98.5%) respectively. Ninety-two (98%) patients were correctly triaged to their repeat surveillance colonoscopy. Based on these findings, a cut and discard approach would have resulted in a saving of $319.77 per patient. Endoscopists within a tertiary care setting can accurately predict diminutive polyp histology and confer an appropriate surveillance interval with an associated financial benefit to the healthcare system. However, limitations to its application in the community setting exist, which may improve with further training and high-definition colonoscopes. © 2015 Royal Australasian College of Physicians.
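The reported metrics can be reproduced from the confusion counts implied by the text: 105 of 108 adenomas were called correctly, and with 159 polyps in total there were 51 non-adenomas. The 40/11 split of the non-adenomas is inferred here from the reported specificity rather than stated explicitly in the abstract.

```python
# Sketch: sensitivity, specificity, PPV and NPV from the confusion
# counts implied above. The 40/11 non-adenoma split is inferred from
# the reported 78.4% specificity, not stated directly in the text.
tp, fn = 105, 3     # adenomas: correctly called / missed
tn, fp = 40, 11     # non-adenomas: correctly called / over-called

sensitivity = tp / (tp + fn)   # 105/108 -> 97.2%
specificity = tn / (tn + fp)   # 40/51  -> 78.4%
ppv = tp / (tp + fp)           # 105/116 -> 90.5%
npv = tn / (tn + fn)           # 40/43  -> 93.0%
```

All four values match the point estimates quoted in the abstract, which is a useful internal-consistency check when reading diagnostic-accuracy studies.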
Ran, Tao; Liu, Yong; Li, Hengzhi; Tang, Shaoxun; He, Zhixiong; Munteanu, Cristian R; González-Díaz, Humberto; Tan, Zhiliang; Zhou, Chuanshe
2016-07-27
The management of ruminant growth yield has economic importance. The current work presents a study of the spatiotemporal dynamic expression of Ghrelin and GHR at mRNA levels throughout the gastrointestinal tract (GIT) of kid goats under housing and grazing systems. The experiments show that the feeding system and age affected the expression of either Ghrelin or GHR with different mechanisms. Furthermore, the experimental data are used to build new Machine Learning models based on the Perturbation Theory, which can predict the effects of perturbations of Ghrelin and GHR mRNA expression on the growth yield. The models consider eight longitudinal GIT segments (rumen, abomasum, duodenum, jejunum, ileum, cecum, colon and rectum), seven time points (0, 7, 14, 28, 42, 56 and 70 d) and two feeding systems (Supplemental and Grazing feeding) as perturbations from the expected values of the growth yield. The best regression model was obtained using Random Forest, with the coefficient of determination R(2) of 0.781 for the test subset. The current results indicate that the non-linear regression model can accurately predict the growth yield and the key nodes during gastrointestinal development, which is helpful to optimize the feeding management strategies in ruminant production system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nozirov, Farhod; Stachów, Michał; Kupka, Teobald
2014-04-14
A theoretical prediction of nuclear magnetic shieldings and indirect spin-spin coupling constants in 1,1-, cis- and trans-1,2-difluoroethylenes is reported. The results obtained using density functional theory (DFT) combined with large basis sets and gauge-independent atomic orbital calculations were critically compared with experiment and with conventional, higher-level correlated electronic structure methods. Accurate structural, vibrational, and NMR parameters of difluoroethylenes were obtained using several density functionals combined with dedicated basis sets. B3LYP/6-311++G(3df,2pd) optimized structures of difluoroethylenes closely reproduced experimental geometries and earlier reported benchmark coupled cluster results, while BLYP/6-311++G(3df,2pd) produced accurate harmonic vibrational frequencies. The most accurate vibrations were obtained using B3LYP/6-311++G(3df,2pd) with correction for anharmonicity. The Becke half-and-half (BHandH) density functional predicted more accurate ¹⁹F isotropic shieldings, and van Voorhis and Scuseria's τ-dependent gradient-corrected correlation functional yielded better carbon shieldings than B3LYP. A surprisingly good performance of the Hartree-Fock (HF) method in predicting nuclear shieldings in these molecules was observed. Inclusion of the zero-point vibrational correction markedly improved agreement with experiment for nuclear shieldings calculated by the HF, MP2, CCSD, and CCSD(T) methods but worsened the DFT results. A threefold improvement in accuracy when predicting ²J(FF) in 1,1-difluoroethylene was observed for the BHandH density functional compared to B3LYP (deviations from experiment of −46 vs. −115 Hz).
NASA Astrophysics Data System (ADS)
Yunardi, Y.; Munawar, Edi; Rinaldi, Wahyu; Razali, Asbar; Iskandar, Elwina; Fairweather, M.
2018-02-01
Soot prediction in combustion systems has become a subject of attention, as many factors influence its accuracy. An accurate temperature prediction will likely yield better soot predictions, since the inception, growth and destruction of soot are affected by temperature. This paper reports a study of the influence of turbulence closure and surface growth models on the prediction of soot levels in turbulent flames. The results demonstrate a substantial distinction between the temperature predictions derived using the k-ɛ and the Reynolds stress models for the two ethylene flames studied here. Amongst the four types of surface growth rate model investigated, the assumption that the soot surface growth rate is proportional to the particle number density but independent of the surface area of the soot particles, f(A_s) = ρN_s, yields the closest agreement with the radial data. Without any adjustment to the constants in the surface growth term, the other approaches, in which the surface growth is directly proportional to the surface area or to the square root of the surface area, f(A_s) = A_s and f(A_s) = √A_s, result in an under-prediction of the soot volume fraction. These results suggest that predictions of soot volume fraction are sensitive to the modelling of surface growth.
Climate driven crop planting date in the ACME Land Model (ALM): Impacts on productivity and yield
NASA Astrophysics Data System (ADS)
Drewniak, B.
2017-12-01
Climate is one of the key drivers of crop suitability and productivity in a region. The influence of climate and weather on the growing season determines the amount of time crops spend in each growth phase, which in turn impacts productivity and, more importantly, yields. Planting date can have a strong influence on yields, with earlier planting generally resulting in higher yields, a sensitivity that is also present in some crop models. Furthermore, planting dates are already changing and may continue to change, especially if longer growing seasons caused by future climate change drive early (or late) planting decisions. Crop models need an accurate method to predict planting date to allow them to: 1) capture changes in crop management in adaptation to climate change, 2) accurately model the timing of crop phenology, and 3) improve simulated crop influences on carbon, nutrient, energy, and water cycles. Previous studies have used climate as a predictor of planting date. Climate as a planting date predictor has advantages over fixed planting dates; for example, crop expansion and other changes in land use (e.g., due to changing temperature conditions) can be accommodated without additional model inputs. As such, a new methodology to implement a predictive planting date based on climate inputs is added to the Accelerated Climate Model for Energy (ACME) Land Model (ALM). The model considers two main sources of climate data important for planting: precipitation and temperature. This method expands the current temperature-threshold planting trigger and improves the estimated planting date in ALM. Furthermore, the precipitation metric for planting, which synchronizes the crop growing season with the wettest months, allows tropical crops to be introduced to the model. This presentation will demonstrate how the improved model enhances the ability of ALM to capture planting date compared with observations. More importantly, the impact of changing the planting date and introducing tropical crops will be explored. Those impacts include discussions on productivity, yield, and influences on carbon and energy fluxes.
Liu, Xiaojun; Ferguson, Richard B.; Zheng, Hengbiao; Cao, Qiang; Tian, Yongchao; Cao, Weixing; Zhu, Yan
2017-01-01
The successful development of an optimal canopy vegetation index dynamic model for obtaining higher yield can offer a technical approach for real-time and nondestructive diagnosis of rice (Oryza sativa L.) growth and nitrogen (N) nutrition status. In this study, multiple rice cultivars and N treatments of experimental plots were carried out to obtain normalized difference vegetation index (NDVI), leaf area index (LAI), above-ground dry matter (DM), and grain yield (GY) data. The quantitative relationships between NDVI and these growth indices (e.g., LAI, DM and GY) were analyzed, showing positive correlations. Using the normalized modeling method, an appropriate NDVI simulation model of rice was established based on the normalized NDVI (RNDVI) and relative accumulative growing degree days (RAGDD). The NDVI dynamic model for high-yield production in rice can be expressed by a double logistic model: RNDVI = (1 + e^(−15.2829×(RAGDD_i−0.1944)))^(−1) − (1 + e^(−11.6517×(RAGDD_i−1.0267)))^(−1) (R2 = 0.8577**), which can be used to accurately predict canopy NDVI dynamic changes during the entire growth period. Considering variation among rice cultivars, we constructed two relative NDVI (RNDVI) dynamic models for Japonica and Indica rice types, with R2 reaching 0.8764** and 0.8874**, respectively. Furthermore, independent experimental data were used to validate the RNDVI dynamic models. The results showed that during the entire growth period, the accuracy (k), precision (R2), and standard deviation of the RNDVI dynamic models for the Japonica and Indica cultivars were 0.9991, 1.0170; 0.9084**, 0.8030**; and 0.0232, 0.0170, respectively. These results indicated that RNDVI dynamic models could accurately reflect crop growth and predict dynamic changes in high-yield crop populations, providing a rapid approach for monitoring rice growth status. PMID:28338637
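The double logistic model above can be evaluated directly. A minimal sketch, using the coefficients reported in the abstract (RAGDD is assumed here to run over roughly 0 to 1.2 across the season):

```python
import math

def rndvi(ragdd: float) -> float:
    """Relative NDVI as a function of relative accumulated growing degree
    days, using the double logistic coefficients from the abstract."""
    rise = 1.0 / (1.0 + math.exp(-15.2829 * (ragdd - 0.1944)))  # green-up
    fall = 1.0 / (1.0 + math.exp(-11.6517 * (ragdd - 1.0267)))  # senescence
    return rise - fall

# RNDVI rises after green-up, plateaus mid-season, declines toward maturity
for x in (0.0, 0.3, 0.6, 0.9, 1.2):
    print(f"RAGDD={x:.1f}  RNDVI={rndvi(x):.3f}")
```

The difference of two logistics gives the characteristic dome shape of a crop vegetation-index trajectory: the first term controls green-up timing and steepness, the second the onset and rate of senescence.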
New higher-order Godunov code for modelling performance of two-stage light gas guns
NASA Technical Reports Server (NTRS)
Bogdanoff, D. W.; Miller, R. J.
1995-01-01
A new quasi-one-dimensional Godunov code for modeling two-stage light gas guns is described. The code is third-order accurate in space and second-order accurate in time. A very accurate Riemann solver is used. Friction and heat transfer to the tube wall for gases and dense media are modeled and a simple nonequilibrium turbulence model is used for gas flows. The code also models gunpowder burn in the first-stage breech. Realistic equations of state (EOS) are used for all media. The code was validated against exact solutions of Riemann's shock-tube problem, impact of dense media slabs at velocities up to 20 km/sec, flow through a supersonic convergent-divergent nozzle and burning of gunpowder in a closed bomb. Excellent validation results were obtained. The code was then used to predict the performance of two light gas guns (1.5 in. and 0.28 in.) in service at the Ames Research Center. The code predictions were compared with measured pressure histories in the powder chamber and pump tube and with measured piston and projectile velocities. Very good agreement between computational fluid dynamics (CFD) predictions and measurements was obtained. Actual powder-burn rates in the gun were found to be considerably higher (60-90 percent) than predicted by the manufacturer and the behavior of the piston upon yielding appears to differ greatly from that suggested by low-strain rate tests.
NASA Astrophysics Data System (ADS)
Mashayekhi, Somayeh; Miles, Paul; Hussaini, M. Yousuff; Oates, William S.
2018-02-01
In this paper, fractional and non-fractional viscoelastic models for elastomeric materials are derived and analyzed in comparison to experimental results. The viscoelastic models are derived by expanding thermodynamic balance equations for both fractal and non-fractal media. The order of the fractional time derivative is shown to strongly affect the accuracy of the viscoelastic constitutive predictions. Model validation uses experimental data describing viscoelasticity of the dielectric elastomer Very High Bond (VHB) 4910. Since these materials are known for their broad applications in smart structures, it is important to characterize and accurately predict their behavior across a large range of time scales. Whereas integer order viscoelastic models can yield reasonable agreement with data, the model parameters often lack robustness in prediction at different deformation rates. Fractional order models of viscoelasticity, by contrast, provide a framework to more accurately quantify complex rate-dependent behavior. Prior research that has considered fractional order viscoelasticity lacks experimental validation and contains limited links between viscoelastic theory and fractional order derivatives. To address these issues, we use fractional order operators to experimentally validate fractional and non-fractional viscoelastic models in elastomeric solids using Bayesian uncertainty quantification. The fractional order model is found to be advantageous, as its predictions are significantly more accurate than those of integer order viscoelastic models for deformation rates spanning four orders of magnitude.
Design, Fabrication and Test of Composite Curved Frames for Helicopter Fuselage Structure
NASA Technical Reports Server (NTRS)
Lowry, D. W.; Krebs, N. E.; Dobyns, A. L.
1984-01-01
Aspects of curved beam effects and their importance in designing composite frame structures are discussed. The curved beam effect induces radial flange loadings, which in turn cause flange curling. This curling increases the axial flange stresses and induces transverse bending. These effects are more important in composite structures due to their general inability to redistribute stresses by general yielding, as occurs in metal structures. A detailed finite element analysis was conducted and used in the design of composite curved frame specimens. Five specimens were statically tested, and measured strains were compared with predictions. The curved frame effects must be accurately accounted for to avoid premature fracture; finite element methods can accurately predict most of the stresses, and no elastic relief from curved beam effects occurred in the composite frames tested. Finite element studies are presented for comparative curved beam effects on composite and metal frames.
Optimal Design of Experiments by Combining Coarse and Fine Measurements
NASA Astrophysics Data System (ADS)
Lee, Alpha A.; Brenner, Michael P.; Colwell, Lucy J.
2017-11-01
In many contexts, it is extremely costly to perform enough high-quality experimental measurements to accurately parametrize a predictive quantitative model. However, it is often much easier to carry out large numbers of experiments that indicate whether each sample is above or below a given threshold. Can many such categorical or "coarse" measurements be combined with a much smaller number of high-resolution or "fine" measurements to yield accurate models? Here, we demonstrate an intuitive strategy, inspired by statistical physics, wherein the coarse measurements are used to identify the salient features of the data, while the fine measurements determine the relative importance of these features. A linear model is inferred from the fine measurements, augmented by a quadratic term that captures the correlation structure of the coarse data. We illustrate our strategy by considering the problems of predicting the antimalarial potency and aqueous solubility of small organic molecules from their 2D molecular structure.
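A toy numerical sketch of the general idea follows. It is not the authors' exact estimator (which augments a linear model with a quadratic term built from the coarse correlation structure); here the coarse above/below-threshold labels identify only a single salient direction in feature space, and a handful of fine measurements set its scale:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_coarse, n_fine = 20, 2000, 30
w_true = rng.normal(size=d)                     # unknown ground-truth weights

X_coarse = rng.normal(size=(n_coarse, d))
above = X_coarse @ w_true > 0                   # many cheap binary measurements
X_fine = rng.normal(size=(n_fine, d))
y_fine = X_fine @ w_true + rng.normal(0, 0.1, n_fine)  # few precise measurements

# Coarse labels identify the salient direction (difference of class means)
mu_diff = X_coarse[above].mean(axis=0) - X_coarse[~above].mean(axis=0)
feature = mu_diff / np.linalg.norm(mu_diff)

# Fine measurements fix the scale along that direction (1-D least squares)
z = X_fine @ feature
coef = (z @ y_fine) / (z @ z)
pred = coef * z
print(f"correlation with fine labels: {np.corrcoef(pred, y_fine)[0, 1]:.3f}")
```

With only 30 fine measurements in 20 dimensions, a direct linear fit would be poorly conditioned; letting the 2000 cheap labels pick the direction first is what makes the small fine set sufficient.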
Ab initio interatomic potentials and the thermodynamic properties of fluids
NASA Astrophysics Data System (ADS)
Vlasiuk, Maryna; Sadus, Richard J.
2017-07-01
Monte Carlo simulations with accurate ab initio interatomic potentials are used to investigate the key thermodynamic properties of argon and krypton in both vapor and liquid phases. Data are reported for the isochoric and isobaric heat capacities, the Joule-Thomson coefficient, and the speed of sound calculated using various two-body interatomic potentials and different combinations of two-body plus three-body terms. The results are compared to either experimental or reference data at state points between the triple and critical points. Using accurate two-body ab initio potentials, combined with three-body interaction terms such as the Axilrod-Teller-Muto and Marcelli-Wang-Sadus potentials, yields systematic improvements to the accuracy of thermodynamic predictions. The effect of three-body interactions is to lower the isochoric and isobaric heat capacities and increase both the Joule-Thomson coefficient and speed of sound. The Marcelli-Wang-Sadus potential is a computationally inexpensive way to utilize accurate two-body ab initio potentials for the prediction of thermodynamic properties. In particular, it provides a very effective way of extending two-body ab initio potentials to liquid phase properties.
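For reference, the Axilrod-Teller-Muto triple-dipole term mentioned above has a simple closed form. A minimal sketch, with the dispersion coefficient ν left as a placeholder (a real simulation would use the species-specific value in the appropriate units):

```python
import numpy as np

def atm_energy(r1, r2, r3, nu=1.0):
    """Axilrod-Teller-Muto triple-dipole energy for three atoms at positions
    r1, r2, r3. nu is the triple-dipole dispersion coefficient (placeholder)."""
    r12 = np.linalg.norm(r2 - r1)
    r13 = np.linalg.norm(r3 - r1)
    r23 = np.linalg.norm(r3 - r2)
    # Cosines of the triangle's interior angles via the law of cosines
    c1 = (r12**2 + r13**2 - r23**2) / (2 * r12 * r13)
    c2 = (r12**2 + r23**2 - r13**2) / (2 * r12 * r23)
    c3 = (r13**2 + r23**2 - r12**2) / (2 * r13 * r23)
    return nu * (1 + 3 * c1 * c2 * c3) / (r12 * r13 * r23) ** 3
```

The sign of the term depends on geometry: for an equilateral arrangement it is repulsive (raising heat capacities' denominator energies as the abstract's trends suggest), while for near-collinear configurations the product of cosines turns negative and the interaction is attractive.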
Back in the saddle: large-deviation statistics of the cosmic log-density field
NASA Astrophysics Data System (ADS)
Uhlemann, C.; Codis, S.; Pichon, C.; Bernardeau, F.; Reimberg, P.
2016-08-01
We present a first principle approach to obtain analytical predictions for spherically averaged cosmic densities in the mildly non-linear regime that go well beyond what is usually achieved by standard perturbation theory. A large deviation principle allows us to compute the leading order cumulants of average densities in concentric cells. In this symmetry, the spherical collapse model leads to cumulant generating functions that are robust for finite variances and free of critical points when logarithmic density transformations are implemented. They yield in turn accurate density probability distribution functions (PDFs) from a straightforward saddle-point approximation valid for all density values. Based on this easy-to-implement modification, explicit analytic formulas for the evaluation of the one- and two-cell PDF are provided. The theoretical predictions obtained for the PDFs are accurate to a few per cent compared to the numerical integration, regardless of the density under consideration and in excellent agreement with N-body simulations for a wide range of densities. This formalism should prove valuable for accurately probing the quasi-linear scales of low-redshift surveys for arbitrary primordial power spectra.
Dynamic non-equilibrium wall-modeling for large eddy simulation at high Reynolds numbers
NASA Astrophysics Data System (ADS)
Kawai, Soshi; Larsson, Johan
2013-01-01
A dynamic non-equilibrium wall-model for large-eddy simulation at arbitrarily high Reynolds numbers is proposed and validated on equilibrium boundary layers and a non-equilibrium shock/boundary-layer interaction problem. The proposed method builds on the prior non-equilibrium wall-models of Balaras et al. [AIAA J. 34, 1111-1119 (1996)], 10.2514/3.13200 and Wang and Moin [Phys. Fluids 14, 2043-2051 (2002)], 10.1063/1.1476668: the failure of these wall-models to accurately predict the skin friction in equilibrium boundary layers is shown and analyzed, and an improved wall-model that solves this issue is proposed. The improvement stems directly from reasoning about how the turbulence length scale changes with wall distance in the inertial sublayer, the grid resolution, and the resolution-characteristics of numerical methods. The proposed model yields accurate resolved turbulence, both in terms of structure and statistics for both the equilibrium and non-equilibrium flows without the use of ad hoc corrections. Crucially, the model accurately predicts the skin friction, something that existing non-equilibrium wall-models fail to do robustly.
Michael Gavazzi; Ge Sun; Steve McNulty; E.A Treasure; M.G Wightman
2016-01-01
The area of planted pine in the southern U.S. is predicted to increase by over 70% by 2060, potentially altering the natural hydrologic cycle and water balance at multiple scales. To better account for potential shifts in water yield, land managers and resource planners must accurately quantify water budgets from the stand to the regional scale. The amount of...
A Swarm Optimization approach for clinical knowledge mining.
Christopher, J Jabez; Nehemiah, H Khanna; Kannan, A
2015-10-01
Rule-based classification is a typical data mining task that is used in several medical diagnosis and decision support systems. The rules stored in the rule base have an impact on classification efficiency. Rule sets that are extracted with data mining tools and techniques are optimized using heuristic or meta-heuristic approaches in order to improve the quality of the rule base. In this work, a meta-heuristic approach called Wind-driven Swarm Optimization (WSO) is used. The uniqueness of this work lies in the biological inspiration that underlies the algorithm. WSO uses Jval, a new metric, to evaluate the efficiency of a rule-based classifier. Rules are extracted from decision trees. WSO is used to obtain different permutations and combinations of rules, whereby the optimal ruleset that satisfies the requirements of the developer is used for predicting the test data. The performance of various extensions of decision trees, namely RIPPER, PART, FURIA and Decision Tables, is analyzed. The efficiency of WSO is also compared with traditional Particle Swarm Optimization. Experiments were carried out with six benchmark medical datasets. The traditional C4.5 algorithm yields 62.89% accuracy with 43 rules for the liver disorders dataset, whereas WSO yields 64.60% with 19 rules. For the heart disease dataset, C4.5 is 68.64% accurate with 98 rules, whereas WSO is 77.8% accurate with 34 rules. The normalized standard deviations for the accuracy of PSO and WSO are 0.5921 and 0.5846, respectively. WSO provides accurate and concise rulesets. PSO yields results similar to those of WSO, but the novelty of WSO lies in its biological motivation and its customization for rule-base optimization. The trade-off between prediction accuracy and the size of the rule base is optimized during the design and development of a rule-based clinical decision support system. The efficiency of a decision support system relies on the content of the rule base and classification accuracy.
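The core trade-off being optimized, classification accuracy against rule-base size, can be sketched without the WSO machinery. Here a plain grid over tree depth stands in for the swarm search, and `jval` is a hypothetical stand-in for the paper's Jval metric, whose exact definition is not given in the abstract:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def jval(accuracy, n_rules, w=0.9):
    # Hypothetical composite objective: reward accuracy, penalize rule count
    return w * accuracy + (1 - w) * (1 - n_rules / 100)

# Each leaf of a decision tree corresponds to one classification rule
trees = [DecisionTreeClassifier(max_depth=d, random_state=0).fit(X_tr, y_tr)
         for d in range(1, 10)]
best = max(trees, key=lambda t: jval(t.score(X_te, y_te), t.get_n_leaves()))
print(f"depth={best.get_depth()} rules={best.get_n_leaves()} "
      f"accuracy={best.score(X_te, y_te):.3f}")
```

The penalty term steers the search toward compact rulesets, mirroring the abstract's result of fewer rules at comparable or better accuracy.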
Mahmood, Khalid; Jung, Chol-Hee; Philip, Gayle; Georgeson, Peter; Chung, Jessica; Pope, Bernard J; Park, Daniel J
2017-05-16
Genetic variant effect prediction algorithms are used extensively in clinical genomics and research to determine the likely consequences of amino acid substitutions on protein function. It is vital that we better understand their accuracies and limitations, because published performance metrics are confounded by serious problems of circularity and error propagation. Here, we derive three independent, functionally determined human mutation datasets, UniFun, BRCA1-DMS and TP53-TA, and employ them, alongside previously described datasets, to assess the pre-eminent variant effect prediction tools. Apparent accuracies of variant effect prediction tools were influenced significantly by the benchmarking dataset. Benchmarking with the assay-determined datasets UniFun and BRCA1-DMS yielded areas under the receiver operating characteristic curves in the modest ranges of 0.52 to 0.63 and 0.54 to 0.75, respectively, considerably lower than observed for other, potentially more conflicted datasets. These results raise concerns about how such algorithms should be employed, particularly in a clinical setting. Contemporary variant effect prediction tools are unlikely to be as accurate at predicting general functional impacts on proteins as previously reported. Use of functional assay-based datasets that avoid prior dependencies promises to be valuable for the ongoing development and accurate benchmarking of such tools.
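The benchmarking procedure at the heart of this comparison is a ROC AUC computed on assay-determined labels. A toy illustration with synthetic scores and labels (both made up; a weak predictor lands in the "modest" AUC range the abstract reports):

```python
# Toy ROC AUC benchmark of a variant effect predictor against an
# assay-determined truth set; labels and scores are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
labels = rng.integers(0, 2, 500)                 # 1 = damaging per functional assay
scores = labels * 0.5 + rng.normal(0, 1, 500)    # hypothetical tool output
auc = roc_auc_score(labels, scores)
print(f"AUC = {auc:.2f}")
```

An AUC near 0.5 means the tool barely outperforms chance on that truth set, which is why the choice of benchmarking dataset dominates the apparent accuracy.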
Testing and Analysis of NEXT Ion Engine Discharge Cathode Assembly Wear
NASA Technical Reports Server (NTRS)
Domonkos, Matthew T.; Foster, John E.; Soulas, George C.; Nakles, Michael
2003-01-01
Experimental and analytical investigations were conducted to predict the wear of the discharge cathode keeper in the NASA Evolutionary Xenon Thruster. The ion current to the keeper was found to be highly dependent upon the beam current, and the average beam current density was nearly identical to that of the NSTAR thruster at comparable beam current. The ion current distribution was highly peaked toward the keeper orifice. A deterministic wear assessment predicted keeper orifice erosion to the same diameter as the cathode tube after processing 375 kg of xenon. A rough estimate of discharge cathode assembly life limit due to sputtering indicated that the current design exceeds the qualification goal of 405 kg. Probabilistic wear analysis showed that the plasma potential and the sputter yield contributed most to the uncertainty in the wear assessment. It was recommended that fundamental experimental and modeling efforts focus on accurately describing the plasma potential and the sputtering yield.
Diffusion kinetics of the glucose/glucose oxidase system in swift heavy ion track-based biosensors
NASA Astrophysics Data System (ADS)
Fink, Dietmar; Vacik, Jiri; Hnatowicz, V.; Muñoz Hernandez, G.; Garcia Arrelano, H.; Alfonta, Lital; Kiv, Arik
2017-05-01
For an understanding of the diffusion kinetics and their optimization in swift heavy ion track-based biosensors, a diffusion simulation was recently performed. This simulation aimed at yielding the degree of enrichment of the enzymatic reaction products in the highly confined space of the etched ion tracks. A family of curves was obtained for the description of such sensors that depend only on the ratio of the diffusion coefficient of the products to that of the analyte within the tracks. As neither of these two diffusion coefficients was hitherto accurately known, the present work was undertaken. The results of this paper allow one to quantify the previous simulation and hence yield realistic predictions for glucose-based biosensors. In the course of this work, the influence of the etched track radius on the diffusion coefficients was also measured and compared with the earlier prediction.
Soler, Miguel A; de Marco, Ario; Fortuna, Sara
2016-10-10
Nanobodies (VHHs) have proved to be valuable substitutes for conventional antibodies in molecular recognition. Their small size represents a precious advantage for rational mutagenesis based on modelling. Here we address the problem of predicting how Camelidae nanobody sequences can tolerate mutations by developing a simulation protocol based on all-atom molecular dynamics and whole-molecule docking. The method was tested on two sets of nanobodies characterized experimentally for their biophysical features. One set contained point mutations introduced to humanize a wild type sequence; in the second, the CDRs were swapped between single-domain frameworks with Camelidae and human hallmarks. The method resulted in accurate scoring approaches for predicting experimental yields and enabled identification of the structural modifications induced by mutations. This work provides a promising tool for the in silico development of single-domain antibodies and opens the opportunity to customize single functional domains of larger macromolecules.
van Engelen, S; Bovenhuis, H; Dijkstra, J; van Arendonk, J A M; Visker, M H P W
2015-11-01
Dairy cows produce enteric methane, a greenhouse gas with 25 times the global warming potential of CO2. Breeding could make a permanent, cumulative, and long-term contribution to methane reduction. Due to a lack of the accurate, repeatable, individual methane measurements needed for breeding, indicators of methane production based on milk fatty acids have been proposed. The aim of the present study was to quantify the genetic variation in predicted methane yields. The milk fat composition of 1,905 first-lactation Dutch Holstein-Friesian cows was used to investigate 3 different predicted methane yields (g/kg of DMI): Methane1, Methane2, and Methane3. Methane1 was based on the milk fat proportions of C17:0 anteiso, C18:1 trans-10+11, C18:1 cis-11, and C18:1 cis-13 (R2=0.73). Methane2 was based on C4:0, C18:0, C18:1 trans-10+11, and C18:1 cis-11 (R2=0.70). Methane3 was based on C4:0, C6:0, and C18:1 trans-10+11 (R2=0.63). Predicted methane yields were demonstrated to be heritable traits, with heritabilities between 0.12 and 0.44. Breeding can thus be used to decrease methane production as predicted from milk fatty acids.
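Each predictor above is a linear combination of milk fatty acid proportions. A sketch of the form such an indicator takes; the coefficients and intercept below are made up for illustration, since the abstract reports only which fatty acids enter each predictor and the fit R2:

```python
# Sketch of a Methane1-style linear indicator; weights are hypothetical.
import numpy as np

fatty_acids = ["C17:0 anteiso", "C18:1 trans-10+11",
               "C18:1 cis-11", "C18:1 cis-13"]
coef = np.array([-2.0, 1.5, -0.8, 0.6])   # hypothetical regression weights
intercept = 22.0                          # hypothetical baseline, g/kg DMI

def methane1(fa_proportions):
    """Predicted methane yield (g/kg DMI) from milk fat proportions (g/100 g)."""
    return intercept + coef @ np.asarray(fa_proportions)

print(methane1([0.4, 1.2, 0.3, 0.1]))
```

Because the predictor is a fixed linear function of routinely measured milk traits, it can be computed herd-wide at milk-recording time, which is what makes it usable for breeding where direct methane measurement is not.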
NASA Astrophysics Data System (ADS)
Yeom, J. M.; Kim, H. O.
2014-12-01
In this study, we estimated rice paddy yield over South Korea with moderate-resolution geostationary satellite vegetation products and the GRAMI model. Rice is the most popular staple food for Asian people. In addition, the effects of climate change are becoming stronger, especially in the Asian region, where most rice is cultivated. Therefore, accurate and timely prediction of rice yield is one of the most important tasks for ensuring food security and preparing for natural disasters such as crop defoliation, drought, and pest infestation. In the present study, GOCI, the world's first Geostationary Ocean Color Imager, was used to estimate temporal vegetation indices of the rice paddy through atmospheric correction and BRDF modeling. For the atmospheric correction, a look-up table (LUT) method based on the Second Simulation of the Satellite Signal in the Solar Spectrum (6S) was used with MODIS atmospheric products (MOD04, MOD05, MOD07) from NASA's Earth Observing System Data and Information System (EOSDIS). In order to correct the surface anisotropy effect, the Ross-Thick Li-Sparse Reciprocal (RTLSR) BRDF model was run on a daily basis with a 16-day composite period. The estimated multi-temporal vegetation images were used for crop classification, with high-resolution satellite images such as RapidEye, KOMPSAT-2 and KOMPSAT-3 used to extract the proportional rice paddy area within a corresponding GOCI pixel. For the GRAMI crop model, initial conditions were determined by biweekly field work at Chonnam National University, Gwangju, Korea. The corrected GOCI vegetation products were incorporated into the GRAMI model to predict rice yield, and the predicted rice yield was compared with field measurements.
A model-updating procedure to simulate piezoelectric transducers accurately.
Piranda, B; Ballandras, S; Steichen, W; Hecart, B
2001-09-01
The use of numerical calculations based on finite element methods (FEM) has yielded significant improvements in the simulation and design of the piezoelectric transducers utilized in acoustic imaging. However, the ultimate precision of such models is directly controlled by the accuracy of material characterization. The present work is dedicated to the development of a model-updating technique adapted to the problem of piezoelectric transducers. The updating process is applied using the experimental admittance of a given structure for which a finite element analysis is performed. The mathematical developments are reported and then applied to update the entries of a FEM of a two-layer structure (a PbZrTi-PZT-ridge glued on a backing) for which measurements were available. The efficiency of the proposed approach is demonstrated, yielding the definition of a new set of constants well adapted to predict the structure response accurately. Improvement of the proposed approach, consisting of updating the material coefficients not only on the admittance but also on the impedance data, is finally discussed.
High-speed engine/component performance assessment using exergy and thrust-based methods
NASA Technical Reports Server (NTRS)
Riggins, D. W.
1996-01-01
This investigation summarizes a comparative study of two high-speed engine performance assessment techniques based on energy (available work) and thrust-potential (thrust availability). Simple flow-fields utilizing Rayleigh heat addition and one-dimensional flow with friction are used to demonstrate the fundamental inability of conventional energy techniques to predict engine component performance, aid in component design, or accurately assess flow losses. The use of the thrust-based method on these same examples demonstrates its ability to yield useful information in all these categories. Energy and thrust are related and discussed from the standpoint of their fundamental thermodynamic and fluid dynamic definitions in order to explain the differences in information obtained using the two methods. The conventional definition of energy is shown to include work which is inherently unavailable to an aerospace Brayton engine. An engine-based energy is then developed which accurately accounts for this inherently unavailable work; performance parameters based on this quantity are then shown to yield design and loss information equivalent to the thrust-based method.
NASA Astrophysics Data System (ADS)
Kim, Hyun-Tae; Romanelli, M.; Yuan, X.; Kaye, S.; Sips, A. C. C.; Frassinetti, L.; Buchanan, J.; Contributors, JET
2017-06-01
This paper presents for the first time a statistical validation of predictive TRANSP simulations of plasma temperature using two transport models, GLF23 and TGLF, over a database of 80 baseline H-mode discharges in JET-ILW. While the accuracy of the predicted T_e with TRANSP-GLF23 is affected by plasma collisionality, the dependency of predictions on collisionality is less significant when using TRANSP-TGLF, indicating that the latter model has a broader applicability across plasma regimes. TRANSP-TGLF also shows a good match of the predicted T_i with experimental measurements, allowing for a more accurate prediction of the neutron yields. The impact of the input data and assumptions prescribed in the simulations is also investigated in this paper. The statistical validation and the assessment of uncertainty level in predictive TRANSP simulations for JET-ILW-DD will constitute the basis for the extrapolation to JET-ILW-DT experiments.
Increased genomic prediction accuracy in wheat breeding using a large Australian panel.
Norman, Adam; Taylor, Julian; Tanaka, Emi; Telfer, Paul; Edwards, James; Martinant, Jean-Pierre; Kuchel, Haydn
2017-12-01
Genomic prediction accuracy within a large panel was found to be substantially higher than that previously observed in smaller populations, and also higher than QTL-based prediction. In recent years, genomic selection for wheat breeding has been widely studied, but this has typically been restricted to population sizes under 1000 individuals. To assess its efficacy in germplasm representative of commercial breeding programmes, we used a panel of 10,375 Australian wheat breeding lines to investigate the accuracy of genomic prediction for grain yield, physical grain quality and other physiological traits. To achieve this, the complete panel was phenotyped in a dedicated field trial and genotyped using a custom Axiom™ Affymetrix SNP array. A high-quality consensus map was also constructed, allowing the linkage disequilibrium present in the germplasm to be investigated. Using the complete SNP array, genomic prediction accuracies were found to be substantially higher than those previously observed in smaller populations and also more accurate compared to prediction approaches using a finite number of selected quantitative trait loci. Multi-trait genetic correlations were also assessed at an additive and residual genetic level, identifying a negative genetic correlation between grain yield and protein as well as a positive genetic correlation between grain size and test weight.
Modeling the relationships between quality and biochemical composition of fatty liver in mule ducks.
Theron, L; Cullere, M; Bouillier-Oudot, M; Manse, H; Dalle Zotte, A; Molette, C; Fernandez, X; Vitezica, Z G
2012-09-01
The fatty liver of mule ducks (i.e., French "foie gras") is the most valuable product in duck production systems. Its quality is measured by the technological yield, which is the opposite of the fat loss during cooking. The purpose of this study was to determine whether biochemical measures of fatty liver could be used to accurately predict the technological yield (TY). Ninety-one male mule ducks were bred, overfed, and slaughtered under commercial conditions. Fatty liver weight (FLW) and biochemical variables, such as DM, lipid (LIP), and protein content (PROT), were collected. To evaluate evidence for nonlinear fat loss during cooking, we compared regression models describing linear and nonlinear relations between biochemical measures and TY. We detected a significant (P = 0.02) nonlinear relation between DM and TY. Our results indicate that LIP and PROT follow a different pattern (linear) than DM and showed that LIP and PROT are nonexclusive contributing factors to TY. Other components, such as carbohydrates, other than those measured in this study, could contribute to DM. Stepwise regression for TY was performed. The traditional model with FLW was tested. The results showed that the weight of the liver is of limited value in the determination of fat loss during cooking (R(2) = 0.14). The most accurate TY prediction equation included DM (in linear and quadratic terms), FLW, and PROT (R(2) = 0.43). Biochemical measures in the fatty liver were more accurate predictors of TY than FLW. The model is useful in commercial conditions because DM, PROT, and FLW are noninvasive measures.
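The best model form reported above (TY regressed on DM in linear and quadratic terms, FLW, and PROT) can be sketched as an ordinary least-squares fit. The data and generating coefficients below are synthetic assumptions for illustration, not the study's estimates:

```python
import numpy as np

# Least-squares fit of TY ~ b0 + b1*DM + b2*DM^2 + b3*FLW + b4*PROT.
# All numeric values here are synthetic assumptions.
rng = np.random.default_rng(1)
n = 91                                # same cohort size as the study
dm = rng.uniform(55.0, 70.0, n)       # dry matter (%)
flw = rng.uniform(400.0, 700.0, n)    # fatty liver weight (g)
prot = rng.uniform(6.0, 9.0, n)       # protein content (%)
ty = (120.0 - 3.0 * dm + 0.025 * dm**2 - 0.01 * flw + 2.0 * prot
      + rng.normal(0.0, 1.5, n))      # technological yield (%)

# Design matrix with intercept, linear and quadratic DM terms.
X = np.column_stack([np.ones(n), dm, dm**2, flw, prot])
beta, *_ = np.linalg.lstsq(X, ty, rcond=None)
resid = ty - X @ beta
r2 = 1.0 - resid.var() / ty.var()
print(round(r2, 2))
```

In practice a stepwise procedure would add or drop each term based on a significance criterion; the fit above shows only the final model form.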
On the analysis of Canadian Holstein dairy cow lactation curves using standard growth functions.
López, S; France, J; Odongo, N E; McBride, R A; Kebreab, E; AlZahal, O; McBride, B W; Dijkstra, J
2015-04-01
Six classical growth functions (monomolecular, Schumacher, Gompertz, logistic, Richards, and Morgan) were fitted to individual and average (by parity) cumulative milk production curves of Canadian Holstein dairy cows. The data analyzed consisted of approximately 91,000 daily milk yield records corresponding to 122 first, 99 second, and 92 third parity individual lactation curves. The functions were fitted using nonlinear regression procedures, and their performance was assessed using goodness-of-fit statistics (coefficient of determination, residual mean squares, Akaike information criterion, and the correlation and concordance coefficients between observed and adjusted milk yields at several days in milk). Overall, all the growth functions evaluated showed an acceptable fit to the cumulative milk production curves, with the Richards equation ranking first (smallest Akaike information criterion) followed by the Morgan equation. Differences among the functions in their goodness-of-fit were enlarged when fitted to average curves by parity, where the sigmoidal functions with a variable point of inflection (Richards and Morgan) outperformed the other 4 equations. All the functions provided satisfactory predictions of milk yield (calculated from the first derivative of the functions) at different lactation stages, from early to late lactation. The Richards and Morgan equations provided the most accurate estimates of peak yield and total milk production per 305-d lactation, whereas the least accurate estimates were obtained with the logistic equation. In conclusion, classical growth functions (especially sigmoidal functions with a variable point of inflection) proved to be feasible alternatives to fit cumulative milk production curves of dairy cows, resulting in suitable statistical performance and accurate estimates of lactation traits. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
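A nonlinear regression of the kind described above can be sketched with the Gompertz cumulative curve, taking daily milk yield as its first derivative. The parameter values and data below are synthetic assumptions, not fitted Canadian Holstein records:

```python
import numpy as np
from scipy.optimize import curve_fit

# Cumulative milk production modeled with a Gompertz growth function:
#   y(t) = a * exp(-b * exp(-c * t))
# a = asymptotic total yield; b, c = shape/rate parameters (all illustrative).
def gompertz(t, a, b, c):
    return a * np.exp(-b * np.exp(-c * t))

# Daily milk yield is the first derivative of the cumulative curve.
def daily_yield(t, a, b, c):
    return a * b * c * np.exp(-c * t) * np.exp(-b * np.exp(-c * t))

rng = np.random.default_rng(0)
t = np.arange(1.0, 306.0)                 # days in milk (305-d lactation)
obs = gompertz(t, 9000.0, 3.0, 0.02) + rng.normal(0.0, 50.0, t.size)

# Nonlinear least-squares fit, then recover total yield and peak daily yield.
(a, b, c), _ = curve_fit(gompertz, t, obs, p0=(8000.0, 2.0, 0.01))
print(round(a), round(daily_yield(t, a, b, c).max(), 1))
```

The sigmoidal functions with a variable inflection point (Richards, Morgan) would be fitted the same way, with an extra shape parameter in the model function.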
Ueda, Kazuhiro; Tanaka, Toshiki; Li, Tao-Sheng; Tanaka, Nobuyuki; Hamano, Kimikazu
2009-03-01
The prediction of pulmonary functional reserve is mandatory in therapeutic decision-making for patients with resectable lung cancer, especially those with underlying lung disease. Volumetric analysis in combination with densitometric analysis of the affected lung lobe or segment with quantitative computed tomography (CT) helps to identify residual pulmonary function, although the utility of this modality needs investigation. The subjects of this prospective study were 30 patients with resectable lung cancer. A three-dimensional CT lung model was created with voxels representing normal lung attenuation (-600 to -910 Hounsfield units). Residual pulmonary function was predicted by drawing a boundary line between the lung to be preserved and that to be resected, directly on the lung model. The predicted values were correlated with the postoperative measured values. The predicted and measured values corresponded well (r=0.89, p<0.001). Although the predicted values corresponded with values predicted by simple calculation using a segment-counting method (r=0.98), there were two outliers whose pulmonary functional reserves were predicted more accurately by CT than by segment counting. The measured pulmonary functional reserves were significantly higher than the predicted values in patients with extensive emphysematous areas (<-910 Hounsfield units), but not in patients with chronic obstructive pulmonary disease. Quantitative CT yielded accurate prediction of functional reserve after lung cancer surgery and helped to identify patients whose functional reserves are likely to be underestimated. Hence, this modality should be utilized for patients with marginal pulmonary function.
NASA Astrophysics Data System (ADS)
Alrasyid, Harun; Safi, Fahrudin; Iranata, Data; Chen-Ou, Yu
2017-11-01
This research presents the prediction of the shear behavior of high-strength reinforced concrete columns using the finite element method. Experimental data for nine half-scale high-strength reinforced concrete columns were selected. These columns used a specified concrete compressive strength of 70 MPa and specified yield strengths of the longitudinal and transverse reinforcement of 685 and 785 MPa, respectively. The VecTor2 finite element software was used to simulate the shear-critical behavior of these columns. A combination of axial compression load and monotonic loading was applied in this prediction. It is demonstrated that the VecTor2 finite element software provides accurate prediction of the load-deflection response up to the peak applied load and similar behavior at post-peak load. The shear strength predictions provided by VecTor2 are slightly conservative compared to the test results.
Chuine, Isabelle; Bonhomme, Marc; Legave, Jean-Michel; García de Cortázar-Atauri, Iñaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry
2016-10-01
The onset of the growing season of trees has advanced by 2.3 days per decade during the last 40 years in temperate Europe because of global warming. The effect of temperature on plant phenology is, however, not linear, because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud endodormancy, and, on the other hand, higher temperatures are necessary to promote bud cell growth afterward. Different process-based models have been developed in the last decades to predict the date of budbreak of woody species. They predict that global warming should delay or compromise endodormancy break at the species' equatorward range limits, leading to a delay or even impossibility to flower or set new leaves. These models are classically parameterized with flowering or budbreak dates only, with no information on the endodormancy break date, because this information is very scarce. Here, we evaluated the efficiency of a set of phenological models to accurately predict the endodormancy break dates of three fruit trees. Our results show that models calibrated solely with budbreak dates usually do not accurately predict the endodormancy break date. Providing the endodormancy break date for the model parameterization results in a much more accurate prediction of the latter, with, however, a higher error than that on budbreak dates. Most importantly, we show that models not calibrated with endodormancy break dates can generate large discrepancies in forecasted budbreak dates when using climate scenarios, as compared to models calibrated with endodormancy break dates. This discrepancy increases with mean annual temperature and is therefore strongest after 2050 in the southernmost regions. Our results highlight the urgent need for massive measurements of endodormancy break dates in forest and fruit trees to yield more robust projections of phenological changes in the near future. © 2016 John Wiley & Sons Ltd.
Flared landing approach flying qualities. Volume 2: Appendices
NASA Technical Reports Server (NTRS)
Weingarten, Norman C.; Berthe, Charles J., Jr.; Rynaski, Edmund G.; Sarrafian, Shahan K.
1986-01-01
An in-flight research study was conducted utilizing the USAF/Total In-Flight Simulator (TIFS) to investigate longitudinal flying qualities for the flared landing approach phase of flight. A consistent set of data were generated for: determining what kind of command response the pilot prefers/requires in order to flare and land an aircraft with precision, and refining a time history criterion that took into account all the necessary variables and their characteristics that would accurately predict flying qualities. Seven evaluation pilots participated, representing NASA Langley, NASA Dryden, Calspan, Boeing, Lockheed, and DFVLR (Braunschweig, Germany). The results of the first part of the study provide guidelines to the flight control system designer, using MIL-F-8785-(C) as a guide, that yield the dynamic behavior pilots prefer in flared landings. The results of the second part provide the flying qualities engineer with a derived flying qualities predictive tool which appears to be highly accurate. This time-domain predictive flying qualities criterion was applied to the flight data as well as six previous flying qualities studies, and the results indicate that the criterion predicted the flying qualities level 81% of the time and the Cooper-Harper pilot rating, within ±1, 60% of the time.
Flared landing approach flying qualities. Volume 1: Experiment design and analysis
NASA Technical Reports Server (NTRS)
Weingarten, Norman C.; Berthe, Charles J., Jr.; Rynaski, Edmund G.; Sarrafian, Shahan K.
1986-01-01
An in-flight research study was conducted utilizing the USAF Total In-Flight Simulator (TIFS) to investigate longitudinal flying qualities for the flared landing approach phase of flight. The purpose of the experiment was to generate a consistent set of data for: (1) determining what kind of commanded response the pilot prefers in order to flare and land an airplane with precision, and (2) refining a time history criterion that took into account all the necessary variables and their characteristics that would accurately predict flying qualities. The result of the first part provides guidelines to the flight control system designer, using MIL-F-8785-(C) as a guide, that yield the dynamic behavior pilots prefer in flared landings. The results of the second part provide the flying qualities engineer with a newly derived flying qualities predictive tool which appears to be highly accurate. This time-domain predictive flying qualities criterion was applied to the flight data as well as six previous flying qualities studies, and the results indicate that the criterion predicted the flying qualities level 81% of the time and the Cooper-Harper pilot rating, within ±1, 60% of the time.
Maharlou, Hamidreza; Niakan Kalhori, Sharareh R; Shahbazi, Shahrbanoo; Ravangard, Ramin
2018-04-01
Accurate prediction of patients' length of stay is highly important. This study compared the performance of artificial neural network and adaptive neuro-fuzzy system algorithms to predict patients' length of stay in intensive care units (ICU) after cardiac surgery. A cross-sectional, analytical, and applied study was conducted. The required data were collected from 311 cardiac patients admitted to intensive care units after surgery at three hospitals of Shiraz, Iran, through a non-random convenience sampling method during the second quarter of 2016. Following the initial processing of influential factors, models were created and evaluated. The results showed that the adaptive neuro-fuzzy algorithm (with mean squared error [MSE] = 7 and R = 0.88) resulted in the creation of a more precise model than the artificial neural network (with MSE = 21 and R = 0.60). The adaptive neuro-fuzzy algorithm produces a more accurate model as it applies both the capabilities of a neural network architecture and experts' knowledge as a hybrid algorithm. It identifies nonlinear components, yielding remarkable results for predicting the length of stay, which is a useful calculation output to support ICU management, enabling higher quality of administration and cost reduction.
Flight test evaluation of predicted light aircraft drag, performance, and stability
NASA Technical Reports Server (NTRS)
Smetana, F. O.; Fox, S. R.
1979-01-01
A technique was developed which permits simultaneous extraction of complete lift, drag, and thrust power curves from time histories of a single aircraft maneuver such as a pullup (from V_max to V_stall) and pushover (to V_max for level flight). The technique is an extension to non-linear equations of motion of the parameter identification methods of Iliff and Taylor and includes provisions for internal data compatibility improvement as well. The technique was shown to be capable of correcting random errors in the most sensitive data channel and yielding highly accurate results. This technique was applied to flight data taken on the ATLIT aircraft. The drag and power values obtained from the initial least squares estimate are about 15% less than the 'true' values. If one takes into account the rather dirty wing and fuselage existing at the time of the tests, however, the predictions are reasonably accurate. The steady state lift measurements agree well with the extracted values only for small values of alpha. The predicted value of the lift at alpha = 0 is about 33% below that found in steady state tests, while the predicted lift slope is 13% below the steady state value.
Development of a Physiologically-Based Pharmacokinetic Model of the Rat Central Nervous System
Badhan, Raj K. Singh; Chenel, Marylore; Penny, Jeffrey I.
2014-01-01
Central nervous system (CNS) drug disposition is dictated by a drug’s physicochemical properties and its ability to permeate physiological barriers. The blood–brain barrier (BBB), blood-cerebrospinal fluid barrier and centrally located drug transporter proteins influence drug disposition within the central nervous system. Attainment of adequate brain-to-plasma and cerebrospinal fluid-to-plasma partitioning is important in determining the efficacy of centrally acting therapeutics. We have developed a physiologically-based pharmacokinetic model of the rat CNS which incorporates brain interstitial fluid (ISF), choroidal epithelial and total cerebrospinal fluid (CSF) compartments and accurately predicts CNS pharmacokinetics. The model yielded reasonable predictions of unbound brain-to-plasma partition ratio (Kpuu,brain) and CSF:plasma ratio (CSF:Plasmau) using a series of in vitro permeability and unbound fraction parameters. When using in vitro permeability data obtained from L-mdr1a cells to estimate rat in vivo permeability, the model successfully predicted, to within 4-fold, Kpuu,brain and CSF:Plasmau for 81.5% of compounds simulated. The model presented allows for simultaneous simulation and analysis of both brain biophase and CSF to accurately predict CNS pharmacokinetics from preclinical drug parameters routinely available during discovery and development pathways. PMID:24647103
Regnier, D.; Dubray, N.; Schunck, N.; ...
2016-05-13
Here, accurate knowledge of fission fragment yields is an essential ingredient of numerous applications ranging from the formation of elements in the r process to fuel cycle optimization for nuclear energy. The need for a predictive theory applicable where no data are available, together with the variety of potential applications, is an incentive to develop a fully microscopic approach to fission dynamics.
Hoffman, Haydn; Lee, Sunghoon I; Garst, Jordan H; Lu, Derek S; Li, Charles H; Nagasawa, Daniel T; Ghalehsari, Nima; Jahanforouz, Nima; Razaghy, Mehrdad; Espinal, Marie; Ghavamrezaii, Amir; Paak, Brian H; Wu, Irene; Sarrafzadeh, Majid; Lu, Daniel C
2015-09-01
This study introduces the use of multivariate linear regression (MLR) and support vector regression (SVR) models to predict postoperative outcomes in a cohort of patients who underwent surgery for cervical spondylotic myelopathy (CSM). Currently, predicting outcomes after surgery for CSM remains a challenge. We recruited patients who had a diagnosis of CSM and required decompressive surgery with or without fusion. Fine motor function was tested preoperatively and postoperatively with a handgrip-based tracking device that has been previously validated, yielding mean absolute accuracy (MAA) results for two tracking tasks (sinusoidal and step). All patients completed Oswestry disability index (ODI) and modified Japanese Orthopaedic Association questionnaires preoperatively and postoperatively. Preoperative data was utilized in MLR and SVR models to predict postoperative ODI. Predictions were compared to the actual ODI scores with the coefficient of determination (R(2)) and mean absolute difference (MAD). From this, 20 patients met the inclusion criteria and completed follow-up at least 3 months after surgery. With the MLR model, a combination of the preoperative ODI score, preoperative MAA (step function), and symptom duration yielded the best prediction of postoperative ODI (R(2)=0.452; MAD=0.0887; p=1.17 × 10(-3)). With the SVR model, a combination of preoperative ODI score, preoperative MAA (sinusoidal function), and symptom duration yielded the best prediction of postoperative ODI (R(2)=0.932; MAD=0.0283; p=5.73 × 10(-12)). The SVR model was more accurate than the MLR model. The SVR can be used preoperatively in risk/benefit analysis and the decision to operate. Copyright © 2015 Elsevier Ltd. All rights reserved.
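The two model families compared above can be sketched side by side on synthetic data. The variable names and generating coefficients below are assumptions standing in for the cohort's preoperative ODI, tracking accuracy (MAA), and symptom duration, not the actual patient data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.metrics import r2_score

# Synthetic stand-in for the study's predictors (all values are assumptions).
rng = np.random.default_rng(42)
n = 20
X = np.column_stack([
    rng.uniform(20.0, 60.0, n),   # preoperative ODI score
    rng.uniform(0.0, 0.2, n),     # preoperative MAA
    rng.uniform(1.0, 36.0, n),    # symptom duration (months)
])
y = 0.6 * X[:, 0] - 40.0 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0.0, 2.0, n)

mlr = LinearRegression().fit(X, y)            # multivariate linear regression
svr = SVR(kernel="rbf", C=100.0).fit(X, y)    # support vector regression

# Report R^2 and mean absolute difference for each model.
for name, model in (("MLR", mlr), ("SVR", svr)):
    pred = model.predict(X)
    print(name, round(r2_score(y, pred), 3), round(np.mean(np.abs(y - pred)), 3))
```

With only 20 patients, out-of-sample validation (e.g. leave-one-out) would be needed to compare the models fairly; the in-sample scores above only illustrate the metrics used.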
Physics-based enzyme design: predicting binding affinity and catalytic activity.
Sirin, Sarah; Pearlman, David A; Sherman, Woody
2014-12-01
Computational enzyme design is an emerging field that has yielded promising success stories, but where numerous challenges remain. Accurate methods to rapidly evaluate possible enzyme design variants could provide significant value when combined with experimental efforts by reducing the number of variants needed to be synthesized and speeding the time to reach the desired endpoint of the design. To that end, extending our computational methods to model the fundamental physical-chemical principles that regulate activity in a protocol that is automated and accessible to a broad population of enzyme design researchers is essential. Here, we apply a physics-based implicit solvent MM-GBSA scoring approach to enzyme design and benchmark the computational predictions against experimentally determined activities. Specifically, we evaluate the ability of MM-GBSA to predict changes in affinity for a steroid binder protein, catalytic turnover for a Kemp eliminase, and catalytic activity for α-Gliadin peptidase variants. Using the enzyme design framework developed here, we accurately rank the most experimentally active enzyme variants, suggesting that this approach could provide enrichment of active variants in real-world enzyme design applications. © 2014 Wiley Periodicals, Inc.
Govind Rajan, Ananth; Strano, Michael S; Blankschtein, Daniel
2018-04-05
Hexagonal boron nitride (hBN) is an up-and-coming two-dimensional material, with applications in electronic devices, tribology, and separation membranes. Herein, we utilize density-functional-theory-based ab initio molecular dynamics (MD) simulations and lattice dynamics calculations to develop a classical force field (FF) for modeling hBN. The FF predicts the crystal structure, elastic constants, and phonon dispersion relation of hBN with good accuracy and exhibits remarkable agreement with the interlayer binding energy predicted by random phase approximation calculations. We demonstrate the importance of including Coulombic interactions but excluding 1-4 intrasheet interactions to obtain the correct phonon dispersion relation. We find that improper dihedrals do not modify the bulk mechanical properties and the extent of thermal vibrations in hBN, although they impact its flexural rigidity. Combining the FF with the accurate TIP4P/Ice water model yields excellent agreement with interaction energies predicted by quantum Monte Carlo calculations. Our FF should enable an accurate description of hBN interfaces in classical MD simulations.
Complete fold annotation of the human proteome using a novel structural feature space.
Middleton, Sarah A; Illuminati, Joseph; Kim, Junhyong
2017-04-13
Recognition of protein structural fold is the starting point for many structure prediction tools and protein function inference. Fold prediction is computationally demanding and recognizing novel folds is difficult such that the majority of proteins have not been annotated for fold classification. Here we describe a new machine learning approach using a novel feature space that can be used for accurate recognition of all 1,221 currently known folds and inference of unknown novel folds. We show that our method achieves better than 94% accuracy even when many folds have only one training example. We demonstrate the utility of this method by predicting the folds of 34,330 human protein domains and showing that these predictions can yield useful insights into potential biological function, such as prediction of RNA-binding ability. Our method can be applied to de novo fold prediction of entire proteomes and identify candidate novel fold families.
Chain Ends and the Ultimate Tensile Strength of Polyethylene Fibers
NASA Astrophysics Data System (ADS)
O'Connor, Thomas C.; Robbins, Mark O.
Determining the tensile yield mechanisms of oriented polymer fibers remains a challenging problem in polymer mechanics. By maximizing the alignment and crystallinity of polyethylene (PE) fibers, tensile strengths σ ~ 6-7 GPa have been achieved. While impressive, first-principles calculations predict carbon backbone bonds would allow strengths four times higher (σ ~ 20 GPa) before breaking. The reduction in strength is caused by crystal defects like chain ends, which allow fibers to yield by chain slip in addition to bond breaking. We use large scale molecular dynamics (MD) simulations to determine the tensile yield mechanism of orthorhombic PE crystals with finite chains spanning 10^2-10^4 carbons in length. The yield stress σ_y saturates for long chains at ~6.3 GPa, agreeing well with experiments. Chains do not break but always yield by slip, after nucleation of 1D dislocations at chain ends. Dislocations are accurately described by a Frenkel-Kontorova model, parametrized by the mechanical properties of an ideal crystal. We compute a dislocation core size ξ = 25.24 Å and determine the high and low strain rate limits of σ_y. Our results suggest characterizing such 1D dislocations is an efficient method for predicting fiber strength. This research was performed within the Center for Materials in Extreme Dynamic Environments (CMEDE) under the Hopkins Extreme Materials Institute at Johns Hopkins University. Financial support was provided by Grant W911NF-12-2-0022.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dholabhai, Pratik P., E-mail: pratik.dholabhai@asu.edu; Anwar, Shahriar, E-mail: anwar@asu.edu; Adams, James B., E-mail: jim.adams@asu.edu
A kinetic lattice Monte Carlo (KLMC) model is developed for investigating oxygen vacancy diffusion in praseodymium-doped ceria. The current approach uses a database of activation energies for oxygen vacancy migration, calculated using first-principles, for various migration pathways in praseodymium-doped ceria. Since the first-principles calculations revealed significant vacancy-vacancy repulsion, we investigate the importance of that effect by conducting simulations with and without a repulsive interaction. Initially, as dopant concentrations increase, vacancy concentration and thus conductivity increases. However, at higher concentrations, vacancies interfere and repel one another, and dopants trap vacancies, creating a 'traffic jam' that decreases conductivity, which is consistent with the experimental findings. The modeled effective activation energy for vacancy migration slightly increased with increasing dopant concentration, in qualitative agreement with the experiment. The current methodology, comprising a blend of first-principles calculations and the KLMC model, provides a very powerful fundamental tool for predicting the optimal dopant concentration in ceria-related materials. Graphical abstract: Ionic conductivity in praseodymium-doped ceria as a function of dopant concentration calculated using the kinetic lattice Monte Carlo vacancy-repelling model, which predicts the optimal composition for achieving maximum conductivity. Research highlights: The KLMC method calculates the accurate time-dependent diffusion of oxygen vacancies. The KLMC-VR model predicts a dopant concentration of ~15-20% to be optimal in PDC. At higher dopant concentrations, vacancies interfere and repel one another, and dopants trap vacancies. The activation energy for vacancy migration increases as a function of dopant content.
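The kinetic Monte Carlo idea behind the model above can be sketched as a toy 1D vacancy walk with thermally activated hop rates. This is only an illustration of the method: the study used a 3D ceria lattice with first-principles barriers and vacancy-vacancy repulsion, whereas the barrier, attempt frequency, and temperature below are assumed values:

```python
import math
import random

# Toy 1D kinetic lattice Monte Carlo: a single vacancy hops left or right
# with an Arrhenius rate, and time advances by exponential waiting times.
def kmc_walk(n_steps, barrier_ev, temp_k, seed=0):
    rng = random.Random(seed)
    kb = 8.617e-5                  # Boltzmann constant (eV/K)
    nu = 1.0e13                    # attempt frequency (1/s), assumed
    rate = nu * math.exp(-barrier_ev / (kb * temp_k))
    pos, t = 0, 0.0
    for _ in range(n_steps):
        pos += rng.choice((-1, 1))                        # equal-rate hops
        t += -math.log(1.0 - rng.random()) / (2.0 * rate) # waiting time
    return pos, t

pos, t = kmc_walk(n_steps=100_000, barrier_ev=0.5, temp_k=800.0)
print(pos, t)
```

In the full model, each vacancy's hop barriers depend on the local dopant and vacancy configuration, which is what produces the conductivity maximum versus dopant concentration.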
Developing a stochastic traffic volume prediction model for public-private partnership projects
NASA Astrophysics Data System (ADS)
Phong, Nguyen Thanh; Likhitruangsilp, Veerasak; Onishi, Masamitsu
2017-11-01
Transportation projects require an enormous amount of capital investment resulting from their tremendous size, complexity, and risk. Due to the limitation of public finances, the private sector is invited to participate in transportation project development. The private sector can entirely or partially invest in transportation projects in the form of the Public-Private Partnership (PPP) scheme, which has been an attractive option for several developing countries, including Vietnam. There are many factors affecting the success of PPP projects. The accurate prediction of traffic volume is considered one of the key success factors of PPP transportation projects. However, only a few studies have investigated how to predict traffic volume over a long period of time. Moreover, conventional traffic volume forecasting methods are usually based on deterministic models, which predict a single value of traffic volume but do not consider risk and uncertainty. This knowledge gap makes it difficult for concessionaires to estimate PPP transportation project revenues accurately. The objective of this paper is to develop a probabilistic traffic volume prediction model. First, traffic volumes were estimated following the Geometric Brownian Motion (GBM) process. The Monte Carlo technique is then applied to simulate different scenarios. The results show that this stochastic approach can systematically analyze variations in the traffic volume and yield more reliable estimates for PPP projects.
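A GBM-based Monte Carlo traffic forecast of the kind described can be sketched as follows; the drift, volatility, and horizon are hypothetical values, not the paper's fitted parameters:

```python
import numpy as np

def simulate_traffic_gbm(v0, mu, sigma, years, n_paths, seed=0):
    """Simulate annual traffic volumes as Geometric Brownian Motion with the
    exact log-normal update V_{t+1} = V_t * exp((mu - sigma^2/2)dt + sigma*sqrt(dt)*Z)."""
    rng = np.random.default_rng(seed)
    dt = 1.0  # one-year time steps
    z = rng.standard_normal((n_paths, years))
    log_inc = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    return v0 * np.exp(np.cumsum(log_inc, axis=1))

# Hypothetical concession: 1M vehicles/yr start, 5% drift, 15% volatility
paths = simulate_traffic_gbm(v0=1.0e6, mu=0.05, sigma=0.15, years=25, n_paths=10_000)
p10, p50, p90 = np.percentile(paths[:, -1], [10, 50, 90])  # year-25 volume band
```

The percentile band, rather than a single deterministic forecast, is what lets a concessionaire attach risk levels to revenue estimates.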
On the prediction of free turbulent jets with swirl using a quadratic pressure-strain model
NASA Technical Reports Server (NTRS)
Younis, Bassam A.; Gatski, Thomas B.; Speziale, Charles G.
1994-01-01
Data from free turbulent jets both with and without swirl are used to assess the performance of the pressure-strain model of Speziale, Sarkar and Gatski which is quadratic in the Reynolds stresses. Comparative predictions are also obtained with the two versions of the Launder, Reece and Rodi model which are linear in the same terms. All models are used as part of a complete second-order closure based on the solution of differential transport equations for each non-zero component of the Reynolds stress tensor together with an equation for the scalar energy dissipation rate. For non-swirling jets, the quadratic model underestimates the measured spreading rate of the plane jet but yields a better prediction for the axisymmetric case without resolving the plane jet/round jet anomaly. For the swirling axisymmetric jet, the same model accurately reproduces the effects of swirl on both the mean flow and the turbulence structure in sharp contrast with the linear models which yield results that are in serious error. The reasons for these differences are discussed.
Remote Estimation of Vegetation Fraction and Yield in Oilseed Rape with Unmanned Aerial Vehicle Data
NASA Astrophysics Data System (ADS)
Peng, Y.; Fang, S.; Liu, K.; Gong, Y.
2017-12-01
This study developed an approach for remote estimation of Vegetation Fraction (VF) and yield in oilseed rape, a crop species with conspicuous flowers during reproduction. Canopy reflectance in the green, red, red-edge and NIR bands was obtained by a camera system mounted on an unmanned aerial vehicle (UAV) while oilseed rape was in the vegetative growth and flowering stages. The relationship of several widely used Vegetation Indices (VI) to VF was tested and found to differ between phenology stages. At the same VF, when oilseed rape was flowering, canopy reflectance increased in all bands and the tested VIs decreased. Therefore, two algorithms to estimate VF were calibrated, one for samples during vegetative growth and the other for samples during the flowering stage. During the flowering season, we also explored the potential of using canopy reflectance or VIs to estimate Flower Fraction (FF) in oilseed rape. Based on FF estimates, rape yield can be estimated using canopy reflectance data. Our model was validated on oilseed rape planted under different nitrogen fertilization applications and in different phenology stages. The results showed that it was able to predict VF and FF in oilseed rape with estimation errors below 6% and predict yield with estimation error below 20%.
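The behavior described, flowering raising reflectance in all bands and depressing ratio indices, is easy to see with standard band-ratio index definitions (the formulas below are the conventional ones, not necessarily the exact VIs tested in the study, and the reflectance values are illustrative):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from band reflectances."""
    return (nir - red) / (nir + red)

def ndre(nir, red_edge):
    """Red-edge analogue of NDVI, useful when red reflectance saturates."""
    return (nir - red_edge) / (nir + red_edge)

# Flowering raises reflectance in all bands, which pushes ratio indices down
# even at the same vegetation fraction:
vi_vegetative = ndvi(nir=0.45, red=0.05)  # dense green canopy
vi_flowering = ndvi(nir=0.50, red=0.15)   # same canopy with yellow flowers
```

This is why a single VI-to-VF calibration fails across phenology stages and the study fits separate algorithms for the vegetative and flowering periods.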
Modeling Long-Term Corn Yield Response to Nitrogen Rate and Crop Rotation
Puntel, Laila A.; Sawyer, John E.; Barker, Daniel W.; Dietzel, Ranae; Poffenbarger, Hanna; Castellano, Michael J.; Moore, Kenneth J.; Thorburn, Peter; Archontoulis, Sotirios V.
2016-01-01
Improved prediction of optimal N fertilizer rates for corn (Zea mays L.) can reduce N losses and increase profits. We tested the ability of the Agricultural Production Systems sIMulator (APSIM) to simulate corn and soybean (Glycine max L.) yields and the economic optimum N rate (EONR) using a 16-year field-experiment dataset from central Iowa, USA, that included two crop sequences (continuous corn and soybean-corn) and five N fertilizer rates (0, 67, 134, 201, and 268 kg N ha-1) applied to corn. Our objectives were to: (a) quantify model prediction accuracy before and after calibration, and report calibration steps; (b) compare crop model-based techniques for estimating the optimal N rate for corn; and (c) utilize the calibrated model to explain factors causing year-to-year variability in yield and optimal N. Results indicated that the model simulated long-term crop yield response to N well (relative root mean square error, RRMSE, of 19.6% before and 12.3% after calibration), which provided strong evidence that important soil and crop processes were accounted for in the model. The prediction of EONR was more complex and had greater uncertainty than the prediction of crop yield (RRMSE of 44.5% before and 36.6% after calibration). For long-term site mean EONR predictions, both calibrated and uncalibrated versions can be used, as the 16-year mean differences in EONRs were within the historical N rate error range (40-50 kg N ha-1). However, for accurate year-by-year simulation of EONR the calibrated version should be used. Model analysis revealed that higher EONR values in years with above-normal spring precipitation were caused by an exponential increase in N loss (denitrification and leaching) with precipitation. We concluded that long-term experimental data were valuable in testing and refining APSIM predictions.
The model can be used as a tool to assist N management guidelines in the US Midwest and we identified five avenues on how the model can add value toward agronomic, economic, and environmental sustainability. PMID:27891133
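The RRMSE statistic used above to score the simulations is straightforward to compute; a minimal sketch:

```python
import numpy as np

def rrmse(observed, simulated):
    """Relative root mean square error (%): RMSE normalized by the
    mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    rmse = np.sqrt(np.mean((simulated - observed) ** 2))
    return 100.0 * rmse / observed.mean()

# Example: simulated yields off by +/-1 around observations averaging 10
error_pct = rrmse([10.0, 10.0], [11.0, 9.0])  # 10%
```

Normalizing by the observed mean is what makes error levels comparable between quantities on very different scales, such as yield (Mg ha-1) and EONR (kg N ha-1).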
The remarkable ability of turbulence model equations to describe transition
NASA Technical Reports Server (NTRS)
Wilcox, David C.
1992-01-01
This paper demonstrates how well the k-omega turbulence model describes the nonlinear growth of flow instabilities from laminar flow into the turbulent flow regime. Viscous modifications are proposed for the k-omega model that yield close agreement with measurements and with Direct Numerical Simulation results for channel and pipe flow. These modifications permit prediction of subtle sublayer details such as maximum dissipation at the surface, k ~ y^2 as y -> 0, and the sharp peak value of k near the surface. With two transition-specific closure coefficients, the model equations accurately predict transition for an incompressible flat-plate boundary layer. The analysis also shows why the k-epsilon model is so difficult to use for predicting transition.
Brownian systems with spatially inhomogeneous activity
NASA Astrophysics Data System (ADS)
Sharma, A.; Brader, J. M.
2017-09-01
We generalize the Green-Kubo approach, previously applied to bulk systems of spherically symmetric active particles [J. Chem. Phys. 145, 161101 (2016), 10.1063/1.4966153], to include spatially inhomogeneous activity. The method is applied to predict the spatial dependence of the average orientation per particle and the density. The average orientation is given by an integral over the self part of the Van Hove function and a simple Gaussian approximation to this quantity yields an accurate analytical expression. Taking this analytical result as input to a dynamic density functional theory approximates the spatial dependence of the density in good agreement with simulation data. All theoretical predictions are validated using Brownian dynamics simulations.
NASA Astrophysics Data System (ADS)
Arshad, Muhammad; Ullah, Saleem; Khurshid, Khurram; Ali, Asad
2017-10-01
Leaf Water Content (LWC) is an essential constituent of plant leaves that determines vegetation health and productivity. An accurate and timely measurement of water content is crucial for planning irrigation, forecasting drought and predicting woodland fire. The retrieval of LWC from the Visible to Shortwave Infrared (VSWIR: 0.4-2.5 μm) window has been extensively investigated, but little has been done in the Mid and Thermal Infrared (MIR and TIR: 2.5-14.0 μm) windows of the electromagnetic spectrum. This study focuses on retrieval of LWC from the Mid and Thermal Infrared using a Genetic Algorithm (GA) integrated with Partial Least Squares Regression (PLSR). The Genetic Algorithm fused with PLSR selects spectral wavebands with high predictive performance, i.e., yielding high adjusted R2 and low RMSE. In our case, GA-PLSR selected eight variables (bands) and yielded highly accurate models with an adjusted R2 of 0.93 and RMSEcv equal to 7.1%. The study also demonstrated that MIR is more sensitive than TIR to variation in LWC. However, the combined use of MIR and TIR spectra enhances the predictive performance in retrieval of LWC. The integration of the Genetic Algorithm and PLSR not only increases the estimation precision by selecting the most sensitive spectral bands but also helps in identifying the important spectral regions for quantifying water stress in vegetation. The findings of this study will allow future space missions (like HyspIRI) to position wavebands at sensitive regions for characterizing vegetation stress.
Can phenological models predict tree phenology accurately under climate change conditions?
NASA Astrophysics Data System (ADS)
Chuine, Isabelle; Bonhomme, Marc; Legave, Jean Michel; García de Cortázar-Atauri, Inaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry
2014-05-01
The onset of the growing season of trees has advanced globally by 2.3 days/decade during the last 50 years because of global warming, and this trend is predicted to continue according to climate forecasts. The effect of temperature on plant phenology is however not linear, because temperature has a dual effect on bud development. On the one hand, low temperatures are necessary to break bud dormancy; on the other hand, higher temperatures are necessary to promote bud cell growth afterwards. Increasing phenological changes in temperate woody species have strong impacts on forest tree distribution and productivity, as well as on crop cultivation areas. Accurate predictions of tree phenology are therefore a prerequisite to understand and foresee the impacts of climate change on forests and agrosystems. Different process-based models have been developed in the last two decades to predict the date of budburst or flowering of woody species. There are two main families: (1) one-phase models, which consider only the ecodormancy phase and make the assumption that endodormancy is always broken before adequate climatic conditions for cell growth occur; and (2) two-phase models, which consider both the endodormancy and ecodormancy phases and predict a date of dormancy break which varies from year to year. So far, one-phase models have been able to accurately predict tree bud break and flowering under historical climate. However, because they do not consider what happens prior to ecodormancy, and especially the possible negative effect of winter temperature warming on dormancy break, it seems unlikely that they can provide accurate predictions under future climate conditions. It is indeed well known that a lack of low temperature results in abnormal patterns of bud break and development in temperate fruit trees. Accurate modelling of the dormancy break date has thus become a major issue in phenology modelling.
Two-phase phenological models predict that global warming should delay or compromise dormancy break at the species' equatorward range limits, leading to a delay in, or even the impossibility of, flowering or setting new leaves. These models are classically parameterized with flowering or budburst dates only, with no information on the dormancy break date, because this information is very scarce. We evaluated the efficiency of a set of process-based phenological models in accurately predicting the dormancy break dates of four fruit trees. Our results show that models calibrated solely with flowering or budburst dates do not accurately predict the dormancy break date. Providing the dormancy break date for model parameterization results in much more accurate simulation of the latter, with however a higher error than that on flowering or bud break dates. Most importantly, we also show that models not calibrated with dormancy break dates can generate significant differences in forecasted flowering or bud break dates when using climate scenarios. Our results point to the urgent need for massive measurements of dormancy break dates in forest and fruit trees to yield more robust projections of phenological changes in the near future.
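A sequential two-phase (chilling then forcing) model of the family discussed can be sketched as below; the thresholds and requirement values are illustrative, not fitted parameters from the study:

```python
def predict_budburst(daily_temps, t_base=5.0, chill_t=7.2, chill_req=70, forc_req=200):
    """Sequential two-phase model: accumulate chilling days until the
    dormancy-break requirement is met, then growing-degree-days until the
    forcing requirement is met. Returns (dormancy_break_day, budburst_day)
    as day indices, or None if either requirement is never satisfied."""
    chill, day, n = 0.0, 0, len(daily_temps)
    # Phase 1 (endodormancy): one chill unit per day below chill_t
    while day < n and chill < chill_req:
        if daily_temps[day] < chill_t:
            chill += 1.0
        day += 1
    if chill < chill_req:
        return None
    dormancy_break = day
    # Phase 2 (ecodormancy): growing-degree-days above t_base
    forcing = 0.0
    while day < n and forcing < forc_req:
        forcing += max(0.0, daily_temps[day] - t_base)
        day += 1
    if forcing < forc_req:
        return None
    return dormancy_break, day
```

A one-phase model would skip Phase 1 entirely, which is exactly why it cannot represent a warming-induced chilling deficit delaying dormancy break.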
Progress in hypersonic turbulence modeling
NASA Technical Reports Server (NTRS)
Wilcox, David C.
1991-01-01
A compressibility modification is developed for k-omega (Wilcox, 1988) and k-epsilon (Jones and Launder, 1972) models, that is similar to those of Sarkar et al. (1989) and Zeman (1990). Results of the perturbation solution for the compressible wall layer demonstrate why the Sarkar and Zeman terms yield inaccurate skin friction for the flat-plate boundary layer. A new compressibility term is developed which permits accurate predictions of the compressible mixing layer, flat-plate boundary layer, and shock separated flows.
Temperature-Dependent Kinetic Model for Nitrogen-Limited Wine Fermentations▿
Coleman, Matthew C.; Fish, Russell; Block, David E.
2007-01-01
A physical and mathematical model for wine fermentation kinetics was adapted to include the influence of temperature, perhaps the most critical factor influencing fermentation kinetics. The model was based on flask-scale white wine fermentations at different temperatures (11 to 35°C) and different initial concentrations of sugar (265 to 300 g/liter) and nitrogen (70 to 350 mg N/liter). The results show that fermentation temperature and inadequate levels of nitrogen will cause stuck or sluggish fermentations. Model parameters representing cell growth rate, sugar utilization rate, and the inactivation rate of cells in the presence of ethanol are highly temperature dependent. All other variables (yield coefficient of cell mass to utilized nitrogen, yield coefficient of ethanol to utilized sugar, Monod constant for nitrogen-limited growth, and Michaelis-Menten-type constant for sugar transport) were determined to vary insignificantly with temperature. The resulting mathematical model accurately predicts the observed wine fermentation kinetics with respect to different temperatures and different initial conditions, including data from fermentations not used for model development. This is the first wine fermentation model that accurately predicts a transition from sluggish to normal to stuck fermentations as temperature increases from 11 to 35°C. Furthermore, this comprehensive model provides insight into combined effects of time, temperature, and ethanol concentration on yeast (Saccharomyces cerevisiae) activity and physiology. PMID:17616615
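A nitrogen-limited fermentation model of this structure can be sketched as a small ODE system; the rate and yield coefficients below are illustrative placeholders, not the paper's fitted, temperature-dependent values:

```python
import numpy as np
from scipy.integrate import solve_ivp

def wine_fermentation(t, y, mu_max, k_d, nu_max, K_N, K_S, Y_XN, Y_ES):
    """Monod growth on nitrogen, Michaelis-Menten sugar transport, and
    ethanol-dependent cell inactivation."""
    X, N, S, E = y                       # biomass, nitrogen, sugar, ethanol (g/L)
    mu = mu_max * N / (K_N + N)          # Monod growth rate on nitrogen
    growth = mu * X
    death = k_d * E * X                  # inactivation scales with ethanol
    dS = -nu_max * S / (K_S + S) * X     # Michaelis-Menten sugar uptake
    return [growth - death, -growth / Y_XN, dS, -Y_ES * dS]

# 200 h fermentation from 250 g/L sugar and 0.3 g/L assimilable nitrogen
sol = solve_ivp(wine_fermentation, (0.0, 200.0), [0.1, 0.3, 250.0, 0.0],
                args=(0.3, 0.002, 1.0, 0.05, 10.0, 20.0, 0.45))
```

Making `mu_max`, `nu_max`, and `k_d` functions of temperature, as the paper's fits indicate, is what lets a single model span the sluggish-to-stuck transition from 11 to 35°C.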
Liu, Xiaojun; Zhang, Ke; Zhang, Zeyu; Cao, Qiang; Lv, Zunfu; Yuan, Zhaofeng; Tian, Yongchao; Cao, Weixing; Zhu, Yan
2017-01-01
Canopy chlorophyll density (Chl) has a pivotal role in diagnosing crop growth and nutrition status. The purpose of this study was to develop Chl-based models for estimating N status and predicting grain yield of rice (Oryza sativa L.) from leaf area index (LAI) and the chlorophyll concentration of the upper leaves. Six field experiments were conducted in Jiangsu Province of East China during 2007, 2008, 2009, 2013, and 2014. Different N rates were applied to generate contrasting conditions of N availability in six Japonica cultivars (9915, 27123, Wuxiangjing 14, Wuyunjing 19, Yongyou 8, and Wuyunjing 24) and two Indica cultivars (Liangyoupei 9, YLiangyou 1). The SPAD values of the four uppermost leaves and LAI were measured from the tillering to flowering growth stages. Two N indicators, leaf N accumulation (LNA) and plant N accumulation (PNA), were measured. The LAI values estimated by the LAI-2000 and LI-3050C were compared and calibrated with a conversion equation. A linear regression analysis showed significant relationships between Chl value and the N indicators; the equations were as follows: PNA = (0.092 × Chl) − 1.179 (R2 = 0.94, P < 0.001, relative root mean square error (RRMSE) = 0.196), LNA = (0.052 × Chl) − 0.269 (R2 = 0.93, P < 0.001, RRMSE = 0.185). A standardized method was used to quantify the correlation between Chl value and grain yield: normalized yield = (0.601 × normalized Chl) + 0.400 (R2 = 0.81, P < 0.001, RRMSE = 0.078). Independent experimental data also validated the use of the Chl value to accurately estimate rice N status and predict grain yield. PMID:29163568
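The fitted linear models reported in the abstract can be applied directly; a minimal sketch using the paper's published coefficients (units follow the paper's definitions):

```python
def pna_from_chl(chl):
    """Plant N accumulation from canopy chlorophyll density, using the
    fitted model PNA = 0.092*Chl - 1.179 (R2 = 0.94)."""
    return 0.092 * chl - 1.179

def lna_from_chl(chl):
    """Leaf N accumulation from canopy chlorophyll density, using the
    fitted model LNA = 0.052*Chl - 0.269 (R2 = 0.93)."""
    return 0.052 * chl - 0.269
```

Because both relationships are linear in Chl, the diagnostic reduces to one canopy measurement (SPAD of the upper leaves times LAI) per plot.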
NASA Astrophysics Data System (ADS)
Du, J.; Kimball, J. S.; Jones, L. A.; Watts, J. D.
2016-12-01
Climate is one of the key drivers of crop suitability and productivity in a region. The influence of climate and weather on the growing season determines the amount of time crops spend in each growth phase, which in turn impacts productivity and, more importantly, yields. Planting date can have a strong influence on yields, with earlier planting generally resulting in higher yields, a sensitivity that is also present in some crop models. Furthermore, planting date is already changing and may continue to change, especially if longer growing seasons caused by future climate change drive early (or late) planting decisions. Crop models need an accurate method to predict planting date to allow these models to: 1) capture changes in crop management to adapt to climate change, 2) accurately model the timing of crop phenology, and 3) improve crop-simulated influences on carbon, nutrient, energy, and water cycles. Previous studies have used climate as a predictor of planting date. Climate as a planting-date predictor has advantages over fixed planting dates. For example, crop expansion and other changes in land use (e.g., due to changing temperature conditions) can be accommodated without additional model inputs. As such, a new methodology to implement a predictive planting date based on climate inputs is added to the Accelerated Climate Model for Energy (ACME) Land Model (ALM). The model considers two main sources of climate data important for planting: precipitation and temperature. This method expands the current temperature-threshold planting trigger and improves the estimated planting date in ALM. Furthermore, the precipitation metric for planting, which synchronizes the crop growing season with the wettest months, allows tropical crops to be introduced to the model. This presentation will demonstrate how the improved model enhances the ability of ALM to capture planting date compared with observations.
More importantly, the impact of changing the planting date and introducing tropical crops will be explored. Those impacts include discussions on productivity, yield, and influences on carbon and energy fluxes.
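A combined temperature-and-precipitation planting trigger of the kind described can be sketched as follows; the thresholds and window length are illustrative, and ALM's actual trigger logic differs in detail:

```python
import numpy as np

def predict_planting_day(tmean, precip, t_thresh=10.0, p_thresh=20.0, window=10):
    """Return the first day index at which the trailing `window`-day mean
    temperature reaches t_thresh (deg C) AND the trailing `window`-day
    precipitation total reaches p_thresh (mm); None if never triggered."""
    tmean = np.asarray(tmean, dtype=float)
    precip = np.asarray(precip, dtype=float)
    for d in range(window, len(tmean) + 1):
        warm = tmean[d - window:d].mean() >= t_thresh
        wet = precip[d - window:d].sum() >= p_thresh
        if warm and wet:
            return d - 1  # index of the day the trigger fires
    return None
```

For temperate crops the temperature condition usually binds; for tropical crops the precipitation condition is what synchronizes planting with the wet season.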
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwon, Deukwoo; Little, Mark P.; Miller, Donald L.
Purpose: To determine more accurate regression formulas for estimating peak skin dose (PSD) from reference air kerma (RAK) or kerma-area product (KAP). Methods: After grouping of the data from 21 procedures into 13 clinically similar groups, assessments were made of optimal clustering using the Bayesian information criterion to obtain the optimal linear regressions of (log-transformed) PSD vs RAK, PSD vs KAP, and PSD vs RAK and KAP. Results: Three clusters of clinical groups were optimal in regression of PSD vs RAK, seven clusters of clinical groups were optimal in regression of PSD vs KAP, and six clusters of clinical groups were optimal in regression of PSD vs RAK and KAP. Prediction of PSD using both RAK and KAP is significantly better than prediction of PSD with either RAK or KAP alone. The regression of PSD vs RAK provided better predictions of PSD than the regression of PSD vs KAP. The partial-pooling (clustered) method yields smaller mean squared errors compared with the complete-pooling method. Conclusion: PSD distributions for interventional radiology procedures are log-normal. Estimates of PSD derived from RAK and KAP jointly are most accurate, followed closely by estimates derived from RAK alone. Estimates of PSD derived from KAP alone are the least accurate. Using a stochastic search approach, it is possible to cluster together certain dissimilar types of procedures to minimize the total error sum of squares.
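The log-transformed two-predictor regression at the core of this approach can be sketched on synthetic data (the numbers below are illustrative; the paper fits such regressions per cluster of clinical procedure groups):

```python
import numpy as np

# Fit log(PSD) ~ a + b*log(RAK) + c*log(KAP) by ordinary least squares
rng = np.random.default_rng(1)
n = 200
log_rak = rng.normal(0.0, 1.0, n)
log_kap = 0.8 * log_rak + rng.normal(0.0, 0.5, n)   # RAK and KAP correlate
log_psd = 0.2 + 0.9 * log_rak + 0.15 * log_kap + rng.normal(0.0, 0.1, n)

X = np.column_stack([np.ones(n), log_rak, log_kap])  # design matrix
coef, *_ = np.linalg.lstsq(X, log_psd, rcond=None)   # [a, b, c]
residual_sd = np.std(log_psd - X @ coef)
```

Working in log space is what makes the log-normal PSD distributions amenable to linear regression, and the joint RAK+KAP fit reduces the residual spread relative to either predictor alone.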
Accurate van der Waals coefficients from density functional theory
Tao, Jianmin; Perdew, John P.; Ruzsinszky, Adrienn
2012-01-01
The van der Waals interaction is a weak, long-range correlation, arising from quantum electronic charge fluctuations. This interaction affects many properties of materials. A simple and yet accurate estimate of this effect will facilitate computer simulation of complex molecular materials and drug design. Here we develop a fast approach for accurate evaluation of dynamic multipole polarizabilities and van der Waals (vdW) coefficients of all orders from the electron density and static multipole polarizabilities of each atom or other spherical object, without empirical fitting. Our dynamic polarizabilities (dipole, quadrupole, octupole, etc.) are exact in the zero- and high-frequency limits, and exact at all frequencies for a metallic sphere of uniform density. Our theory predicts dynamic multipole polarizabilities in excellent agreement with more expensive many-body methods, and yields therefrom vdW coefficients C6, C8, C10 for atom pairs with a mean absolute relative error of only 3%. PMID:22205765
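The leading vdW coefficient follows from the Casimir-Polder integral over dynamic polarizabilities at imaginary frequency; a minimal numerical sketch using a one-oscillator (London) model polarizability, which is illustrative and not the paper's density-based model:

```python
import numpy as np

def c6_casimir_polder(alpha_a, alpha_b, n=64):
    """C6 = (3/pi) * Integral_0^inf alpha_a(i*w) * alpha_b(i*w) dw,
    evaluated by Gauss-Legendre quadrature with the rational map
    u = (1 + x) / (1 - x) from [-1, 1] to (0, inf)."""
    x, w = np.polynomial.legendre.leggauss(n)
    u = (1.0 + x) / (1.0 - x)       # mapped quadrature nodes
    jac = 2.0 / (1.0 - x) ** 2      # du/dx for the change of variables
    return (3.0 / np.pi) * np.sum(w * jac * alpha_a(u) * alpha_b(u))

# One-oscillator model alpha(i*w) = alpha0 / (1 + (w/w0)^2), for which
# C6 = (3/4) * alpha0^2 * w0 analytically for identical atoms.
alpha0, w0 = 4.5, 0.5               # illustrative values in atomic units
alpha = lambda w: alpha0 / (1.0 + (w / w0) ** 2)
c6 = c6_casimir_polder(alpha, alpha)
```

The same quadrature applied to higher multipole polarizabilities yields C8 and C10, which is the structure the paper's fast evaluation exploits.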
Agricultural Productivity Forecasts for Improved Drought Monitoring
NASA Technical Reports Server (NTRS)
Limaye, Ashutosh; McNider, Richard; Moss, Donald; Alhamdan, Mohammad
2010-01-01
Water stresses on agricultural crops during critical phases of crop phenology (such as grain filling) have a higher impact on the eventual yield than stresses at other times of crop growth. Therefore, farmers are more concerned about water stresses in the context of crop phenology than about meteorological droughts. However, the drought estimates currently produced do not account for crop phenology. The US Department of Agriculture (USDA) and the National Oceanic and Atmospheric Administration (NOAA) have developed a drought monitoring decision support tool, the U.S. Drought Monitor, which currently uses meteorological droughts to delineate and categorize drought severity. Output from the Drought Monitor is used by the States to make disaster declarations. More importantly, USDA uses the Drought Monitor to make estimates of crop yield to help the commodities market. Accurate estimation of corn yield is especially critical given the recent trend towards diversion of corn to produce ethanol. Ethanol is fast becoming a standard 10% additive to petroleum products, the largest traded commodity. Thus large-scale drought will have a dramatic impact on petroleum prices as well as on food prices. USDA's World Agricultural Outlook Board (WAOB) serves as a focal point for economic intelligence and the commodity outlook for the U.S. WAOB depends on the Drought Monitor and has emphatically stated that accurate and timely data are needed in operational agrometeorological services to generate reliable projections for agricultural decision makers. Thus, improvements in the prediction of drought will be reflected in early and accurate assessment of crop yields, which in turn will improve commodity projections. We have developed a drought assessment tool which accounts for water stress in the context of crop phenology. The crop modeling component uses various crop modules within the Decision Support System for Agrotechnology Transfer (DSSAT).
DSSAT is an agricultural crop simulation system, which integrates the effects of soil, crop phenotype, weather, and management options. It has been in use for more than 15 years by researchers, growers and has become a de-facto standard in crop modeling communities spanning over 100 countries. The meteorological forcings to DSSAT are provided by NASA s National Land Data Assimilation System (NLDAS) datasets. NLDAS is a framework that incorporates atmospheric forcing and land parameter values along with land surface models to diagnose and predict the state of the land surface.
Using electrical impedance to predict catheter-endocardial contact during RF cardiac ablation.
Cao, Hong; Tungjitkusolmun, Supan; Choy, Young Bin; Tsai, Jang-Zern; Vorperian, Vicken R; Webster, John G
2002-03-01
During radio-frequency (RF) cardiac catheter ablation, there is little information to estimate the contact between the catheter tip electrode and endocardium because only the metal electrode shows up under fluoroscopy. We present a method that utilizes the electrical impedance between the catheter electrode and the dispersive electrode to predict the catheter tip electrode insertion depth into the endocardium. Since the resistivity of blood differs from the resistivity of the endocardium, the impedance increases as the catheter tip lodges deeper in the endocardium. In vitro measurements yielded the impedance-depth relations at 1, 10, 100, and 500 kHz. We predict the depth by spline curve interpolation using the obtained calibration curve. This impedance method gives reasonably accurate predicted depth. We also evaluated alternative methods, such as impedance difference and impedance ratio.
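The inversion of a monotone impedance-depth calibration curve by spline interpolation can be sketched as follows; the calibration numbers are hypothetical stand-ins for the in vitro measurements:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical calibration at one frequency: impedance (ohms) rises
# monotonically as the electrode lodges deeper in the endocardium (mm).
depth_cal = np.array([0.0, 0.5, 1.0, 1.5, 2.0])     # insertion depth, mm
z_cal = np.array([95.0, 105.0, 118.0, 134.0, 155.0])  # measured impedance, ohms

# Monotone data lets us interpolate the inverse map depth = f(impedance)
depth_from_z = CubicSpline(z_cal, depth_cal)
depth = float(depth_from_z(126.0))  # predicted insertion depth at 126 ohms
```

In practice one such calibration curve per measurement frequency (1, 10, 100, and 500 kHz in the paper) would be built, and the measured impedance looked up against the matching curve.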
NASA Astrophysics Data System (ADS)
Zhang, M.; Nunes, V. D.; Burbey, T. J.; Borggaard, J.
2012-12-01
More than 1.5 m of subsidence has been observed in Las Vegas Valley since 1935 as a result of groundwater pumping that commenced in 1905 (Bell, 2002). The compaction of the aquifer system has led to several large subsidence bowls and deleterious earth fissures. The highly heterogeneous aquifer system, with its variably thick interbeds, makes predicting the magnitude and location of subsidence extremely difficult. Several numerical groundwater flow models of the Las Vegas basin have been developed previously; however, none of them has been able to accurately simulate the observed subsidence patterns or magnitudes because of inadequate parameterization. To better manage groundwater resources and predict future subsidence, we have updated and developed a more accurate groundwater management model for Las Vegas Valley by developing a new adjoint parameter estimation package (APE) that is used in conjunction with UCODE along with MODFLOW and the SUB (subsidence) and HFB (horizontal flow barrier) packages. The APE package is used with UCODE to automatically identify suitable parameter zonations and inversely calculate parameter values from hydraulic head and subsidence measurements, which are highly sensitive to both the elastic (Ske) and inelastic (Skv) storage coefficients. With the advent of InSAR (interferometric synthetic aperture radar), distributed spatial and temporal subsidence measurements can be obtained, which greatly enhance the accuracy of parameter estimation. This automated process can remove user bias and provide a far more accurate and robust parameter zonation distribution. The outcome of this work is a more accurate and powerful tool for managing groundwater resources in Las Vegas Valley.
Total reaction cross sections in CEM and MCNP6 at intermediate energies
Kerby, Leslie M.; Mashnik, Stepan G.
2015-05-14
Accurate total reaction cross section models are important to achieving reliable predictions from spallation and transport codes. The latest version of the Cascade Exciton Model (CEM), as incorporated in the code CEM03.03, and the Monte Carlo N-Particle transport code (MCNP6), both developed at Los Alamos National Laboratory (LANL), each use such cross sections. Having accurate total reaction cross section models in the intermediate energy region (50 MeV to 5 GeV) is very important for different applications, including analysis of space environments, use in medical physics, and accelerator design, to name just a few. The current inverse cross sections used in the preequilibrium and evaporation stages of CEM are based on the Dostrovsky et al. model, published in 1959. Better cross section models are now available. Implementing better cross section models in CEM and MCNP6 should yield improved predictions for particle spectra and total production cross sections, among other results.
Prediction of far-field wind turbine noise propagation with parabolic equation.
Lee, Seongkyu; Lee, Dongjai; Honhoff, Saskia
2016-08-01
Sound propagation of wind farms is typically simulated by the use of engineering tools that are neglecting some atmospheric conditions and terrain effects. Wind and temperature profiles, however, can affect the propagation of sound and thus the perceived sound in the far field. A better understanding and application of those effects would allow a more optimized farm operation towards meeting noise regulations and optimizing energy yield. This paper presents the parabolic equation (PE) model development for accurate wind turbine noise propagation. The model is validated against analytic solutions for a uniform sound speed profile, benchmark problems for nonuniform sound speed profiles, and field sound test data for real environmental acoustics. It is shown that PE provides good agreement with the measured data, except upwind propagation cases in which turbulence scattering is important. Finally, the PE model uses computational fluid dynamics results as input to accurately predict sound propagation for complex flows such as wake flows. It is demonstrated that wake flows significantly modify the sound propagation characteristics.
A high order accurate finite element algorithm for high Reynolds number flow prediction
NASA Technical Reports Server (NTRS)
Baker, A. J.
1978-01-01
A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy, and convergence rate with discretization refinement, are quantified in several error norms, by a systematic study of numerical solutions to several nonlinear parabolic and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using selectively linear, quadratic and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at nonmodest Reynolds number. The nondiagonal initial-value matrix structure introduced by the finite element theory is found to be intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated to yield a consequential reduction in both computer storage and execution CPU requirements while retaining solution accuracy.
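The Richardson-extrapolation step the abstract relies on can be sketched on a toy problem (a second-order integrator for u' = u, not the paper's finite element flow solver): combining solutions at two grid spacings cancels the leading truncation-error term and yields a higher-order reference solution.

```python
import math

def solve(h):
    """Explicit midpoint (2nd-order) integration of u' = u on [0, 1]."""
    n = round(1.0 / h)
    u = 1.0
    for _ in range(n):
        k = u + 0.5 * h * u      # half-step Euler predictor
        u = u + h * k            # midpoint corrector: O(h^2) global error
    return u

def richardson(h, p=2):
    """Combine spacings h and h/2 to cancel the leading O(h^p) error term."""
    coarse, fine = solve(h), solve(h / 2)
    return fine + (fine - coarse) / (2 ** p - 1)

exact = math.e                              # u(1) = e for u' = u, u(0) = 1
err_fine = abs(solve(0.05) - exact)         # plain fine-grid error
err_extrap = abs(richardson(0.1) - exact)   # extrapolated estimate is better
assert err_extrap < err_fine
```

The same idea applies norm-by-norm in the paper: the extrapolated solution serves as a surrogate "exact" solution against which truncation error of the finite element discretizations is isolated.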
Total reaction cross sections in CEM and MCNP6 at intermediate energies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerby, Leslie M.; Mashnik, Stepan G.
Accurate total reaction cross section models are important to achieving reliable predictions from spallation and transport codes. The latest version of the Cascade Exciton Model (CEM) as incorporated in the code CEM03.03, and the Monte Carlo N-Particle transport code (MCNP6), both developed at Los Alamos National Laboratory (LANL), each use such cross sections. Having accurate total reaction cross section models in the intermediate energy region (50 MeV to 5 GeV) is very important for different applications, including analysis of space environments, use in medical physics, and accelerator design, to name just a few. The current inverse cross sections used in the preequilibrium and evaporation stages of CEM are based on the Dostrovsky et al. model, published in 1959. Better cross section models are now available. Implementing better cross section models in CEM and MCNP6 should yield improved predictions for particle spectra and total production cross sections, among other results.
Predicting human blood viscosity in silico
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fedosov, Dmitry A.; Pan, Wenxiao; Caswell, Bruce
2011-07-05
Cellular suspensions such as blood are a part of living organisms and their rheological and flow characteristics determine and affect the majority of vital functions. The rheological and flow properties of cell suspensions are determined by collective dynamics of cells, their structure or arrangement, cell properties and interactions. We study these relations for blood in silico using a mesoscopic particle-based method and two different models (multi-scale/low-dimensional) of red blood cells. The models yield accurate quantitative predictions of the dependence of blood viscosity on shear rate and hematocrit. We explicitly model cell aggregation interactions and demonstrate the formation of reversible rouleaux structures resulting in a tremendous increase of blood viscosity at low shear rates and yield stress, in agreement with experiments. The non-Newtonian behavior of such cell suspensions (e.g., shear thinning, yield stress) is analyzed and related to the suspension’s microstructure, deformation and dynamics of single cells. We provide the first quantitative estimates of normal stress differences and magnitude of aggregation forces in blood. Finally, the flexibility of the cell models allows them to be employed for quantitative analysis of a much wider class of complex fluids including cell, capsule, and vesicle suspensions.
Estimation of dew yield from radiative condensers by means of an energy balance model
NASA Astrophysics Data System (ADS)
Maestre-Valero, J. F.; Ragab, R.; Martínez-Alvarez, V.; Baille, A.
2012-08-01
This paper presents an energy balance modelling approach to predict the nightly water yield and the surface temperature (Tf) of two passive radiative dew condensers (RDCs) tilted 30° from horizontal. One was fitted with a white hydrophilic polyethylene foil recommended for dew harvest and the other with a black polyethylene foil widely used in horticulture. The model was validated in south-eastern Spain by comparing the simulation outputs with field measurements of Tf and dew yield. The results indicate that the model is robust and accurate in reproducing the behaviour of the two RDCs, especially with regard to Tf, whose estimates were very close to the observations. The results were somewhat less precise for dew yield, with a larger scatter around the 1:1 relationship. A sensitivity analysis showed that the simulated dew yield was highly sensitive to changes in relative humidity and downward longwave radiation. The proposed approach provides a useful tool to water managers for quantifying the amount of dew that could be harvested as a valuable water resource in arid, semiarid and water stressed regions.
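A minimal sketch of the energy-balance idea (with illustrative emissivities and convective coefficient, not the paper's calibrated model): the foil's equilibrium temperature follows from balancing its longwave emission against absorbed sky radiation and convective warming, and dew can condense once that temperature falls below the dew point.

```python
# Toy steady-state surface energy balance for a radiative condenser foil.
# All parameter values below are assumptions for illustration only.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m-2 K-4

def foil_temperature(t_air, emissivity=0.9, sky_emissivity=0.75, h_conv=5.0):
    """Solve eps*sigma*Tf^4 = eps_sky*sigma*Ta^4 + h*(Ta - Tf) by bisection."""
    lo, hi = t_air - 30.0, t_air + 1.0
    for _ in range(60):
        tf = 0.5 * (lo + hi)
        net = (emissivity * SIGMA * tf**4          # emitted by foil
               - sky_emissivity * SIGMA * t_air**4  # absorbed sky radiation
               - h_conv * (t_air - tf))             # convective warming
        if net > 0:
            hi = tf   # foil still losing energy: equilibrium Tf is colder
        else:
            lo = tf
    return tf

t_air = 288.15                    # a 15 degC night
tf = foil_temperature(t_air)
assert tf < t_air                 # radiative cooling depresses Tf below Ta
```

The depression of Tf below air temperature (a few kelvin here) is what allows the foil to reach the dew point; the paper's full model additionally tracks humidity and the latent heat released by condensation.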
Evaluation of the Williams-type spring wheat model in North Dakota and Minnesota
NASA Technical Reports Server (NTRS)
Leduc, S. (Principal Investigator)
1982-01-01
The Williams-type model, developed along the lines of earlier models by C.V.D. Williams, uses monthly temperature and precipitation data as well as soil and topological variables to predict the yield of the spring wheat crop. The models are developed statistically using regression. Eight model characteristics are examined in the evaluation of the model. Evaluation is at the crop reporting district level, the state level, and for the entire region. A ten-year bootstrap test was the basis of the statistical evaluation. The accuracy of the model, and its current indication of yield reliability, leave room for improvement. There is great variability in the bias measured over the districts, but a slight overall positive bias. The model estimates for the east central crop reporting district in Minnesota are not accurate, and the estimates of yield for 1974 were inaccurate for all of the models.
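The "bootstrap test" used in evaluations of this kind is a leave-one-year-out exercise: the regression is refit with each test year withheld, and the held-out prediction errors are summarized as bias and RMSE. A sketch with made-up data (not the Williams-type model itself):

```python
import numpy as np

rng = np.random.default_rng(0)
temp = rng.normal(18, 2, size=10)        # ten years of a temperature proxy
precip = rng.normal(300, 50, size=10)    # and a precipitation proxy
yield_obs = 10 + 0.5 * temp + 0.01 * precip + rng.normal(0, 0.5, size=10)

errors = []
for i in range(10):                      # hold out one year at a time
    mask = np.arange(10) != i
    X = np.column_stack([np.ones(9), temp[mask], precip[mask]])
    beta, *_ = np.linalg.lstsq(X, yield_obs[mask], rcond=None)
    pred = beta[0] + beta[1] * temp[i] + beta[2] * precip[i]
    errors.append(pred - yield_obs[i])

bias = np.mean(errors)                   # positive bias = overprediction
rmse = np.sqrt(np.mean(np.square(errors)))
```

Because each year is predicted by a model that never saw it, bias and RMSE computed this way approximate the model's true out-of-sample reliability, which is the quantity the evaluation above reports district by district.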
Blainey, Joan B.; Ferré, Ty P.A.; Cordova, Jeffrey T.
2007-01-01
Pumping of an unconfined aquifer can cause local desaturation detectable with high‐resolution gravimetry. A previous study showed that signal‐to‐noise ratios could be predicted for gravity measurements based on a hydrologic model. We show that although changes should be detectable with gravimeters, estimations of hydraulic conductivity and specific yield based on gravity data alone are likely to be unacceptably inaccurate and imprecise. In contrast, a transect of low‐quality drawdown data alone resulted in accurate estimates of hydraulic conductivity and inaccurate and imprecise estimates of specific yield. Combined use of drawdown and gravity data, or use of high‐quality drawdown data alone, resulted in unbiased and precise estimates of both parameters. This study is an example of the value of a staged assessment regarding the likely significance of a new measurement method or monitoring scenario before collecting field data.
Montesinos-López, Abelardo; Montesinos-López, Osval A; Cuevas, Jaime; Mata-López, Walter A; Burgueño, Juan; Mondal, Sushismita; Huerta, Julio; Singh, Ravi; Autrique, Enrique; González-Pérez, Lorena; Crossa, José
2017-01-01
Modern agriculture uses hyperspectral cameras that provide hundreds of reflectance data at discrete narrow bands in many environments. These bands often cover the whole visible light spectrum and part of the infrared and ultraviolet light spectra. With the bands, vegetation indices are constructed for predicting agronomically important traits such as grain yield and biomass. However, since vegetation indices only use some wavelengths (referred to as bands), we propose using all bands simultaneously as predictor variables for the primary trait grain yield; results of several multi-environment maize (Aguate et al. in Crop Sci 57(5):1-8, 2017) and wheat (Montesinos-López et al. in Plant Methods 13(4):1-23, 2017) breeding trials indicated that using all bands produced better prediction accuracy than vegetation indices. However, until now, these prediction models have not accounted for the effects of genotype × environment (G × E) and band × environment (B × E) interactions incorporating genomic or pedigree information. In this study, we propose Bayesian functional regression models that take into account all available bands, genomic or pedigree information, the main effects of lines and environments, as well as G × E and B × E interaction effects. The data set used is comprised of 976 wheat lines evaluated for grain yield in three environments (Drought, Irrigated and Reduced Irrigation). The reflectance data were measured in 250 discrete narrow bands ranging from 392 to 851 nm. The proposed Bayesian functional regression models were implemented using two types of basis: B-splines and Fourier. Results of the proposed Bayesian functional regression models, including all the wavelengths for predicting grain yield, were compared with results from conventional models with and without bands.
We observed that the models with B × E interaction terms were the most accurate, whereas the functional regression models (with B-spline and Fourier bases) and the conventional models performed similarly in terms of prediction accuracy. However, the functional regression models are more parsimonious and computationally more efficient because only 21 beta coefficients (the number of basis functions) need to be estimated, rather than 250 regression coefficients (one per band). In this study, adding pedigree or genomic information did not increase prediction accuracy.
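The dimension-reduction trick behind the functional regression can be sketched directly: expand the 250-band coefficient function over a small basis and estimate only the basis weights. Here simple "hat" functions stand in for the paper's B-spline/Fourier bases, and the data are simulated:

```python
import numpy as np

rng = np.random.default_rng(1)
n_lines, n_bands, n_basis = 100, 250, 21
wavelengths = np.linspace(392, 851, n_bands)

# Hat-function basis: n_basis columns, each peaking at one knot.
knots = np.linspace(392, 851, n_basis)
basis = np.array([np.interp(wavelengths, knots, np.eye(n_basis)[k])
                  for k in range(n_basis)]).T       # shape (250, 21)

reflectance = rng.normal(size=(n_lines, n_bands))   # simulated band data
true_beta = np.sin(np.linspace(0, np.pi, n_bands))  # smooth band effects
grain_yield = reflectance @ true_beta + rng.normal(0, 0.1, n_lines)

# Regress yield on the 21 basis scores instead of the 250 raw bands.
scores = reflectance @ basis                        # shape (100, 21)
theta, *_ = np.linalg.lstsq(scores, grain_yield, rcond=None)
beta_hat = basis @ theta                            # back to band space

assert theta.size == 21 and beta_hat.size == 250
```

Because the coefficient function varies smoothly across wavelength, 21 weights recover essentially the same band-effect curve as 250 free coefficients, which is the parsimony the abstract points to.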
Investigating Some Technical Issues on Cohesive Zone Modeling of Fracture
NASA Technical Reports Server (NTRS)
Wang, John T.
2011-01-01
This study investigates some technical issues related to the use of cohesive zone models (CZMs) in modeling fracture processes. These issues include: why cohesive laws of different shapes can produce similar fracture predictions; under what conditions CZM predictions have a high degree of agreement with linear elastic fracture mechanics (LEFM) analysis results; when the shape of cohesive laws becomes important in the fracture predictions; and why the opening profile along the cohesive zone length needs to be accurately predicted. Two cohesive models were used in this study to address these technical issues. They are the linear softening cohesive model and the Dugdale perfectly plastic cohesive model. Each cohesive model constitutes five cohesive laws of different maximum tractions. All cohesive laws have the same cohesive work rate (CWR) which is defined by the area under the traction-separation curve. The effects of the maximum traction on the cohesive zone length and the critical remote applied stress are investigated for both models. For a CZM to predict a fracture load similar to that obtained by an LEFM analysis, the cohesive zone length needs to be much smaller than the crack length, which reflects the small scale yielding condition requirement for LEFM analysis to be valid. For large-scale cohesive zone cases, the predicted critical remote applied stresses depend on the shape of cohesive models used and can significantly deviate from LEFM results. Furthermore, this study also reveals the importance of accurately predicting the cohesive zone profile in determining the critical remote applied load.
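The study's setup of equal cohesive work rate (CWR) across law shapes can be made concrete. For a triangular (linear softening) law the CWR is half the maximum traction times the critical separation; for a rectangular (Dugdale) law it is their full product. A small sketch with illustrative values (not the paper's):

```python
def linear_softening_separation(cwr, t_max):
    """Critical separation for a triangular law: CWR = 0.5 * t_max * d_c."""
    return 2.0 * cwr / t_max

def dugdale_separation(cwr, t_max):
    """Critical separation for a rectangular law: CWR = t_max * d_c."""
    return cwr / t_max

cwr = 0.2      # N/mm, the fracture energy (area under the curve)
t_max = 50.0   # MPa, maximum traction

d_lin = linear_softening_separation(cwr, t_max)   # 0.008 mm
d_dug = dugdale_separation(cwr, t_max)            # 0.004 mm

# Same CWR, different shapes: the triangular law needs twice the separation.
assert abs(d_lin - 2.0 * d_dug) < 1e-12
```

Holding CWR fixed while varying t_max and shape is exactly how the study isolates when the shape of the cohesive law matters: under small-scale yielding the two laws agree with LEFM, while for large cohesive zones their predictions diverge.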
Kroonblawd, Matthew P; Pietrucci, Fabio; Saitta, Antonino Marco; Goldman, Nir
2018-04-10
We demonstrate the capability of creating robust density functional tight binding (DFTB) models for chemical reactivity in prebiotic mixtures through force matching to short time scale quantum free energy estimates. Molecular dynamics using density functional theory (DFT) is a highly accurate approach to generate free energy surfaces for chemical reactions, but the extreme computational cost often limits the time scales and range of thermodynamic states that can feasibly be studied. In contrast, DFTB is a semiempirical quantum method that affords up to a thousandfold reduction in cost and can recover DFT-level accuracy. Here, we show that a force-matched DFTB model for aqueous glycine condensation reactions yields free energy surfaces that are consistent with experimental observations of reaction energetics. Convergence analysis reveals that multiple nanoseconds of combined trajectory are needed to reach a steady-fluctuating free energy estimate for glycine condensation. Predictive accuracy of force-matched DFTB is demonstrated by direct comparison to DFT, with the two approaches yielding surfaces with large regions that differ by only a few kcal mol⁻¹.
Simulating effects of microtopography on wetland specific yield and hydroperiod
Summer, David M.; Wang, Xixi
2011-01-01
Specific yield and hydroperiod have proven to be useful parameters in hydrologic analysis of wetlands. Specific yield is a critical parameter to quantitatively relate hydrologic fluxes (e.g., rainfall, evapotranspiration, and runoff) and water level changes. Hydroperiod measures the temporal variability and frequency of land-surface inundation. Conventionally, hydrologic analyses used these concepts without considering the effects of land surface microtopography and assumed a smoothly-varying land surface. However, these microtopographic effects could result in small-scale variations in land surface inundation and water depth above or below the land surface, which in turn affect ecologic and hydrologic processes of wetlands. The objective of this chapter is to develop a physically-based approach for estimating specific yield and hydroperiod that enables the consideration of microtopographic features of wetlands, and to illustrate the approach at sites in the Florida Everglades. The results indicate that the physically-based approach can better capture the variations of specific yield with water level, in particular when the water level falls between the minimum and maximum land surface elevations. The suggested approach for hydroperiod computation predicted that the wetlands might be completely dry or completely wet much less frequently than suggested by the conventional approach neglecting microtopography. The hydroperiod approach presented in this chapter can thus serve as a more accurate prediction tool for water resources management when a specific hydroperiod threshold must be met for a plant or animal species of interest.
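The physically based idea can be sketched in a few lines (a toy illustration, not the chapter's implementation): wherever the water level stands above the local land surface, a unit of added water raises the level as open water (effective specific yield of 1); where it stands below, only the drainable porosity stores water. Averaging over the microtopographic elevation distribution gives a specific yield that varies with water level.

```python
import numpy as np

rng = np.random.default_rng(2)
surface = rng.normal(0.0, 0.1, size=10_000)  # microtopographic elevations, m
sy_soil = 0.25                               # assumed drainable porosity

def specific_yield(water_level):
    """Area-weighted specific yield for a given water level (m datum)."""
    inundated = np.mean(surface < water_level)   # fraction of open water
    return inundated * 1.0 + (1.0 - inundated) * sy_soil

# Sy transitions smoothly from the soil value to 1 as the water level
# sweeps through the range of land-surface elevations.
assert abs(specific_yield(-1.0) - sy_soil) < 1e-9   # entirely below ground
assert abs(specific_yield(1.0) - 1.0) < 1e-9        # entirely inundated
assert sy_soil < specific_yield(0.0) < 1.0          # mixed condition
```

The conventional smooth-surface assumption corresponds to a step change in Sy at a single land-surface elevation; the averaged curve above is what lets the physically based approach track water-level responses through the transition zone.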
A Unified Model of Performance: Validation of its Predictions across Different Sleep/Wake Schedules.
Ramakrishnan, Sridhar; Wesensten, Nancy J; Balkin, Thomas J; Reifman, Jaques
2016-01-01
Historically, mathematical models of human neurobehavioral performance developed on data from one sleep study were limited to predicting performance in similar studies, restricting their practical utility. We recently developed a unified model of performance (UMP) to predict the effects of the continuum of sleep loss, from chronic sleep restriction (CSR) to total sleep deprivation (TSD) challenges, and validated it using data from two studies of one laboratory. Here, we significantly extended this effort by validating the UMP predictions across a wide range of sleep/wake schedules from different studies and laboratories. We developed the UMP on psychomotor vigilance task (PVT) lapse data from one study encompassing four different CSR conditions (7 d of 3, 5, 7, and 9 h of sleep/night), and predicted performance in five other studies (from four laboratories), including different combinations of TSD (40 to 88 h), CSR (2 to 6 h of sleep/night), control (8 to 10 h of sleep/night), and nap (nocturnal and diurnal) schedules. The UMP accurately predicted PVT performance trends across 14 different sleep/wake conditions, yielding average prediction errors between 7% and 36%, with the predictions lying within 2 standard errors of the measured data 87% of the time. In addition, the UMP accurately predicted performance impairment (average error of 15%) for schedules (TSD and naps) not used in model development. The unified model of performance can be used as a tool to help design sleep/wake schedules to optimize the extent and duration of neurobehavioral performance and to accelerate recovery after sleep loss. © 2016 Associated Professional Sleep Societies, LLC.
Cylinders out of a top hat: counts-in-cells for projected densities
NASA Astrophysics Data System (ADS)
Uhlemann, Cora; Pichon, Christophe; Codis, Sandrine; L'Huillier, Benjamin; Kim, Juhan; Bernardeau, Francis; Park, Changbom; Prunet, Simon
2018-06-01
Large deviation statistics is implemented to predict the statistics of cosmic densities in cylinders applicable to photometric surveys. It yields analytical predictions, accurate to a few per cent, for the one-point probability distribution function (PDF) of densities in concentric or compensated cylinders, and also captures the density dependence of their angular clustering (cylinder bias). All predictions are found to be in excellent agreement with the cosmological simulation Horizon Run 4 in the quasi-linear regime where standard perturbation theory normally breaks down. These results are combined with a simple local bias model that relates dark matter and tracer densities in cylinders and validated on simulated halo catalogues. This formalism can be used to probe cosmology with existing and upcoming photometric surveys like DES, Euclid or WFIRST containing billions of galaxies.
Refining metabolic models and accounting for regulatory effects.
Kim, Joonhoon; Reed, Jennifer L
2014-10-01
Advances in genome-scale metabolic modeling allow us to investigate and engineer metabolism at a systems level. Metabolic network reconstructions have been made for many organisms and computational approaches have been developed to convert these reconstructions into predictive models. However, due to incomplete knowledge these reconstructions often have missing or extraneous components and interactions, which can be identified by reconciling model predictions with experimental data. Recent studies have provided methods to further improve metabolic model predictions by incorporating transcriptional regulatory interactions and high-throughput omics data to yield context-specific metabolic models. Here we discuss recent approaches for resolving model-data discrepancies and building context-specific metabolic models. Once developed, highly accurate metabolic models can be used in a variety of biotechnology applications. Copyright © 2014 Elsevier Ltd. All rights reserved.
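The core computation that turns a reconstruction into a predictive model is flux balance analysis (FBA): maximize a biomass flux subject to steady-state mass balance S v = 0 and flux bounds. A minimal sketch on an invented two-metabolite network (not a real reconstruction):

```python
import numpy as np
from scipy.optimize import linprog

# Metabolites: A, B. Reactions: uptake (-> A), v1 (A -> B), biomass (B ->).
S = np.array([[1.0, -1.0,  0.0],    # mass balance for A
              [0.0,  1.0, -1.0]])   # mass balance for B
bounds = [(0, 10), (0, None), (0, None)]  # uptake capped at 10 mmol/gDW/h
c = np.array([0.0, 0.0, -1.0])            # linprog minimizes, so negate

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
assert res.success
assert abs(res.x[2] - 10.0) < 1e-6        # biomass limited by uptake bound
```

Reconciling such a model with data, as the review discusses, amounts to adjusting S (adding or removing reactions) or the bounds (e.g., from expression data) until predicted optima match observed growth and secretion.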
Viallon, Vivian; Latouche, Aurélien
2011-03-01
Finding out biomarkers and building risk scores to predict the occurrence of survival outcomes is a major concern of clinical epidemiology, and so is the evaluation of prognostic models. In this paper, we are concerned with the estimation of the time-dependent AUC, the area under the receiver operating characteristic curve, which naturally extends the standard AUC to the setting of survival outcomes and enables evaluation of the discriminative power of prognostic models. We establish a simple and useful relation between the predictiveness curve and the time-dependent AUC, denoted AUC(t). This relation confirms that the predictiveness curve is the key concept for evaluating calibration and discrimination of prognostic models. It also highlights that accurate estimates of the conditional absolute risk function should yield accurate estimates for AUC(t). From this observation, we derive several estimators for AUC(t) relying on distinct estimators of the conditional absolute risk function. An empirical study was conducted to compare our estimators with the existing ones and assess the effect of model misspecification (when estimating the conditional absolute risk function) on the AUC(t) estimation. We further illustrate the methodology on the Mayo PBC and the VA lung cancer data sets. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Prediction-Oriented Marker Selection (PROMISE): With Application to High-Dimensional Regression.
Kim, Soyeon; Baladandayuthapani, Veerabhadran; Lee, J Jack
2017-06-01
In personalized medicine, biomarkers are used to select therapies with the highest likelihood of success based on an individual patient's biomarker/genomic profile. Two goals are to choose important biomarkers that accurately predict treatment outcomes and to cull unimportant biomarkers to reduce the cost of biological and clinical verifications. These goals are challenging due to the high dimensionality of genomic data. Variable selection methods based on penalized regression (e.g., the lasso and elastic net) have yielded promising results. However, selecting the right amount of penalization is critical to simultaneously achieving these two goals. Standard approaches based on cross-validation (CV) typically provide high prediction accuracy with high true positive rates but at the cost of too many false positives. Alternatively, stability selection (SS) controls the number of false positives, but at the cost of yielding too few true positives. To circumvent these issues, we propose prediction-oriented marker selection (PROMISE), which combines SS with CV to conflate the advantages of both methods. Our application of PROMISE with the lasso and elastic net in data analysis shows that, compared to CV, PROMISE produces sparse solutions, few false positives, and small type I + type II error, and maintains good prediction accuracy, with a marginal decrease in the true positive rates. Compared to SS, PROMISE offers better prediction accuracy and true positive rates. In summary, PROMISE can be applied in many fields to select regularization parameters when the goals are to minimize false positives and maximize prediction accuracy.
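The PROMISE recipe can be sketched with standard tools (simulated data; the thresholds and subsampling scheme below are illustrative stand-ins for the paper's): choose the lasso penalty by cross-validation, then keep only markers that are selected in a large fraction of data subsamples.

```python
import numpy as np
from sklearn.linear_model import Lasso, LassoCV

rng = np.random.default_rng(3)
n, p = 120, 200
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 2.0                              # 5 true markers, strong signal
y = X @ beta + rng.normal(size=n)

alpha = LassoCV(cv=5).fit(X, y).alpha_      # CV step: pick the penalty

freq = np.zeros(p)
n_sub = 50
for _ in range(n_sub):                      # SS step: refit on half-samples
    idx = rng.choice(n, size=n // 2, replace=False)
    coef = Lasso(alpha=alpha, max_iter=5000).fit(X[idx], y[idx]).coef_
    freq += coef != 0                       # count selections per marker

selected = np.where(freq / n_sub >= 0.8)[0] # keep only stable markers
```

Plain CV stops at the first step and tends to carry false positives along; the stability filter discards markers whose selection depends on which half of the data was seen, which is the combination of high true-positive rate and controlled false positives the abstract describes.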
Ground-Laboratory to In-Space Atomic Oxygen Correlation for the PEACE Polymers
NASA Astrophysics Data System (ADS)
Stambler, Arielle H.; Inoshita, Karen E.; Roberts, Lily M.; Barbagallo, Claire E.; de Groh, Kim K.; Banks, Bruce A.
2009-01-01
The Materials International Space Station Experiment 2 (MISSE 2) Polymer Erosion and Contamination Experiment (PEACE) polymers were exposed to the environment of low Earth orbit (LEO) for 3.95 years from 2001 to 2005. There were forty-one different PEACE polymers, which were flown on the exterior of the International Space Station (ISS) in order to determine their atomic oxygen erosion yields. In LEO, atomic oxygen is an environmental durability threat, particularly for long duration mission exposures. Although space flight experiments, such as the MISSE 2 PEACE experiment, are ideal for determining LEO environmental durability of spacecraft materials, ground-laboratory testing is often relied upon for durability evaluation and prediction. Unfortunately, significant differences exist between LEO atomic oxygen exposure and atomic oxygen exposure in ground-laboratory facilities. These differences include variations in species, energies, thermal exposures and radiation exposures, all of which may result in different reactions and erosion rates. In an effort to improve the accuracy of ground-based durability testing, ground-laboratory to in-space atomic oxygen correlation experiments have been conducted. In these tests, the atomic oxygen erosion yields of the PEACE polymers were determined relative to Kapton H using a radio-frequency (RF) plasma asher (operated on air). The asher erosion yields were compared to the MISSE 2 PEACE erosion yields to determine the correlation between erosion rates in the two environments. This paper provides a summary of the MISSE 2 PEACE experiment; it reviews the specific polymers tested as well as the techniques used to determine erosion yield in the asher, and it provides a correlation between the space and ground-laboratory erosion yield values. Using the PEACE polymers' asher to in-space erosion yield ratios will allow more accurate in-space materials performance predictions to be made based on plasma asher durability evaluation.
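The erosion-yield arithmetic behind both the flight and asher measurements is a mass-loss calculation: a Kapton H witness sample with a known erosion yield calibrates the atomic-oxygen fluence, which then converts another polymer's mass loss into its yield. A sketch with hypothetical mass losses (the Kapton constants are commonly used values; nothing below is MISSE 2 data):

```python
KAPTON_EY = 3.0e-24        # cm^3/atom, accepted LEO erosion yield of Kapton H
KAPTON_DENSITY = 1.4273    # g/cm^3

def effective_fluence(mass_loss_g, area_cm2, density_g_cm3, ey=KAPTON_EY):
    """Atomic-oxygen fluence (atoms/cm^2) inferred from a witness sample."""
    return mass_loss_g / (area_cm2 * density_g_cm3 * ey)

def erosion_yield(mass_loss_g, area_cm2, density_g_cm3, fluence):
    """Erosion yield (cm^3/atom) from measured mass loss and known fluence."""
    return mass_loss_g / (area_cm2 * density_g_cm3 * fluence)

# Hypothetical 4 cm^2 samples exposed side by side:
fluence = effective_fluence(0.0051, 4.0, KAPTON_DENSITY)   # Kapton witness
ey_sample = erosion_yield(0.0082, 4.0, 1.05, fluence)      # test polymer
assert ey_sample > 0
```

The space-to-asher correlation in the paper is then simply the ratio of a polymer's in-space yield to its asher yield, both computed this way, which lets asher measurements be scaled into in-space predictions.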
Ensembles modeling approach to study Climate Change impacts on Wheat
NASA Astrophysics Data System (ADS)
Ahmed, Mukhtar; Claudio, Stöckle O.; Nelson, Roger; Higgins, Stewart
2017-04-01
Simulations of crop yield under climate variability are subject to uncertainties, and quantification of such uncertainties is essential for effective use of projected results in adaptation and mitigation strategies. In this study we evaluated the uncertainties related to crop-climate models using five crop growth simulation models (CropSyst, APSIM, DSSAT, STICS and EPIC) and 14 general circulation models (GCMs) for two representative concentration pathways (RCP4.5 and RCP8.5, i.e., 4.5 and 8.5 W m-2 of radiative forcing) in the Pacific Northwest (PNW), USA. The aim was to assess how accurately different process-based crop models could estimate winter wheat growth, development and yield. Firstly, all models were calibrated for high rainfall, medium rainfall, low rainfall and irrigated sites in the PNW using 1979-2010 as the baseline period. Response variables were related to farm management and soil properties, and included crop phenology, leaf area index (LAI), biomass and grain yield of winter wheat. All five models were run from 2000 to 2100 using the 14 GCMs and 2 RCPs to evaluate the effect of future climate (rainfall, temperature and CO2) on winter wheat phenology, LAI, biomass, grain yield and harvest index. Simulated time to flowering and maturity was reduced in all models except EPIC with some level of uncertainty. All models generally predicted an increase in biomass and grain yield under elevated CO2 but this effect was more prominent under rainfed conditions than irrigation. However, there was uncertainty in the simulation of crop phenology, biomass and grain yield under the 14 GCMs during the three prediction periods (2030, 2050 and 2070). We concluded that to improve accuracy and consistency in simulating wheat growth dynamics and yield under a changing climate, a multimodel ensemble approach should be used.
NASA Astrophysics Data System (ADS)
Franch, B.; Vermote, E.; Roger, J. C.; Skakun, S.; Becker-Reshef, I.; Justice, C. O.
2017-12-01
Accurate and timely crop yield forecasts are critical for making informed agricultural policies and investments, as well as increasing market efficiency and stability. In Becker-Reshef et al. (2010) and Franch et al. (2015) we developed an empirical generalized model for forecasting winter wheat yield. It is based on the relationship between the Normalized Difference Vegetation Index (NDVI) at the peak of the growing season and the Growing Degree Day (GDD) information extracted from NCEP/NCAR reanalysis data. These methods were applied to MODIS CMG data in Ukraine, the US and China with errors around 10%. However, the NDVI is saturated for yield values higher than 4 MT/ha. As a consequence, the model had to be re-calibrated in each country and the validation of the national yields showed low correlation coefficients. In this study we present a new model based on the extrapolation of the pure wheat signal (100% of wheat within the pixel) from MODIS data at 1km resolution and using the Difference Vegetation Index (DVI). The model has been applied to monitor the national yield of winter wheat in the United States and Ukraine from 2001 to 2016.
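Stripped to its core, the empirical model described above is a regression of final national yield on the growing-season peak of a vegetation index (the move from NDVI to DVI addresses saturation above about 4 MT/ha). A sketch with invented calibration data:

```python
import numpy as np

# Hypothetical historical calibration pairs: seasonal peak vegetation
# index vs. final national winter wheat yield (MT/ha).
peak_vi = np.array([0.42, 0.48, 0.55, 0.61, 0.66])
yield_mt = np.array([2.1, 2.6, 3.2, 3.8, 4.3])

slope, intercept = np.polyfit(peak_vi, yield_mt, 1)  # calibrate the model
forecast = slope * 0.58 + intercept                  # forecast a new season
assert 3.0 < forecast < 4.0
```

Because the index peaks weeks before harvest, the regression delivers a forecast well ahead of official statistics; the growing-degree-day information mentioned above is used to decide when the peak has occurred.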
NASA Astrophysics Data System (ADS)
Gregoire, Alexandre David
2011-07-01
The goal of this research was to accurately predict the ultimate compressive load of impact damaged graphite/epoxy coupons using a Kohonen self-organizing map (SOM) neural network and multivariate statistical regression analysis (MSRA). An optimized use of these data treatment tools allowed the generation of a simple, physically understandable equation that predicts the ultimate failure load of an impact damaged coupon based uniquely on the acoustic emissions it emits at low proof loads. Acoustic emission (AE) data were collected using two 150 kHz resonant transducers which detected and recorded the AE activity given off during compression to failure of thirty-four impacted 24-ply bidirectional woven cloth laminate graphite/epoxy coupons. The AE quantification parameters duration, energy and amplitude for each AE hit were input to the Kohonen self-organizing map (SOM) neural network to accurately classify the material failure mechanisms present in the low proof load data. The number of failure mechanisms from the first 30% of the loading for twenty-four coupons was used to generate a linear prediction equation which yielded a worst case ultimate load prediction error of 16.17%, just outside of the +/-15% B-basis allowables, which was the goal for this research. Particular emphasis was placed upon the noise removal process, which was largely responsible for the accuracy of the results.
Predicting bioactive conformations and binding modes of macrocycles
NASA Astrophysics Data System (ADS)
Anighoro, Andrew; de la Vega de León, Antonio; Bajorath, Jürgen
2016-10-01
Macrocyclic compounds are attracting increasing interest in drug discovery. It is often thought that these large and chemically complex molecules provide promising candidates for addressing difficult targets and interfering with protein-protein interactions. From a computational viewpoint, these molecules are difficult to treat. For example, flexible docking of macrocyclic compounds is hindered by the limited ability of current docking approaches to optimize conformations of extended ring systems for pose prediction. Herein, we report predictions of bioactive conformations of macrocycles using conformational search and of binding modes using docking. For about 70% of the tested macrocycles, conformational ensembles generated using a specialized search technique contained accurate bioactive conformations. However, these conformations were difficult to identify on the basis of conformational energies. Moreover, docking calculations with limited ligand flexibility starting from individual low-energy conformations rarely yielded highly accurate binding modes. In about 40% of the test cases, binding modes were approximated with reasonable accuracy. However, when conformational ensembles were subjected to rigid-body docking, an increase in meaningful binding mode predictions to more than 50% of the test cases was observed. Electrostatic effects did not contribute to these predictions in a positive or negative manner. Rather, achieving shape complementarity at macrocycle-target interfaces was a decisive factor. In summary, a combined computational protocol using pre-computed conformational ensembles of macrocycles as a starting point for docking shows promise in modeling binding modes of macrocyclic compounds.
High-Temperature Cast Aluminum for Efficient Engines
NASA Astrophysics Data System (ADS)
Bobel, Andrew C.
Accurate thermodynamic databases are the foundation of predictive microstructure and property models. An initial assessment of the commercially available Thermo-Calc TCAL2 database and the proprietary aluminum database of QuesTek demonstrated a large degree of deviation with respect to equilibrium precipitate phase prediction in the compositional region of interest when compared to 3-D atom probe tomography (3DAPT) and transmission electron microscopy (TEM) experimental results. New compositional measurements of the Q-phase (Al-Cu-Mg-Si phase) led to a remodeling of the Q-phase thermodynamic description in the CALPHAD databases which has produced significant improvements in the phase prediction capabilities of the thermodynamic model. Due to the unique morphologies of strengthening precipitate phases commonly utilized in high-strength cast aluminum alloys, the development of new microstructural evolution models to describe both rod and plate particle growth was critical for accurate mechanistic strength models which rely heavily on precipitate size and shape. Particle size measurements through both 3DAPT and TEM experiments were used in conjunction with literature results of many alloy compositions to develop a physical growth model for the independent prediction of rod radii and rod length evolution. In addition a machine learning (ML) model was developed for the independent prediction of plate thickness and plate diameter evolution as a function of alloy composition, aging temperature, and aging time. The developed models are then compared with physical growth laws developed for spheres and modified for ellipsoidal morphology effects. Analysis of the effect of particle morphology on strength enhancement has been undertaken by modification of the Orowan-Ashby equation for 〈110〉 alpha-Al oriented finite rods in addition to an appropriate version for similarly oriented plates. 
A mechanistic strengthening model was developed for cast aluminum alloys containing both rod and plate-like precipitates. The model accurately accounts for the temperature dependence of particle nucleation and growth, solid solution strengthening, Si eutectic strength, and base aluminum yield strength. Strengthening model predictions of tensile yield strength are in excellent agreement with experimental observations over a wide range of aluminum alloy systems, aging temperatures, and test conditions. The developed models enable the prediction of the required particle morphology and volume fraction necessary to achieve target property goals in the design of future aluminum alloys. The effect of partitioning elements to the Q-phase was also considered for the potential to control the nucleation rate, reduce coarsening, and control the evolution of particle morphology. Elements were selected based on density functional theory (DFT) calculations showing the prevalence of certain elements to partition to the Q-phase. 3DAPT experiments were performed on Q-phase containing wrought alloys with these additions and show segregation of certain elements to the Q-phase with relative agreement to DFT predictions.
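The Orowan-Ashby relation mentioned above is commonly quoted in the following textbook form for dislocation looping around non-shearable particles; the alloy-like constants below are assumptions for illustration, and the thesis's finite-rod and plate modifications are not reproduced here:

```python
import math

def orowan_ashby_dtau(G, b, lam, r):
    # One commonly quoted Orowan-Ashby form:
    #   delta_tau = (0.13 * G * b / lambda) * ln(r / b)
    # G: shear modulus (Pa), b: Burgers vector (m),
    # lam: inter-particle spacing (m), r: mean particle radius (m).
    return 0.13 * G * b / lam * math.log(r / b)

# Illustrative aluminum-like numbers (assumptions, not from the thesis):
dtau = orowan_ashby_dtau(G=26e9, b=2.86e-10, lam=100e-9, r=5e-9)  # Pa
```

Halving the inter-particle spacing doubles the predicted increment, which is why precipitate size and spacing dominate the strength models described above.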
Tkach, D C; Hargrove, L J
2013-01-01
Advances in battery and actuator technology have enabled clinical use of powered lower limb prostheses such as the BiOM Powered Ankle. To allow ambulation over various types of terrain, such devices rely on built-in mechanical sensors or manual actuation by the amputee to transition into an operational mode that is suitable for a given terrain. It is unclear whether mechanical sensors alone can accurately modulate operational modes, while voluntary actuation prevents seamless, naturalistic gait. Ensuring that the prosthesis is ready to accommodate new terrain types at the first step is critical for user safety. EMG signals from the patient's residual leg muscles may provide additional information to accurately choose the proper mode of prosthesis operation. Using a pattern recognition classifier, we compared the accuracy of predicting 8 different mode transitions based on (1) prosthesis mechanical sensor output, (2) EMG recorded from the residual limb, and (3) fusion of EMG and mechanical sensor data. Our findings indicate that neuromechanical sensor fusion significantly decreases errors in predicting mode transitions as compared to using either mechanical sensors or EMG alone (2.3±0.7% vs. 7.8±0.9% and 20.2±2.0%, respectively).
Anomalous dissipation and kinetic-energy distribution in pipes at very high Reynolds numbers.
Chen, Xi; Wei, Bo-Bo; Hussain, Fazle; She, Zhen-Su
2016-01-01
A symmetry-based theory is developed for the description of (streamwise) kinetic energy K in turbulent pipes at extremely high Reynolds numbers (Re). The theory assumes a mesolayer with continual deformation of wall-attached eddies, which introduces an anomalous dissipation, breaking the exact balance between production and dissipation. An outer peak of K is predicted above a critical Re of 10^4, in good agreement with experimental data. The theory offers an alternative explanation for the recently discovered logarithmic distribution of K. The concept of anomalous dissipation is further supported by a significant modification of the k-ω equation, yielding an accurate prediction of the entire K profile.
NASA Astrophysics Data System (ADS)
Nakano, Hayato; Hakoyama, Tomoyuki; Kuwabara, Toshihiko
2017-10-01
Hole expansion forming of a cold-rolled steel sheet is investigated both experimentally and analytically to clarify the effects of material models on the predictive accuracy of finite element analyses (FEA). The multiaxial plastic deformation behavior of a cold-rolled steel sheet with a thickness of 1.2 mm was measured using a servo-controlled multiaxial tube expansion testing machine over the range of strain from initial yield to fracture. Tubular specimens were fabricated from the sheet sample by roller bending and laser welding. Many linear stress paths in the first quadrant of stress space were applied to the tubular specimens to measure the contours of plastic work in stress space up to a reference plastic strain of 0.24, along with the directions of the plastic strain rates. The anisotropic parameters and exponent of the Yld2000-2d yield function (Barlat et al., 2003) were optimized to approximate the contours of plastic work and the directions of the plastic strain rates. Hole expansion forming simulations were performed using different model identifications based on the Yld2000-2d yield function. It is concluded that the yield function best capturing both the plastic work contours and the directions of the plastic strain rates leads to the most accurate FEA predictions.
Genomic prediction in a nuclear population of layers using single-step models.
Yan, Yiyuan; Wu, Guiqin; Liu, Aiqiao; Sun, Congjiao; Han, Wenpeng; Li, Guangqi; Yang, Ning
2018-02-01
The single-step genomic prediction method has been proposed to improve the accuracy of genomic prediction by incorporating information from both genotyped and ungenotyped animals. The objective of this study is to compare the prediction performance of single-step models with 2-step models and pedigree-based models in a nuclear population of layers. A total of 1,344 chickens across 4 generations were genotyped with a 600 K SNP chip. Four traits were analyzed, i.e., body weight at 28 wk (BW28), egg weight at 28 wk (EW28), laying rate at 38 wk (LR38), and Haugh unit at 36 wk (HU36). In predicting offspring, individuals from generations 1 to 3 were used as training data and females from generation 4 were used as the validation set. The accuracies of breeding values predicted by pedigree BLUP (PBLUP), genomic BLUP (GBLUP), SSGBLUP, and single-step blending (SSBlending) were compared for both genotyped and ungenotyped individuals. For genotyped females, GBLUP performed no better than PBLUP because of the small size of the training data, while the 2 single-step models predicted more accurately than the PBLUP model. The average predictive abilities of SSGBLUP and SSBlending were 16.0% and 10.8% higher than those of the PBLUP model across traits, respectively. Furthermore, the predictive abilities for ungenotyped individuals were also enhanced. The average improvements in predictive ability were 5.9% and 1.5% for the SSGBLUP and SSBlending models, respectively. It was concluded that single-step models, especially the SSGBLUP model, can yield more accurate prediction of genetic merit and are preferable for practical implementation of genomic selection in layers. © 2017 Poultry Science Association Inc.
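Single-step methods work by blending the pedigree relationship matrix A with the genomic relationship matrix G for the genotyped subset. A sketch of the commonly cited combined-matrix inverse, on a toy 3-animal pedigree with illustrative relationship values (not this study's data):

```python
import numpy as np

# Toy example: 3 animals, only animal 3 (index 2) genotyped.
# A is the pedigree relationship matrix; values are illustrative.
A = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5],
              [0.5, 0.5, 1.0]])
geno = [2]                  # indices of genotyped animals
G = np.array([[1.05]])      # genomic relationships for that subset (toy)

Ainv = np.linalg.inv(A)
A22 = A[np.ix_(geno, geno)]
# Commonly cited single-step relationship matrix inverse:
#   H^-1 = A^-1 + [[0, 0], [0, G^-1 - A22^-1]]
# i.e., the genomic correction enters only on the genotyped block.
Hinv = Ainv.copy()
Hinv[np.ix_(geno, geno)] += np.linalg.inv(G) - np.linalg.inv(A22)
```

H then replaces A in the usual mixed-model equations, which is how information flows from genotyped to ungenotyped relatives.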
De Buck, Stefan S; Sinha, Vikash K; Fenu, Luca A; Nijsen, Marjoleen J; Mackie, Claire E; Gilissen, Ron A H J
2007-10-01
The aim of this study was to evaluate different physiologically based modeling strategies for the prediction of human pharmacokinetics. Plasma profiles after intravenous and oral dosing were simulated for 26 clinically tested drugs. Two mechanism-based predictions of human tissue-to-plasma partitioning (P(tp)) from physicochemical input (method Vd1) were evaluated for their ability to describe human volume of distribution at steady state (V(ss)). This method was compared with a strategy that combined predicted and experimentally determined in vivo rat P(tp) data (method Vd2). The best V(ss) predictions were obtained using method Vd2, provided that the rat P(tp) input was corrected for interspecies differences in plasma protein binding (84% within 2-fold). V(ss) predictions from physicochemical input alone were poor (32% within 2-fold). Total body clearance (CL) was predicted as the sum of scaled rat renal clearance and hepatic clearance projected from in vitro metabolism data. The best CL predictions were obtained by disregarding both blood and microsomal or hepatocyte binding (method CL2, 74% within 2-fold), whereas strong bias was seen when using both blood and microsomal or hepatocyte binding (method CL1, 53% within 2-fold). The physiologically based pharmacokinetic (PBPK) model that combined methods Vd2 and CL2 yielded the most accurate predictions of in vivo terminal half-life (69% within 2-fold). The GastroPlus advanced compartmental absorption and transit model was used to construct an absorption-disposition model and provided accurate predictions of the area under the plasma concentration-time profile, oral apparent volume of distribution, and maximum plasma concentration after oral dosing, with 74%, 70%, and 65% within 2-fold, respectively. This evaluation demonstrates that PBPK models can lead to reasonable predictions of human pharmacokinetics.
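The link between tissue partitioning and V(ss) is the standard summation over tissue compartments. A simplified sketch (ignoring erythrocyte and lymph terms); the tissue volumes are rough human values and the P(tp) coefficients are purely illustrative:

```python
# Simplified steady-state volume of distribution from tissue-to-plasma
# partition coefficients: Vss = Vplasma + sum_i(Vtissue_i * Ptp_i).
tissues = {
    # tissue: (volume in L/kg body weight, tissue-to-plasma coefficient)
    "muscle":  (0.40, 1.5),
    "adipose": (0.20, 0.8),
    "liver":   (0.025, 3.0),
}
v_plasma = 0.043  # L/kg, approximate human plasma volume

v_ss = v_plasma + sum(v * ptp for v, ptp in tissues.values())  # L/kg
```

The comparison in the abstract amounts to asking whether the P(tp) values in such a sum are better supplied by physicochemical prediction alone (Vd1) or anchored to measured rat partitioning (Vd2).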
Complete fold annotation of the human proteome using a novel structural feature space
Middleton, Sarah A.; Illuminati, Joseph; Kim, Junhyong
2017-04-13
Recognition of protein structural fold is the starting point for many structure prediction tools and protein function inference. Fold prediction is computationally demanding, and recognizing novel folds is difficult, such that the majority of proteins have not been annotated for fold classification. Here we describe a new machine learning approach using a novel feature space that can be used for accurate recognition of all 1,221 currently known folds and inference of unknown novel folds. We show that our method achieves better than 94% accuracy even when many folds have only one training example. We demonstrate the utility of this method by predicting the folds of 34,330 human protein domains and showing that these predictions can yield useful insights into potential biological function, such as prediction of RNA-binding ability. Finally, our method can be applied to de novo fold prediction of entire proteomes and identify candidate novel fold families.
NASA Astrophysics Data System (ADS)
Kopelevich, Dmitry I.
2013-10-01
Transport of a fullerene-like nanoparticle across a lipid bilayer is investigated by coarse-grained molecular dynamics (MD) simulations. Potentials of mean force (PMF) acting on the nanoparticle in a flexible bilayer suspended in water and a bilayer restrained to a flat surface are computed by constrained MD simulations. The rate of the nanoparticle transport into the bilayer interior is predicted using one-dimensional Langevin models based on these PMFs. The predictions are compared with the transport rates obtained from a series of direct (unconstrained) MD simulations of the solute transport into the flexible bilayer. It is observed that the PMF acting on the solute in the flexible membrane underestimates the transport rate by more than an order of magnitude while the PMF acting on the solute in the restrained membrane yields an accurate estimate of the activation energy for transport into the flexible membrane. This paradox is explained by a coexistence of metastable membrane configurations for a range of the solute positions inside and near the flexible membrane. This leads to a significant reduction of the contribution of the transition state to the mean force acting on the solute. Restraining the membrane shape ensures that there is only one stable membrane configuration corresponding to each solute position and thus the transition state is adequately represented in the PMF. This mechanism is quite general and thus this phenomenon is expected to occur in a wide range of interfacial systems. A simple model for the free energy landscape of the coupled solute-membrane system is proposed and validated. This model explicitly accounts for effects of the membrane deformations on the solute transport and yields an accurate prediction of the activation energy for the solute transport.
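The Langevin rate estimates described above reduce, in one dimension, to the standard mean first-passage time integral over the PMF. A numerical sketch with a single Gaussian barrier standing in for the membrane PMF; the barrier height, width, and constant diffusivity are illustrative assumptions, not the simulated fullerene system:

```python
import numpy as np

def mfpt(x, U, D=1.0, beta=1.0):
    # Mean first-passage time from x[0] (reflecting) to x[-1] (absorbing)
    # over a 1-D potential of mean force U(x) with constant diffusivity D:
    #   tau = (1/D) * int_a^b exp(beta*U(x)) [ int_a^x exp(-beta*U(y)) dy ] dx
    dx = np.gradient(x)
    inner = np.cumsum(np.exp(-beta * U) * dx)   # inner integral, cumulative
    return float(np.sum(np.exp(beta * U) * inner * dx) / D)

x = np.linspace(0.0, 1.0, 2001)
U_barrier = 5.0 * np.exp(-((x - 0.5) ** 2) / 0.01)  # 5 kT Gaussian barrier
tau_barrier = mfpt(x, U_barrier)
tau_flat = mfpt(x, np.zeros_like(x))  # flat potential: tau = L^2 / (2 D)
```

The abstract's point is that when metastable membrane configurations coexist, the PMF entering such a formula under-represents the transition state, so the computed tau misses the true rate unless the membrane shape is restrained.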
Vlachopoulos, Lazaros; Lüthi, Marcel; Carrillo, Fabio; Gerber, Christian; Székely, Gábor; Fürnstahl, Philipp
2018-04-18
In computer-assisted reconstructive surgeries, the contralateral anatomy is established as the best available reconstruction template. However, existing intra-individual bilateral differences or a pathological, contralateral humerus may limit the applicability of the method. The aim of the study was to evaluate whether a statistical shape model (SSM) has the potential to predict accurately the pretraumatic anatomy of the humerus from the posttraumatic condition. Three-dimensional (3D) triangular surface models were extracted from the computed tomographic data of 100 paired cadaveric humeri without a pathological condition. An SSM was constructed, encoding the characteristic shape variations among the individuals. To predict the patient-specific anatomy of the proximal (or distal) part of the humerus with the SSM, we generated segments of the humerus of predefined length excluding the part to predict. The proximal and distal humeral prediction (p-HP and d-HP) errors, defined as the deviation of the predicted (bone) model from the original (bone) model, were evaluated. For comparison with the state-of-the-art technique, i.e., the contralateral registration method, we used the same segments of the humerus to evaluate whether the SSM or the contralateral anatomy yields a more accurate reconstruction template. The p-HP error (mean and standard deviation, 3.8° ± 1.9°) using 85% of the distal end of the humerus to predict the proximal humeral anatomy was significantly smaller (p = 0.001) compared with the contralateral registration method. The difference between the d-HP error (mean, 5.5° ± 2.9°), using 85% of the proximal part of the humerus to predict the distal humeral anatomy, and the contralateral registration method was not significant (p = 0.61). The restoration of the humeral length was not significantly different between the SSM and the contralateral registration method. 
SSMs accurately predict the patient-specific anatomy of the proximal and distal aspects of the humerus. The prediction errors of the SSM depend on the size of the healthy part of the humerus. The prediction of the patient-specific anatomy of the humerus is of fundamental importance for computer-assisted reconstructive surgeries.
Integrating Crop Growth Models with Whole Genome Prediction through Approximate Bayesian Computation
Technow, Frank; Messina, Carlos D.; Totir, L. Radu; Cooper, Mark
2015-01-01
Genomic selection, enabled by whole genome prediction (WGP) methods, is revolutionizing plant breeding. Existing WGP methods have been shown to deliver accurate predictions in the most common settings, such as prediction of across environment performance for traits with additive gene effects. However, prediction of traits with non-additive gene effects and prediction of genotype by environment interaction (G×E), continues to be challenging. Previous attempts to increase prediction accuracy for these particularly difficult tasks employed prediction methods that are purely statistical in nature. Augmenting the statistical methods with biological knowledge has been largely overlooked thus far. Crop growth models (CGMs) attempt to represent the impact of functional relationships between plant physiology and the environment in the formation of yield and similar output traits of interest. Thus, they can explain the impact of G×E and certain types of non-additive gene effects on the expressed phenotype. Approximate Bayesian computation (ABC), a novel and powerful computational procedure, allows the incorporation of CGMs directly into the estimation of whole genome marker effects in WGP. Here we provide a proof of concept study for this novel approach and demonstrate its use with synthetic data sets. We show that this novel approach can be considerably more accurate than the benchmark WGP method GBLUP in predicting performance in environments represented in the estimation set as well as in previously unobserved environments for traits determined by non-additive gene effects. We conclude that this proof of concept demonstrates that using ABC for incorporating biological knowledge in the form of CGMs into WGP is a very promising and novel approach to improving prediction accuracy for some of the most challenging scenarios in plant breeding and applied genetics. PMID:26121133
NASA Astrophysics Data System (ADS)
Wang, Xin; Li, Yan; Chen, Tongjun; Yan, Qiuyan; Ma, Li
2017-04-01
The thickness of tectonically deformed coal (TDC) has a positive correlation with gas outbursts. In order to predict the TDC thickness of coal beds, we propose a new quantitative prediction method using an extreme learning machine (ELM) algorithm, a principal component analysis (PCA) algorithm, and seismic attributes. First, we build an ELM prediction model using the PCA attributes of a synthetic seismic section. The results suggest that the ELM model can produce a reliable and accurate prediction of the TDC thickness for synthetic data, preferring a Sigmoid activation function and 20 hidden nodes. Then, we analyze the applicability of the ELM model to thickness prediction of the TDC with real application data. Through cross validation of near-well traces, the results suggest that the ELM model can produce a reliable and accurate prediction of the TDC thickness. After that, we use 250 near-well traces from 10 wells to build an ELM prediction model and use the model to forecast the TDC thickness of the No. 15 coal in the study area using the PCA attributes as the inputs. Comparing the predicted results, we note that the trained ELM model with two selected PCA attributes yields better prediction results than those from the other combinations of the attributes. Finally, the trained ELM model with real seismic data has a different number of hidden nodes (10) than the trained ELM model with synthetic seismic data. In summary, it is feasible to use an ELM model to predict the TDC thickness using the calculated PCA attributes as the inputs. However, the input attributes, the activation function, and the number of hidden nodes in the ELM model should be selected and tested carefully based on the individual application.
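An ELM of the kind described trains in one shot: random input weights and biases, a sigmoid hidden layer, and output weights solved by least squares. A minimal numpy sketch on synthetic stand-in data (two "PCA attributes" and a linear target are assumptions, not the study's seismic data):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=20):
    # Extreme learning machine: input weights W and biases b are random and
    # fixed; only the output weights beta are solved, in closed form.
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))      # sigmoid hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy stand-in: 250 "near-well traces" with 2 PCA attributes -> thickness.
X = rng.normal(size=(250, 2))
y = 1.5 + 0.8 * X[:, 0] - 0.3 * X[:, 1] + 0.05 * rng.normal(size=250)
W, b, beta = elm_fit(X, y, n_hidden=20)
rmse = float(np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2)))
```

Because only a linear solve is involved, varying the hidden-node count and activation function (as the abstract recommends) is cheap to cross-validate.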
A Unified Model of Performance: Validation of its Predictions across Different Sleep/Wake Schedules
Ramakrishnan, Sridhar; Wesensten, Nancy J.; Balkin, Thomas J.; Reifman, Jaques
2016-01-01
Study Objectives: Historically, mathematical models of human neurobehavioral performance developed on data from one sleep study were limited to predicting performance in similar studies, restricting their practical utility. We recently developed a unified model of performance (UMP) to predict the effects of the continuum of sleep loss—from chronic sleep restriction (CSR) to total sleep deprivation (TSD) challenges—and validated it using data from two studies of one laboratory. Here, we significantly extended this effort by validating the UMP predictions across a wide range of sleep/wake schedules from different studies and laboratories. Methods: We developed the UMP on psychomotor vigilance task (PVT) lapse data from one study encompassing four different CSR conditions (7 d of 3, 5, 7, and 9 h of sleep/night), and predicted performance in five other studies (from four laboratories), including different combinations of TSD (40 to 88 h), CSR (2 to 6 h of sleep/night), control (8 to 10 h of sleep/night), and nap (nocturnal and diurnal) schedules. Results: The UMP accurately predicted PVT performance trends across 14 different sleep/wake conditions, yielding average prediction errors between 7% and 36%, with the predictions lying within 2 standard errors of the measured data 87% of the time. In addition, the UMP accurately predicted performance impairment (average error of 15%) for schedules (TSD and naps) not used in model development. Conclusions: The unified model of performance can be used as a tool to help design sleep/wake schedules to optimize the extent and duration of neurobehavioral performance and to accelerate recovery after sleep loss. Citation: Ramakrishnan S, Wesensten NJ, Balkin TJ, Reifman J. A unified model of performance: validation of its predictions across different sleep/wake schedules. SLEEP 2016;39(1):249–262. PMID:26518594
Opening Loads Analyses for Various Disk-Gap-Band Parachutes
NASA Technical Reports Server (NTRS)
Cruz, J. R.; Kandis, M.; Witkowski, A.
2003-01-01
Detailed opening loads data is presented for 18 tests of Disk-Gap-Band (DGB) parachutes of varying geometry with nominal diameters ranging from 43.2 to 50.1 ft. All of the test parachutes were deployed from a mortar. Six of these tests were conducted via drop testing with drop test vehicles weighing approximately 3,000 or 8,000 lb. Twelve tests were conducted in the National Full-Scale Aerodynamics Complex 80- by 120-foot wind tunnel at the NASA Ames Research Center. The purpose of these tests was to structurally qualify the parachute for the Mars Exploration Rover mission. A key requirement of all tests was that peak parachute load had to be reached at full inflation to more closely simulate the load profile encountered during operation at Mars. Peak loads measured during the tests were in the range from 12,889 to 30,027 lb. Of the two test methods, the wind tunnel tests yielded more accurate and repeatable data. Application of an apparent mass model to the opening loads data yielded insights into the nature of these loads. Although the apparent mass model could reconstruct specific tests with reasonable accuracy, the use of this model for predictive analyses was not accurate enough to set test conditions for either the drop or wind tunnel tests. A simpler empirical model was found to be suitable for predicting opening loads for the wind tunnel tests to a satisfactory level of accuracy. However, this simple empirical model is not applicable to the drop tests.
NASA Astrophysics Data System (ADS)
Prastuti, M.; Suhartono; Salehah, NA
2018-04-01
The need for energy supply, especially electricity in Indonesia, has been increasing in recent years. Furthermore, high electricity usage by people at different times leads to the occurrence of heteroscedasticity. Estimating the electricity supply that can fulfill the community's needs is very important, but the heteroscedasticity issue often makes electricity forecasting hard to do. An accurate forecast of electricity consumption is one of the key challenges for an energy provider to make better resource and service planning and also to take control actions in order to balance the electricity supply and demand for the community. In this paper, a hybrid ARIMAX Quantile Regression (ARIMAX-QR) approach is proposed to predict short-term electricity consumption in East Java. This method is also compared to time series regression using RMSE, MAPE, and MdAPE criteria. The data used in this research were the half-hourly electricity consumption data for the period of September 2015 to April 2016. The results show that the proposed approach can be a competitive alternative for forecasting short-term electricity in East Java. ARIMAX-QR using lag values and dummy variables as predictors yields more accurate predictions on both in-sample and out-of-sample data. Moreover, both the time series regression and ARIMAX-QR methods with the addition of lag values as predictors accurately capture the patterns in the data. Hence, they produce better predictions compared to the models that do not use additional lag variables.
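The three comparison criteria named above are standard forecast-accuracy metrics. A minimal sketch with toy numbers (the actual/forecast values are illustrative, not the East Java series):

```python
import numpy as np

def rmse(actual, forecast):
    # Root mean squared error: penalizes large misses quadratically.
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.sqrt(np.mean((a - f) ** 2)))

def mape(actual, forecast):
    # Mean absolute percentage error, in percent.
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs((a - f) / a)) * 100.0)

def mdape(actual, forecast):
    # Median absolute percentage error: robust to a few bad half-hours.
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.median(np.abs((a - f) / a)) * 100.0)

actual = [100.0, 200.0, 400.0]
forecast = [110.0, 190.0, 400.0]
```

Reporting both MAPE and MdAPE, as the paper does, helps distinguish a model that is uniformly mediocre from one that is usually good but occasionally far off.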
Illias, Hazlee Azil; Chai, Xin Rui; Abu Bakar, Ab Halim; Mokhlis, Hazlie
2015-01-01
It is important to predict the incipient fault in transformer oil accurately so that the maintenance of transformer oil can be performed correctly, reducing the cost of maintenance and minimising error. Dissolved gas analysis (DGA) has been widely used to predict incipient faults in power transformers. However, the existing DGA methods sometimes yield inaccurate predictions of the incipient fault in transformer oil because each method is only suitable for certain conditions. Many previous works have reported on the use of intelligent methods to predict transformer faults. However, it is believed that the accuracy of the previously proposed methods can still be improved. Since artificial neural network (ANN) and particle swarm optimisation (PSO) techniques have never been used in the previously reported work, this work proposes a combination of ANN and various PSO techniques to predict the transformer incipient fault. The advantages of PSO are simplicity and easy implementation. The effectiveness of various PSO techniques in combination with ANN is validated by comparison with the results from the actual fault diagnosis, an existing diagnosis method, and ANN alone. A comparison of the results from the proposed methods with previously reported work was also performed to show the improvement of the proposed methods. It was found that the proposed ANN-Evolutionary PSO method yields a higher percentage of correct identification of transformer fault type than the existing diagnosis method and previously reported works.
Nano-Scale Characterization of Al-Mg Nanocrystalline Alloys
NASA Astrophysics Data System (ADS)
Harvey, Evan; Ladani, Leila
Materials with nano-scale microstructure have become increasingly popular due to their substantially increased strength. The increase in strength with decreasing grain size is described by the Hall-Petch equation. With increased interest in the miniaturization of components, methods of mechanical characterization of small volumes of material are necessary because traditional means such as tensile testing become increasingly difficult with such small test specimens. This study seeks to characterize the elastic-plastic properties of nanocrystalline Al-5083 through nanoindentation and related data analysis techniques. Using nanoindentation, accurate predictions of the elastic modulus and hardness of the alloy were attained. The employed data analysis model also provided reasonable estimates of the plastic properties (strain-hardening exponent and yield stress), lending credibility to this procedure as an accurate, full mechanical characterization method.
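The Hall-Petch relation mentioned above can be sketched directly: yield strength grows as the inverse square root of grain size. The constants below are illustrative placeholders for an aluminium alloy, not fitted values from this study.

```python
import math

def hall_petch(sigma0_mpa, k_mpa_sqrt_m, grain_size_m):
    """Yield strength (MPa) via Hall-Petch: sigma_y = sigma_0 + k / sqrt(d)."""
    return sigma0_mpa + k_mpa_sqrt_m / math.sqrt(grain_size_m)

# Hypothetical friction stress (20 MPa) and strengthening coefficient
# (0.07 MPa*sqrt(m)) chosen only for illustration:
sigma_coarse = hall_petch(20.0, 0.07, 10e-6)   # 10 micron grains
sigma_nano = hall_petch(20.0, 0.07, 100e-9)    # 100 nm grains
```

Shrinking the grains from 10 µm to 100 nm multiplies the grain-boundary term by ten, which is the strengthening effect motivating nanocrystalline alloys.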
Analysis of operator splitting errors for near-limit flame simulations
NASA Astrophysics Data System (ADS)
Lu, Zhen; Zhou, Hua; Li, Shan; Ren, Zhuyin; Lu, Tianfeng; Law, Chung K.
2017-04-01
High-fidelity simulations of ignition, extinction and oscillatory combustion processes are of practical interest in a broad range of combustion applications. Splitting schemes, widely employed in reactive flow simulations, can fail for stiff reaction-diffusion systems exhibiting near-limit flame phenomena. The present work first employs a model perfectly stirred reactor (PSR) problem with an Arrhenius reaction term and a linear mixing term to study the effects of splitting errors on near-limit combustion phenomena. Analysis shows that the errors induced by decoupling of the fractional steps may result in unphysical extinction or ignition. The analysis is then extended to the prediction of ignition, extinction and oscillatory combustion in unsteady PSRs of various fuel/air mixtures with a 9-species detailed mechanism for hydrogen oxidation and an 88-species skeletal mechanism for n-heptane oxidation, together with a Jacobian-based analysis of the time scales. The tested schemes include the Strang splitting, the balanced splitting, and a newly developed semi-implicit midpoint method. Results show that the semi-implicit midpoint method can accurately reproduce the dynamics of the near-limit flame phenomena and is second-order accurate over a wide range of time step sizes. For the extinction and ignition processes, both the balanced splitting and the midpoint method yield accurate predictions, whereas the Strang splitting can lead to significant shifts in the ignition/extinction processes or even unphysical results. With an enriched H radical source in the inflow stream, a delay of the ignition process and a deviation in the equilibrium temperature are observed for the Strang splitting. In contrast, the midpoint method, which solves reaction and diffusion together, matches the fully implicit accurate solution. The balanced splitting predicts the temperature rise correctly but with an over-predicted peak.
For the sustained and decaying oscillatory combustion from cool flames, both the Strang splitting and the midpoint method successfully capture the dynamic behavior, whereas the balanced splitting scheme results in significant errors.
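The splitting-error mechanism can be reproduced on a toy PSR-like scalar model, dy/dt = -k*y + (y_in - y)/tau, in which each fractional step is solved exactly and the Strang composition is compared against the closed-form solution. The rate constants are arbitrary illustrative choices, not the paper's chemistry; halving the step size should cut the splitting error by roughly four, exhibiting the scheme's second-order accuracy.

```python
import math

k, tau, y_in = 5.0, 0.5, 1.0            # reaction rate, mixing time, inflow value
lam, c = k + 1.0 / tau, y_in / tau      # full problem: y' = -lam*y + c

def exact(y0, t):
    return c / lam + (y0 - c / lam) * math.exp(-lam * t)

def mix(y, dt):        # exact sub-solution of y' = (y_in - y)/tau
    return y_in + (y - y_in) * math.exp(-dt / tau)

def react(y, dt):      # exact sub-solution of y' = -k*y
    return y * math.exp(-k * dt)

def strang(y0, t_end, n_steps):
    """Strang splitting: half mixing, full reaction, half mixing per step."""
    h, y = t_end / n_steps, y0
    for _ in range(n_steps):
        y = mix(y, h / 2)
        y = react(y, h)
        y = mix(y, h / 2)
    return y

err1 = abs(strang(1.0, 1.0, 10) - exact(1.0, 1.0))   # h = 0.1
err2 = abs(strang(1.0, 1.0, 20) - exact(1.0, 1.0))   # h = 0.05
```

Even though every substep is integrated exactly, the composition carries an O(h^2) decoupling error; near an ignition or extinction turning point such an error can push the trajectory to the wrong branch, which is the failure mode the paper analyzes.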
NASA Astrophysics Data System (ADS)
Mirniaharikandehei, Seyedehnafiseh; Patil, Omkar; Aghaei, Faranak; Wang, Yunzhi; Zheng, Bin
2017-03-01
Accurately assessing the potential benefit of chemotherapy to cancer patients is an important prerequisite to developing precision medicine in cancer treatment. A previous study showed that total psoas area (TPA) measured on preoperative cross-sectional CT images might be a good image marker to predict the long-term outcome of pancreatic cancer patients after surgery. However, accurate and automated segmentation of TPA from the CT image is difficult due to the fuzzy boundary or connection of TPA to other muscle areas. In this study, we developed a new interactive computer-aided detection (ICAD) scheme aiming to segment TPA from abdominal CT images more accurately and to assess the feasibility of using this new quantitative image marker to predict the benefit to ovarian cancer patients of receiving Bevacizumab-based chemotherapy. The ICAD scheme was applied to identify a CT image slice of interest located at the level of L3 (vertebral spine). The cross-sections of the right and left TPA are segmented using a set of adaptively adjusted boundary conditions, and TPA is then quantitatively measured. In addition, recent studies have suggested that muscle radiation attenuation, which reflects fat deposition in the tissue, might be a good image feature for predicting the survival rate of cancer patients. The scheme and TPA measurement task were applied to a large national clinical trial database involving 1,247 ovarian cancer patients. By comparing with manual segmentation results, we found that the ICAD scheme could yield higher accuracy and consistency for this task. The new ICAD scheme can provide clinical researchers a useful tool to more efficiently and accurately extract TPA and muscle radiation attenuation as new image markers, and allow them to investigate their discriminatory power to predict progression-free survival and/or overall survival of cancer patients before and after chemotherapy.
Modeling the irradiance and temperature dependence of photovoltaic modules in PVsyst
Sauer, Kenneth J.; Roessler, Thomas; Hansen, Clifford W.
2014-11-10
In order to reliably simulate the energy yield of photovoltaic (PV) systems, it is necessary to have an accurate model of how the PV modules perform with respect to irradiance and cell temperature. Building on previous work that addresses the irradiance dependence, two approaches to fit the temperature dependence of module power in PVsyst have been developed and are applied here to recent multi-irradiance and -temperature data for a standard Yingli Solar PV module type. The results demonstrate that it is possible to match the measured irradiance and temperature dependence of PV modules in PVsyst. As a result, improvements in energy yield prediction using the optimized models relative to the PVsyst standard model are considered significant for decisions about project financing.
Macyszyn, Luke; Akbari, Hamed; Pisapia, Jared M.; Da, Xiao; Attiah, Mark; Pigrish, Vadim; Bi, Yingtao; Pal, Sharmistha; Davuluri, Ramana V.; Roccograndi, Laura; Dahmane, Nadia; Martinez-Lage, Maria; Biros, George; Wolf, Ronald L.; Bilello, Michel; O'Rourke, Donald M.; Davatzikos, Christos
2016-01-01
Background: MRI characteristics of brain gliomas have been used to predict clinical outcome and molecular tumor characteristics. However, previously reported imaging biomarkers have not been sufficiently accurate or reproducible to enter routine clinical practice and often rely on relatively simple MRI measures. The current study leverages advanced image analysis and machine learning algorithms to identify complex and reproducible imaging patterns predictive of overall survival and molecular subtype in glioblastoma (GB). Methods: One hundred five patients with GB were first used to extract approximately 60 diverse features from preoperative multiparametric MRIs. These imaging features were used by a machine learning algorithm to derive imaging predictors of patient survival and molecular subtype. Cross-validation ensured generalizability of these predictors to new patients. Subsequently, the predictors were evaluated in a prospective cohort of 29 new patients. Results: Survival curves yielded a hazard ratio of 10.64 for predicted long versus short survivors. The overall, 3-way (long/medium/short survival) accuracy in the prospective cohort approached 80%. Classification of patients into the 4 molecular subtypes of GB achieved 76% accuracy. Conclusions: By employing machine learning techniques, we were able to demonstrate that imaging patterns are highly predictive of patient survival. Additionally, we found that GB subtypes have distinctive imaging phenotypes. These results reveal that when imaging markers related to infiltration, cell density, microvascularity, and blood–brain barrier compromise are integrated via advanced pattern analysis methods, they form very accurate predictive biomarkers. These predictive markers used solely preoperative images, hence they can significantly augment diagnosis and treatment of GB patients. PMID:26188015
Søreide, K; Thorsen, K; Søreide, J A
2015-02-01
Mortality prediction models for patients with perforated peptic ulcer (PPU) have not yielded consistent or highly accurate results. Given the complex nature of this disease, which has many non-linear associations with outcomes, we explored artificial neural networks (ANNs) to predict the complex interactions between the risk factors of PPU and death among patients with this condition. ANN modelling using a standard feed-forward, back-propagation neural network with three layers (i.e., an input layer, a hidden layer and an output layer) was used to predict the 30-day mortality of consecutive patients from a population-based cohort undergoing surgery for PPU. A receiver-operating characteristic (ROC) analysis was used to assess model accuracy. Of the 172 patients, 168 had their data included in the model; the data of 117 (70%) were used for the training set, and the data of 51 (30%) were used for the test set. The accuracy, as evaluated by area under the ROC curve (AUC), was best for an inclusive, multifactorial ANN model (AUC 0.90, 95% CIs 0.85-0.95; p < 0.001). This model outperformed standard predictive scores, including Boey and PULP. The importance of each variable decreased as the number of factors included in the ANN model increased. The prediction of death was most accurate when using an ANN model with several univariate influences on the outcome. This finding demonstrates that PPU is a highly complex disease for which clinical prognoses are likely difficult. The incorporation of computerised learning systems might enhance clinical judgments to improve decision making and outcome prediction.
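The AUC evaluation used above has a simple library-free form via the Mann-Whitney statistic: the AUC equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. The risk scores below are invented toy values, not data from the study.

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    fraction of (positive, negative) pairs ranked correctly, ties count 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical predicted mortality risks for deceased vs. surviving patients:
dead = [0.9, 0.8, 0.7, 0.6]
alive = [0.5, 0.4, 0.65, 0.2, 0.1]
```

An AUC of 0.90, as reported for the multifactorial ANN model, means 90% of such deceased/survivor pairs would be ranked correctly by the model's risk score.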
NASA Astrophysics Data System (ADS)
Liang, Zhongmin; Li, Yujie; Hu, Yiming; Li, Binquan; Wang, Jun
2017-06-01
Accurate and reliable long-term forecasting plays an important role in water resources management and utilization. In this paper, a hybrid model called SVR-HUP is presented to predict long-term runoff and quantify the prediction uncertainty. The model is created in three steps. First, appropriate predictors are selected according to the correlations between meteorological factors and runoff. Second, a support vector regression (SVR) model is structured and optimized based on the LibSVM toolbox and a genetic algorithm. Finally, using forecasted and observed runoff, a hydrologic uncertainty processor (HUP) based on a Bayesian framework is used to estimate the posterior probability distribution of the simulated values, and the associated prediction uncertainty is quantitatively analyzed. Six precision evaluation indexes, including the correlation coefficient (CC), relative root mean square error (RRMSE), relative error (RE), mean absolute percentage error (MAPE), Nash-Sutcliffe efficiency (NSE), and qualification rate (QR), are used to measure the prediction accuracy. As a case study, the proposed approach is applied in the Han River basin, South Central China. Three types of SVR models are established to forecast the monthly, flood season and annual runoff volumes. The results indicate that SVR yields satisfactory accuracy and reliability at all three scales. In addition, the results suggest that the HUP can not only quantify the uncertainty of prediction based on a confidence interval but also provide a more accurate single-value prediction than the initial SVR forecasting result. Thus, the SVR-HUP model provides an alternative method for long-term runoff forecasting.
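Several of the precision indexes listed above have standard closed forms that can be sketched directly; shown here under their common definitions (NSE relative to the observed mean, RRMSE normalised by the observed mean). The runoff values are invented toy numbers, not data from the Han River study.

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance-about-mean of observations."""
    mean = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    svar = sum((o - mean) ** 2 for o in obs)
    return 1.0 - sse / svar

def rrmse(obs, sim):
    """Relative RMSE: RMSE divided by the observed mean."""
    mse = sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs)
    return math.sqrt(mse) / (sum(obs) / len(obs))

def mape(obs, sim):
    """Mean absolute percentage error (%)."""
    return 100.0 * sum(abs((o - s) / o) for o, s in zip(obs, sim)) / len(obs)

# Toy monthly runoff volumes (observed vs. simulated):
obs = [120.0, 95.0, 150.0, 80.0]
sim = [110.0, 100.0, 140.0, 85.0]
```

NSE = 1 indicates a perfect forecast and NSE = 0 a forecast no better than the observed mean, which is why it is a common headline metric in runoff modelling.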
A theoretical study on pure bending of hexagonal close-packed metal sheet
NASA Astrophysics Data System (ADS)
Mehrabi, Hamed; Yang, Chunhui
2018-05-01
Hexagonal close-packed (HCP) metals have quite different mechanical behaviours in comparison to conventional cubic metals such as steels and aluminum alloys [1, 2]. They exhibit a significant tension-compression asymmetry in initial yielding and subsequent plastic hardening. This unique behaviour can be attributed to their limited symmetric crystal structure, which leads to twinning deformation [3-5]. It strongly influences sheet metal forming of such metals, especially roll forming, in which bending is dominant. Hence, it is crucial to represent constitutive relations of HCP metals for accurate estimation of bending moment-curvature behaviours. In this paper, an analytical model for asymmetric elastoplastic pure bending with an application of the Cazacu-Barlat asymmetric yield function [6] is presented. This yield function captures the asymmetric tension-compression behaviour of HCP metals by using the second and third invariants of the stress deviator tensor and a specified constant, which can be expressed in terms of the uniaxial yield stresses in tension and compression. As a case study, the analytical model is applied to predict the moment-curvature behaviours of AZ31B magnesium alloy sheets under uniaxial loading conditions. Furthermore, the analytical model is implemented as a user-defined material through the UMAT interface in Abaqus [7, 8] for conducting pure bending simulations. The results show that the analytical model can reasonably capture the asymmetric tension-compression behaviour of the magnesium alloy. The predicted moment-curvature behaviour is in good agreement with the experimental results. Furthermore, numerical results show better accuracy with the Cazacu-Barlat yield function than with the von Mises yield function, whose predictions are more conservative than the analytical results.
Jeong, Seok Hoo; Yoon, Hyun Hwa; Kim, Eui Joo; Kim, Yoon Jae; Kim, Yeon Suk; Cho, Jae Hee
2017-01-01
Endoscopic ultrasound-guided fine needle aspiration (EUS-FNA) is an accurate diagnostic method for pancreatic masses, and its accuracy is affected by various FNA methods and EUS equipment. We therefore aimed to elucidate the instrumental and methodologic factors determining the diagnostic yield of EUS-FNA for pancreatic solid masses without an on-site cytopathology evaluation. We retrospectively reviewed the medical records of 260 patients (265 pancreatic solid masses) who underwent EUS-FNA. We compared a historical conventional-imaging EUS group with a group examined using high-resolution imaging devices and analyzed the various factors affecting EUS-FNA accuracy. In total, 265 pancreatic solid masses from 260 patients were included in this study. The accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of EUS-FNA for pancreatic solid masses without on-site cytopathology evaluation were 83.4%, 81.8%, 100.0%, 100.0%, and 34.3%, respectively. In comparison with the conventional-image group, the high-resolution image group showed increased accuracy and sensitivity of EUS-FNA (71.3% vs 92.7% and 68.9% vs 91.9%, respectively), with specificity unchanged (100% vs 100%). On multivariate analysis of the various instrumental and methodologic factors, high-resolution imaging (P = 0.040, odds ratio = 3.28) and 3 or more needle passes (P = 0.039, odds ratio = 2.41) were important factors affecting the diagnostic yield for pancreatic solid masses. High-resolution imaging and 3 or more passes were the most significant factors influencing the diagnostic yield of EUS-FNA in patients with pancreatic solid masses without an on-site cytopathologist. PMID:28079803
Product component genealogy modeling and field-failure prediction
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, Caleb; Hong, Yili; Meeker, William Q.
2016-04-13
Many industrial products consist of multiple components that are necessary for system operation. There is an abundance of literature on modeling the lifetime of such components through competing risks models. During the life-cycle of a product, it is common for there to be incremental design changes to improve reliability, to reduce costs, or due to changes in availability of certain part numbers. These changes can affect product reliability but are often ignored in system lifetime modeling. By incorporating this information about changes in part numbers over time (information that is readily available in most production databases), better accuracy can be achieved in predicting time to failure, thus yielding more accurate field-failure predictions. This paper presents methods for estimating parameters and predictions for this generational model and a comparison with existing methods through the use of simulation. Our results indicate that the generational model has important practical advantages and outperforms the existing methods in predicting field failures.
Laine, Elodie; Carbone, Alessandra
2015-01-01
Protein-protein interactions (PPIs) are essential to all biological processes and they represent increasingly important therapeutic targets. Here, we present a new method for accurately predicting protein-protein interfaces, understanding their properties, origins and binding to multiple partners. In contrast to machine learning approaches, our method combines, in a rational and very straightforward way, three sequence- and structure-based descriptors of protein residues: evolutionary conservation, physico-chemical properties and local geometry. The implemented strategy yields very precise predictions for a wide range of protein-protein interfaces and discriminates them from small-molecule binding sites. Beyond its predictive power, the approach permits dissecting interaction surfaces and unraveling their complexity. We show how the analysis of the predicted patches can foster new strategies for PPI modulation and interaction surface redesign. The approach is implemented in JET2, an automated tool based on the Joint Evolutionary Trees (JET) method for sequence-based protein interface prediction. JET2 is freely available at www.lcqb.upmc.fr/JET2. PMID:26690684
Advanced model for the prediction of the neutron-rich fission product yields
NASA Astrophysics Data System (ADS)
Rubchenya, V. A.; Gorelov, D.; Jokinen, A.; Penttilä, H.; Äystö, J.
2013-12-01
A consistent model for describing independent fission-product formation cross sections in spontaneous fission and in neutron- and proton-induced fission at energies up to 100 MeV is developed. This model combines a new version of the two-component exciton model with a time-dependent statistical model of the fusion-fission process, including dynamical effects for accurate calculation of the nucleon composition and excitation energy of the fissioning nucleus at the scission point. For each member of the compound nucleus ensemble at the scission point, the primary fission-fragment characteristics (kinetic and excitation energies and yields) are calculated using the scission-point fission model with inclusion of nuclear shell and pairing effects and a multimodal approach. The charge distribution of the primary fragment isobaric chains was treated as a result of the frozen quantal fluctuations of the isovector nuclear matter density at the scission point with a finite neck radius. Model parameters were obtained by comparing the predicted independent fission-product yields with experimental results and with the neutron-rich fission-product data measured with a Penning trap at the Accelerator Laboratory of the University of Jyväskylä (JYFLTRAP).
Cao, Xueren; Luo, Yong; Zhou, Yilin; Fan, Jieru; Xu, Xiangming; West, Jonathan S.; Duan, Xiayu; Cheng, Dengfa
2015-01-01
To determine the influence of plant density and powdery mildew infection of winter wheat and to predict grain yield, hyperspectral canopy reflectance of winter wheat was measured for two plant densities at Feekes growth stage (GS) 10.5.3, 10.5.4, and 11.1 in the 2009–2010 and 2010–2011 seasons. Reflectance in near infrared (NIR) regions was significantly correlated with disease index at GS 10.5.3, 10.5.4, and 11.1 at two plant densities in both seasons. For the two plant densities, the area of the red edge peak (Σdr 680–760 nm), difference vegetation index (DVI), and triangular vegetation index (TVI) were significantly correlated negatively with disease index at three GSs in two seasons. Compared with other parameters Σdr 680–760 nm was the most sensitive parameter for detecting powdery mildew. Linear regression models relating mildew severity to Σdr 680–760 nm were constructed at three GSs in two seasons for the two plant densities, demonstrating no significant difference in the slope estimates between the two plant densities at three GSs. Σdr 680–760 nm was correlated with grain yield at three GSs in two seasons. The accuracies of partial least square regression (PLSR) models were consistently higher than those of models based on Σdr 680–760 nm for disease index and grain yield. PLSR can, therefore, provide more accurate estimation of disease index of wheat powdery mildew and grain yield using canopy reflectance. PMID:25815468
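The spectral indices named above have widely used closed forms, sketched below under the assumption of the common definitions (DVI as an NIR-red reflectance difference, TVI in the Broge-Leblanc form, and Σdr 680–760 nm as the trapezoidal area under the first-derivative reflectance spectrum); the study's exact band choices are not given in the abstract, and the reflectance values here are an invented toy spectrum.

```python
def first_derivative(wl, refl):
    """First-difference approximation of the derivative reflectance spectrum."""
    return [(refl[i + 1] - refl[i]) / (wl[i + 1] - wl[i])
            for i in range(len(wl) - 1)]

def red_edge_area(wl, refl, lo=680, hi=760):
    """Trapezoidal area under the first-derivative spectrum for derivative
    samples whose midpoint wavelengths fall inside [lo, hi] nm."""
    d = first_derivative(wl, refl)
    mids = [(wl[i] + wl[i + 1]) / 2.0 for i in range(len(wl) - 1)]
    area = 0.0
    for i in range(len(d) - 1):
        if lo <= mids[i] and mids[i + 1] <= hi:
            area += 0.5 * (d[i] + d[i + 1]) * (mids[i + 1] - mids[i])
    return area

def dvi(nir, red):
    """Difference vegetation index (assumed NIR - red form)."""
    return nir - red

def tvi(r550, r670, r750):
    """Triangular vegetation index (Broge-Leblanc form, an assumption here)."""
    return 0.5 * (120.0 * (r750 - r550) - 200.0 * (r670 - r550))

# Toy healthy-canopy spectrum across the red edge:
wl = [660, 680, 700, 720, 740, 760, 780]
refl = [0.05, 0.06, 0.12, 0.25, 0.38, 0.44, 0.46]
area = red_edge_area(wl, refl)
```

Mildew flattens the red-edge rise in reflectance, which is why Σdr 680–760 nm decreases with increasing disease index in the study.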
Vapor Wall Deposition in Chambers: Theoretical Considerations
NASA Astrophysics Data System (ADS)
McVay, R.; Cappa, C. D.; Seinfeld, J.
2014-12-01
In order to constrain the effects of vapor wall deposition on measured secondary organic aerosol (SOA) yields in laboratory chambers, Zhang et al. (2014) varied the seed aerosol surface area in toluene oxidation and observed a clear increase in the SOA yield with increasing seed surface area. Using a coupled vapor-particle dynamics model, we examine the extent to which this increase is the result of vapor wall deposition versus kinetic limitations arising from imperfect accommodation of organic species into the particle phase. We show that a seed surface area dependence of the SOA yield is present only when condensation of vapors onto particles is kinetically limited. The existence of kinetic limitation can be predicted by comparing the characteristic timescales of gas-phase reaction, vapor wall deposition, and gas-particle equilibration. The gas-particle equilibration timescale depends on the gas-particle accommodation coefficient αp. Regardless of the extent of kinetic limitation, vapor wall deposition depresses the SOA yield from that in its absence since vapor molecules that might otherwise condense on particles deposit on the walls. To accurately extrapolate chamber-derived yields to atmospheric conditions, both vapor wall deposition and kinetic limitations must be taken into account.
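The competition between condensation onto particles and deposition to the walls described above can be illustrated with a first-order loss-rate comparison. The free-molecular condensation-sink estimate, the wall-loss rate, and all numeric values below are simplifying assumptions for illustration, not the coupled vapor-particle dynamics model of the paper.

```python
import math

def condensation_sink(surface_area_um2_cm3, alpha_p, molar_mass_g=200.0, T=298.0):
    """Rough first-order vapor loss rate to particles (1/s) in the
    free-molecular limit: k_p = (alpha_p * c_bar / 4) * S, where S is the
    seed surface area per unit volume and c_bar the mean molecular speed."""
    R = 8.314
    c_bar = math.sqrt(8.0 * R * T / (math.pi * molar_mass_g * 1e-3))  # m/s
    S = surface_area_um2_cm3 * 1e-12 / 1e-6   # um^2/cm^3 -> m^2/m^3
    return alpha_p * c_bar / 4.0 * S

def particle_fraction(k_p, k_wall):
    """Fraction of condensable vapor reaching particles rather than walls."""
    return k_p / (k_p + k_wall)

# Hypothetical chamber: 1000 um^2/cm^3 seed area, wall loss 1e-3 1/s.
f_fast = particle_fraction(condensation_sink(1000.0, alpha_p=1.0), 1e-3)
f_slow = particle_fraction(condensation_sink(1000.0, alpha_p=0.001), 1e-3)
```

With perfect accommodation (alpha_p = 1) nearly all vapor reaches the particles, so the yield is insensitive to seed area; with alpha_p = 0.001 the walls win and the measured yield becomes strongly seed-area dependent, mirroring the kinetic-limitation argument in the abstract.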
Training set selection for the prediction of essential genes.
Cheng, Jian; Xu, Zhao; Wu, Wenwu; Zhao, Li; Li, Xiangchen; Liu, Yanlin; Tao, Shiheng
2014-01-01
Various computational models have been developed to transfer annotations of gene essentiality between organisms. However, despite the increasing number of microorganisms with well-characterized sets of essential genes, selection of appropriate training sets for predicting the essential genes of poorly-studied or newly sequenced organisms remains challenging. In this study, a machine learning approach was applied reciprocally to predict the essential genes in 21 microorganisms. Results showed that training set selection greatly influenced predictive accuracy. We determined four criteria for training set selection: (1) essential genes in the selected training set should be reliable; (2) the growth conditions in which essential genes are defined should be consistent in training and prediction sets; (3) species used as training set should be closely related to the target organism; and (4) organisms used as training and prediction sets should exhibit similar phenotypes or lifestyles. We then analyzed the performance of an incomplete training set and an integrated training set with multiple organisms. We found that the size of the training set should be at least 10% of the total genes to yield accurate predictions. Additionally, the integrated training sets exhibited remarkable increase in stability and accuracy compared with single sets. Finally, we compared the performance of the integrated training sets with the four criteria and with random selection. The results revealed that a rational selection of training sets based on our criteria yields better performance than random selection. Thus, our results provide empirical guidance on training set selection for the identification of essential genes on a genome-wide scale.
Calibrating genomic and allelic coverage bias in single-cell sequencing.
Zhang, Cheng-Zhong; Adalsteinsson, Viktor A; Francis, Joshua; Cornils, Hauke; Jung, Joonil; Maire, Cecile; Ligon, Keith L; Meyerson, Matthew; Love, J Christopher
2015-04-16
Artifacts introduced in whole-genome amplification (WGA) make it difficult to derive accurate genomic information from single-cell genomes and require different analytical strategies from bulk genome analysis. Here, we describe statistical methods to quantitatively assess the amplification bias resulting from whole-genome amplification of single-cell genomic DNA. Analysis of single-cell DNA libraries generated by different technologies revealed universal features of the genome coverage bias predominantly generated at the amplicon level (1-10 kb). The magnitude of coverage bias can be accurately calibrated from low-pass sequencing (∼0.1 × ) to predict the depth-of-coverage yield of single-cell DNA libraries sequenced at arbitrary depths. We further provide a benchmark comparison of single-cell libraries generated by multi-strand displacement amplification (MDA) and multiple annealing and looping-based amplification cycles (MALBAC). Finally, we develop statistical models to calibrate allelic bias in single-cell whole-genome amplification and demonstrate a census-based strategy for efficient and accurate variant detection from low-input biopsy samples.
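A minimal version of the depth-of-coverage calibration idea sketched above: estimate relative amplicon bias weights from low-pass counts, then predict the fraction of loci covered at a deeper sequencing depth under a Poisson read-count assumption. The smoothing constant and the counts are invented, and the paper's statistical models are considerably more elaborate.

```python
import math

def predict_covered_fraction(bias_weights, mean_depth, min_depth=1):
    """Expected fraction of loci covered at >= min_depth, assuming the read
    count at each locus is Poisson with mean = mean_depth * relative bias."""
    frac = 0.0
    for w in bias_weights:
        lam = mean_depth * w
        p_below = sum(math.exp(-lam) * lam ** k / math.factorial(k)
                      for k in range(min_depth))
        frac += 1.0 - p_below
    return frac / len(bias_weights)

# Calibrate relative bias weights from noisy low-pass counts; the +0.5
# pseudo-count is a crude shrinkage choice, purely for illustration.
low_pass_counts = [2, 0, 5, 1, 3, 0, 8, 1, 2, 3]
mean_count = sum(low_pass_counts) / len(low_pass_counts)
weights = [(c + 0.5) / (mean_count + 0.5) for c in low_pass_counts]

frac_30x = predict_covered_fraction(weights, 30.0)
```

The key property, as in the paper, is that uneven bias always lowers the predicted coverage yield relative to a uniform library at the same mean depth.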
Baumeier, Björn; Andrienko, Denis; Rohlfing, Michael
2012-08-14
Excited states of donor-acceptor dimers are studied using many-body Green's functions theory within the GW approximation and the Bethe-Salpeter equation. For a series of prototypical small-molecule-based pairs, this method predicts energies of local Frenkel and intermolecular charge-transfer excitations with an accuracy of tens of meV. Application to larger systems is possible and allowed us to analyze energy levels and binding energies of excitons in representative dimers of dicyanovinyl-substituted quarterthiophene and fullerene, a donor-acceptor pair used in state-of-the-art organic solar cells. In these dimers, the transition from Frenkel to charge-transfer excitons is endothermic and the binding energy of charge-transfer excitons is still of the order of 1.5-2 eV. Hence, even such an accurate dimer-based description does not yield internal energetics favorable for the generation of free charges either by thermal energy or an external electric field. These results confirm that, for qualitative predictions of solar cell functionality, accounting for the explicit molecular environment is as important as accurate knowledge of the internal dimer energies.
Predicting age from cortical structure across the lifespan.
Madan, Christopher R; Kensinger, Elizabeth A
2018-03-01
Despite interindividual differences in cortical structure, cross-sectional and longitudinal studies have demonstrated a large degree of population-level consistency in age-related differences in brain morphology. This study assessed how accurately an individual's age could be predicted by estimates of cortical morphology, comparing a variety of structural measures, including thickness, gyrification and fractal dimensionality. Structural measures were calculated across up to seven different parcellation approaches, ranging from one region to 1000 regions. The age prediction framework was trained using morphological measures obtained from T1-weighted MRI volumes collected from multiple sites, yielding a training dataset of 1056 healthy adults, aged 18-97. Age predictions were calculated using a machine-learning approach that incorporated nonlinear differences over the lifespan. In two independent, held-out test samples, age predictions had a median error of 6-7 years. Age predictions were best when using a combination of cortical metrics, both thickness and fractal dimensionality. Overall, the results reveal that age-related differences in brain structure are systematic enough to enable reliable age prediction based on metrics of cortical morphology. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Single-step methods for predicting orbital motion considering its periodic components
NASA Astrophysics Data System (ADS)
Lavrov, K. N.
1989-01-01
Modern numerical methods for the integration of ordinary differential equations can provide accurate and universal solutions to celestial mechanics problems. The implicit single-sequence algorithms of Everhart and multistep computational schemes that use a priori information on periodic components can be combined into implicit single-sequence algorithms sharing the advantages of both. The construction and properties of such algorithms are studied using trigonometric approximation of the solutions of differential equations containing periodic components. The algorithms require 10 percent more machine memory than the Everhart algorithms but are twice as fast, and they yield short-term predictions valid for five to ten orbits with good accuracy, five to six times faster than algorithms based on other methods.
A temperature match based optimization method for daily load prediction considering DLC effect
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Z.
This paper presents a unique optimization method for short term load forecasting. The new method is based on the optimal template temperature match between the future and past temperatures. The optimal error reduction technique is a new concept introduced in this paper. Two case studies show that for hourly load forecasting, this method can yield results as good as the rather complicated Box-Jenkins transfer function method, and better than the Box-Jenkins method; for peak load prediction, this method is comparable in accuracy to the neural network method with back propagation, and can produce more accurate results than the multi-linear regression method. The DLC effect on system load is also considered in this method.
NASA Technical Reports Server (NTRS)
Green, S.; Cochrane, D. L.; Truhlar, D. G.
1986-01-01
The utility of the energy-corrected sudden (ECS) scaling method is evaluated on the basis of how accurately it predicts the entire matrix of state-to-state rate constants when the fundamental rate constants are independently known. It is shown for the case of Ar-CO collisions at 500 K that when the critical impact parameter is about 1.75-2.0 Å, the ECS method yields excellent excited state rates on average and has an rms error of less than 20 percent.
NASA Astrophysics Data System (ADS)
Costanzi, Stefano; Tikhonova, Irina G.; Harden, T. Kendall; Jacobson, Kenneth A.
2009-11-01
Accurate in silico models for the quantitative prediction of the activity of G protein-coupled receptor (GPCR) ligands would greatly facilitate the process of drug discovery and development. Several methodologies have been developed based on the properties of the ligands, the direct study of the receptor-ligand interactions, or a combination of both approaches. Ligand-based three-dimensional quantitative structure-activity relationships (3D-QSAR) techniques, not requiring knowledge of the receptor structure, have been historically the first to be applied to the prediction of the activity of GPCR ligands. They are generally endowed with robustness and good ranking ability; however they are highly dependent on training sets. Structure-based techniques generally do not provide the level of accuracy necessary to yield meaningful rankings when applied to GPCR homology models. However, they are essentially independent from training sets and have a sufficient level of accuracy to allow an effective discrimination between binders and nonbinders, thus qualifying as viable lead discovery tools. The combination of ligand and structure-based methodologies in the form of receptor-based 3D-QSAR and ligand and structure-based consensus models results in robust and accurate quantitative predictions. The contribution of the structure-based component to these combined approaches is expected to become more substantial and effective in the future, as more sophisticated scoring functions are developed and more detailed structural information on GPCRs is gathered.
Development and evaluation of height diameter at breast models for native Chinese Metasequoia.
Liu, Mu; Feng, Zhongke; Zhang, Zhixiang; Ma, Chenghui; Wang, Mingming; Lian, Bo-Ling; Sun, Renjie; Zhang, Li
2017-01-01
Accurate tree height and diameter at breast height (dbh) are important input variables for growth and yield models. A total of 5503 Chinese Metasequoia trees were used in this study. We studied 53 fitted models, of which 7 were linear and 46 were non-linear. These models were divided into two groups, single models and multivariate models, according to the number of independent variables. The results show that the allometric equation for tree height with dbh as the independent variable better reflects the variation in tree height; in addition, the prediction accuracy of the multivariate composite models is higher than that of the single-variable models. Although tree age is not the most important variable in the study of the relationship between tree height and dbh, considering tree age when choosing models and parameters makes the prediction of tree height more accurate. The amount of data is also an important factor that can improve the reliability of models. Other variables, such as tree height, main dbh, and altitude, can also affect the models. In this study, the method of developing the recommended models for predicting the tree height of native Metasequoias aged 50-485 years is statistically reliable and can be used as a reference in predicting the growth and production of mature native Metasequoia.
PMID:28817600
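The height-dbh relationship described above is commonly modeled with an allometric equation. The sketch below fits a simple power-law form, H = 1.3 + a·D^b, by least squares in log-log space; the specific functional form and the closed-form fit are illustrative only (the study compared 53 candidate model forms, not this one in particular).

```python
import math

def fit_power_allometry(dbh, height, breast_height=1.3):
    """Fit H = breast_height + a * D**b by least squares in log-log space.

    A single-predictor allometric sketch, not one of the study's 53 models.
    """
    xs = [math.log(d) for d in dbh]
    ys = [math.log(h - breast_height) for h in height]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx                      # slope in log-log space = exponent
    a = math.exp(my - b * mx)          # intercept back-transformed
    return a, b

def predict_height(a, b, d, breast_height=1.3):
    return breast_height + a * d ** b

# Synthetic check: data generated with a = 2.0, b = 0.7 is recovered.
dbh = [10, 20, 30, 40, 60]
height = [1.3 + 2.0 * d ** 0.7 for d in dbh]
a, b = fit_power_allometry(dbh, height)
```

Because the synthetic heights are exactly log-log linear, the least-squares fit recovers the generating parameters to machine precision.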
NASA Astrophysics Data System (ADS)
McLaughlin, P. W.; Kaihatu, J. M.; Irish, J. L.; Taylor, N. R.; Slinn, D.
2013-12-01
Recent hurricane activity in the Gulf of Mexico has led to a need for accurate, computationally efficient prediction of hurricane damage so that communities can better assess the risk of local socio-economic disruption. This study focuses on developing robust, physics-based non-dimensional equations that accurately predict maximum significant wave height at different locations near a given hurricane track. These equations (denoted wave response functions, or WRFs) were developed from presumed physical dependencies between wave heights and hurricane characteristics and fit with data from numerical models of waves and surge under hurricane conditions. After curve fitting, constraints correcting for fully developed sea state were used to limit wind wave growth. When applied to the region near Gulfport, MS, back-prediction of maximum significant wave height yielded root mean square errors of 0.22-0.42 m at open-coast stations and 0.07-0.30 m at bay stations when compared to the numerical model data. The WRF method was also applied to Corpus Christi, TX and Panama City, FL with similar results. Back-prediction errors will be included in uncertainty evaluations connected to risk calculations using joint probability methods. These methods require thousands of simulations to quantify extreme value statistics, thus requiring the use of reduced methods such as the WRF to represent the relevant physical processes.
Evaluation of a non-point source pollution model, AnnAGNPS, in a tropical watershed
Polyakov, V.; Fares, A.; Kubo, D.; Jacobi, J.; Smith, C.
2007-01-01
Impaired water quality caused by human activity and the spread of invasive plant and animal species has been identified as a major factor in the degradation of coastal ecosystems in the tropics. The main goal of this study was to evaluate the performance of AnnAGNPS (Annualized Non-Point Source Pollution Model) in simulating runoff and soil erosion in a 48 km² watershed located on the Island of Kauai, Hawaii. The model was calibrated and validated using 2 years of observed stream flow and sediment load data. Alternative scenarios of spatial rainfall distribution and canopy interception were evaluated. Monthly runoff volumes predicted by AnnAGNPS compared well with the measured data (R² = 0.90, P < 0.05); however, up to 60% difference between the actual and simulated runoff was observed during the driest months (May and July). Prediction of daily runoff was less accurate (R² = 0.55, P < 0.05). Predicted and observed sediment yield on a daily basis was poorly correlated (R² = 0.5, P < 0.05). For events of small magnitude the model generally overestimated sediment yield, while the opposite was true for larger events. Total monthly sediment yield varied within 50% of the observed values, except for May 2004. Among the input parameters, the model was most sensitive to the values of ground residue cover and canopy cover. It was found that approximately one third of the watershed area had low sediment yield (0-1 t ha⁻¹ y⁻¹) and presented limited erosion threat. However, 5% of the area had sediment yields in excess of 5 t ha⁻¹ y⁻¹. Overall, the model performed reasonably well, and it can be used as a management tool on tropical watersheds to estimate and compare sediment loads and identify "hot spots" on the landscape. © 2007 Elsevier Ltd. All rights reserved.
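The goodness-of-fit statistics quoted above (R² between simulated and observed runoff) are simple to compute; hydrologic model evaluations also commonly report the Nash-Sutcliffe efficiency. A minimal sketch of both metrics, assuming paired observed/simulated series (the abstract does not specify which R² variant was used):

```python
def r_squared(observed, simulated):
    """Squared correlation between observed and simulated series."""
    n = len(observed)
    mo = sum(observed) / n
    ms = sum(simulated) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(observed, simulated))
    vo = sum((o - mo) ** 2 for o in observed)
    vs = sum((s - ms) ** 2 for s in simulated)
    return cov * cov / (vo * vs)

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations."""
    n = len(observed)
    mo = sum(observed) / n
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    vo = sum((o - mo) ** 2 for o in observed)
    return 1.0 - sse / vo
```

A perfect simulation scores 1.0 on both metrics; Nash-Sutcliffe, unlike R², also penalizes bias and falls below 1 for any misfit.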
Hatfield, Laura A.; Gutreuter, Steve; Boogaard, Michael A.; Carlin, Bradley P.
2011-01-01
Estimation of extreme quantal-response statistics, such as the concentration required to kill 99.9% of test subjects (LC99.9), remains a challenge in the presence of multiple covariates and complex study designs. Accurate and precise estimates of the LC99.9 for mixtures of toxicants are critical to ongoing control of a parasitic invasive species, the sea lamprey, in the Laurentian Great Lakes of North America. The toxicity of those chemicals is affected by local and temporal variations in water chemistry, which must be incorporated into the modeling. We develop multilevel empirical Bayes models for data from multiple laboratory studies. Our approach yields more accurate and precise estimation of the LC99.9 compared to alternative models considered. This study demonstrates that properly incorporating hierarchical structure in laboratory data yields better estimates of LC99.9 stream treatment values that are critical to larvae control in the field. In addition, out-of-sample prediction of the results of in situ tests reveals the presence of a latent seasonal effect not manifest in the laboratory studies, suggesting avenues for future study and illustrating the importance of dual consideration of both experimental and observational data.
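For a single study, an extreme quantile such as the LC99.9 can be read off an inverted logistic dose-response curve. The sketch below shows only that inversion step, with hypothetical coefficients; the paper's multilevel Bayes model additionally pools across laboratory studies and adjusts for water chemistry, which this does not attempt.

```python
import math

def lc_from_logit(p, intercept, slope):
    """Invert logit(p) = intercept + slope * log10(C) to the LC_p.

    Single-study sketch; intercept and slope here are illustrative,
    not estimates from the sea lamprey data.
    """
    logit_p = math.log(p / (1.0 - p))
    return 10.0 ** ((logit_p - intercept) / slope)

# Hypothetical dose-response coefficients for illustration only.
lc999 = lc_from_logit(0.999, intercept=-12.0, slope=8.0)
lc50 = lc_from_logit(0.5, intercept=-12.0, slope=8.0)
```

Because logit(0.5) = 0, the LC50 depends only on -intercept/slope, while the LC99.9 sits far out on the curve, which is why its estimate is so sensitive to the fitted slope.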
The influence of collective neutrino oscillations on a supernova r process
NASA Astrophysics Data System (ADS)
Duan, Huaiyu; Friedland, Alexander; McLaughlin, Gail C.; Surman, Rebecca
2011-03-01
Recently, it has been demonstrated that neutrinos in a supernova oscillate collectively. This process occurs much deeper than the conventional matter-induced Mikheyev-Smirnov-Wolfenstein effect and hence may have an impact on nucleosynthesis. In this paper we explore the effects of collective neutrino oscillations on the r-process, using representative late-time neutrino spectra and outflow models. We find that accurate modeling of the collective oscillations is essential for this analysis. As an illustration, the often-used 'single-angle' approximation makes grossly inaccurate predictions for the yields in our setup. With the proper multiangle treatment, the effect of the oscillations is found to be less dramatic, but still significant. Since the oscillation patterns are sensitive to the details of the emitted fluxes and the sign of the neutrino mass hierarchy, so are the r-process yields. The magnitude of the effect also depends sensitively on the astrophysical conditions—in particular on the interplay between the time when nuclei begin to exist in significant numbers and the time when the collective oscillation begins. A more definitive understanding of the astrophysical conditions, and accurate modeling of the collective oscillations for those conditions, is necessary.
NASA Astrophysics Data System (ADS)
Smith, D. P.; Kvitek, R.; Quan, S.; Iampietro, P.; Paddock, E.; Richmond, S. F.; Gomez, K.; Aiello, I. W.; Consulo, P.
2009-12-01
Models of watershed sediment yield are complicated by spatial and temporal variability of geologic substrate, land cover, and precipitation parameters. Episodic events such as ENSO cycles and severe wildfire are frequent enough to matter in the long-term average yield, and they can produce short-lived, extreme geomorphic responses. The sediment yield from extreme events is difficult to capture accurately because of the obvious dangers associated with field measurements during flood conditions, but it is critical to include extreme values for developing realistic models of rainfall-sediment yield relations, and for calculating long-term average denudation rates. Dammed rivers provide a time-honored natural laboratory for quantifying average annual sediment yield and extreme-event sediment yield. While lead-line surveys of the past provided crude estimates of reservoir sediment trapping, recent advances in geospatial technology now provide unprecedented opportunities to improve volume change measurements. High-precision digital elevation models surveyed on an annual basis, or before and after specific rainfall-runoff events, can be used to quantify relations between rainfall and sediment yield as a function of landscape parameters, including spatially explicit fire intensity. The Basin Complex Fire of June and July 2008 resulted in moderate to severe burns in the 114 km^2 portion of the Carmel River watershed above Los Padres Dam. The US Geological Survey produced a debris flow probability/volume model for the region indicating that the reservoir could lose considerable capacity if intense enough precipitation occurred in the 2009-10 winter. Loss of Los Padres reservoir capacity has implications for endangered steelhead and red-legged frogs, and for groundwater and municipal water supply.
In anticipation of potentially catastrophic erosion, we produced an accurate volume calculation of the Los Padres reservoir in fall 2009, and locally monitored hillslope and fluvial processes during winter months. The pre-runoff reservoir volume was developed by collecting and merging sonar and LiDAR data from a small research skiff equipped with a high-precision positioning and attitude-correcting system. The terrestrial LiDAR data were augmented with shore-based total station positioning. Watershed monitoring included benchmarked serial stream surveys and semi-quantitative assessment of a variety of near-channel colluvial processes. Rainfall in the 2009-10 water year was not intense enough to trigger widespread debris flows or slope failures in the burned watershed, but dry ravel was apparently accelerated. The geomorphic analysis showed that sediment yield was not significantly higher during this low-rainfall year, despite the widespread presence of very steep, fire-impacted slopes. Because there was little to no increase in sediment yield this year, we have postponed our second reservoir survey. An ENSO event that might bring very intense rains to the watershed is currently predicted for winter 2009-10.
Supercritical water oxidation of quinazoline: Reaction kinetics and modeling.
Gong, Yanmeng; Guo, Yang; Wang, Shuzhong; Song, Wenhan; Xu, Donghai
2017-03-01
This paper presents a first quantitative kinetic model for supercritical water oxidation (SCWO) of quinazoline that describes the formation and interconversion of intermediates and final products at 673-873 K. The set of 11 reaction pathways for phenol, pyrimidine, naphthalene, NH₃, etc., involved in the simplified reaction network proved sufficient for fitting the experimental results satisfactorily. We validated the model's predictive ability on CO₂ yields at initial quinazoline loadings not used in the parameter estimation. Reaction rate analysis and sensitivity analysis indicate that nearly all reactions reach their thermodynamic equilibrium within 300 s. Pyrimidine formation from quinazoline is the dominant ring-opening pathway and provides a significant contribution to CO₂ formation. The low sensitivity of the NH₃ decomposition rate to concentration confirms its refractory nature in SCWO. Nitrogen content in liquid products decreases whereas that in the gaseous phase increases as reaction time is prolonged. The nitrogen predicted by the model in the gaseous phase, combined with the experimental nitrogen in liquid products, gives an accurate nitrogen balance for the conversion process. Copyright © 2016 Elsevier Ltd. All rights reserved.
Lim, Hojun; Battaile, Corbett C.; Brown, Justin L.; ...
2016-06-14
In this work, we develop a tantalum strength model that incorporates effects of temperature, strain rate and pressure. Dislocation kink-pair theory is used to incorporate temperature and strain rate effects, while the pressure dependent yield is obtained through the pressure dependent shear modulus. Material constants used in the model are parameterized from tantalum single crystal tests and polycrystalline ramp compression experiments. It is shown that the proposed strength model agrees well with the temperature and strain rate dependent yield obtained from polycrystalline tantalum experiments. Furthermore, the model accurately reproduces the pressure dependent yield stresses up to 250 GPa. The proposed strength model is then used to conduct simulations of a Taylor cylinder impact test and validated with experiments. This approach provides a physically-based multi-scale strength model that is able to predict the plastic deformation of polycrystalline tantalum through a wide range of temperature, strain and pressure regimes.
Tan, Kok Tat; Lee, Keat Teong; Mohamed, Abdul Rahman
2010-02-01
In this study, fatty acid methyl esters (FAME) were successfully produced from the transesterification reaction between triglycerides and methyl acetate, instead of alcohol. In this non-catalytic supercritical methyl acetate (SCMA) technology, triacetin, a valuable biodiesel additive, is produced as the side product rather than glycerol, which has lower commercial value. In addition, the properties of the biodiesel (FAME and triacetin) were found to be superior to those produced from conventional catalytic reactions (FAME only). The effects of various important parameters on the yield of biodiesel were then optimized using response surface methodology (RSM). The mathematical model developed was found to be adequate and statistically accurate in predicting the optimum yield of biodiesel. The optimum conditions were found to be a reaction temperature of 399 °C, a methyl acetate to oil molar ratio of 30 mol/mol and a reaction time of 59 min, achieving a 97.6% biodiesel yield.
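RSM fits a second-order polynomial to the response and locates its stationary point. The sketch below evaluates a hypothetical quadratic surface centered on the optimum reported above (399 °C, 30 mol/mol, 59 min, 97.6% yield) and recovers it by grid search; the coefficients are invented for illustration and are not the fitted RSM model from the study.

```python
def rsm_yield(T, M, t):
    """Hypothetical second-order response surface for biodiesel yield (%).

    Coefficients are illustrative only; the surface is constructed so
    its maximum sits at the study's reported optimum conditions.
    """
    return (97.6
            - 0.002 * (T - 399) ** 2   # reaction temperature, deg C
            - 0.05 * (M - 30) ** 2     # methyl acetate : oil molar ratio
            - 0.01 * (t - 59) ** 2)    # reaction time, min

def grid_optimum(f, Ts, Ms, ts):
    """Exhaustive search for the factor combination maximizing f."""
    best = None
    for T in Ts:
        for M in Ms:
            for t in ts:
                y = f(T, M, t)
                if best is None or y > best[0]:
                    best = (y, T, M, t)
    return best

best = grid_optimum(rsm_yield,
                    range(380, 421),   # temperature grid
                    range(20, 41),     # molar ratio grid
                    range(40, 81))     # time grid
```

In practice the optimum of a fitted quadratic is found analytically from the stationary point of the polynomial; the grid search here just makes the idea concrete.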
Automatic CT simulation optimization for radiation therapy: A general strategy.
Li, Hua; Yu, Lifeng; Anastasio, Mark A; Chen, Hsin-Chen; Tan, Jun; Gay, Hiram; Michalski, Jeff M; Low, Daniel A; Mutic, Sasa
2014-03-01
In radiation therapy, x-ray computed tomography (CT) simulation protocol specifications should be driven by the treatment planning requirements rather than duplicating diagnostic CT screening protocols. The purpose of this study was to develop a general strategy that allows for automatically, prospectively, and objectively determining the optimal patient-specific CT simulation protocols based on radiation-therapy goals, namely, maintenance of contouring quality and integrity while minimizing patient CT simulation dose. The authors proposed a general prediction strategy that provides automatic optimal CT simulation protocol selection as a function of patient size and treatment planning task. The optimal protocol is the one that delivers the minimum dose required to provide a CT simulation scan that yields accurate contours. Accurate treatment plans depend on accurate contours in order to conform the dose to actual tumor and normal organ positions. An image quality index, defined to characterize how simulation scan quality affects contour delineation, was developed and used to benchmark the contouring accuracy and treatment plan quality within the prediction strategy. A clinical workflow was developed to select the optimal CT simulation protocols incorporating patient size, target delineation, and radiation dose efficiency. An experimental study using an anthropomorphic pelvis phantom with added-bolus layers was used to demonstrate how the proposed prediction strategy could be implemented and how the optimal CT simulation protocols could be selected for prostate cancer patients based on patient size and treatment planning task. Clinical IMRT prostate treatment plans for seven CT scans with varied image quality indices were separately optimized and compared to verify the trace of target and organ dosimetry coverage. Based on the phantom study, the optimal image quality index for accurate manual prostate contouring was 4.4.
The optimal tube potentials for patient sizes of 38, 43, 48, 53, and 58 cm were 120, 140, 140, 140, and 140 kVp, respectively, and the corresponding minimum CTDIvol values for achieving the optimal image quality index of 4.4 were 9.8, 32.2, 100.9, 241.4, and 274.1 mGy, respectively. For patients with lateral sizes of 43-58 cm, 120-kVp scan protocols yielded up to 165% greater radiation dose relative to 140-kVp protocols, and 140-kVp protocols always yielded a greater image quality index compared to 120-kVp protocols at the same dose level. The trace of target and organ dosimetry coverage and the γ passing rates of seven IMRT dose distribution pairs indicated the feasibility of the proposed image quality index for the prediction strategy. A general strategy to predict the optimal CT simulation protocols in a flexible and quantitative way was developed that takes into account patient size, treatment planning task, and radiation dose. The experimental study indicated that the optimal CT simulation protocol and the corresponding radiation dose varied significantly for different patient sizes, contouring accuracy, and radiation treatment planning tasks.
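The size-to-protocol mapping reported above can be expressed as a simple lookup. The table below transcribes the phantom-study numbers from the abstract; the nearest-size selection rule is an assumption for illustration, and the full strategy of course also validates the resulting scan against the image quality index.

```python
# Protocol table from the phantom study: lateral patient size in cm ->
# (tube potential in kVp, minimum CTDIvol in mGy for image quality index 4.4).
PROTOCOLS = {
    38: (120, 9.8),
    43: (140, 32.2),
    48: (140, 100.9),
    53: (140, 241.4),
    58: (140, 274.1),
}

def select_protocol(lateral_size_cm):
    """Pick the protocol for the nearest tabulated patient size.

    Nearest-neighbor interpolation is an assumed simplification of the
    study's workflow, which scores image quality directly.
    """
    nearest = min(PROTOCOLS, key=lambda s: abs(s - lateral_size_cm))
    return PROTOCOLS[nearest]
```

For example, a 48 cm patient maps to the 140 kVp / 100.9 mGy entry, while sizes below about 40 cm fall back to the lower-dose 120 kVp protocol.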
Sills, Deborah L; Gossett, James M
2012-04-01
Fourier transform infrared, attenuated total reflectance (FTIR-ATR) spectroscopy, combined with partial least squares (PLS) regression, accurately predicted solubilization of plant cell wall constituents and NaOH consumption through pretreatment, and overall sugar production from combined pretreatment and enzymatic hydrolysis. PLS regression models were constructed by correlating FTIR spectra of six raw biomasses (two switchgrass cultivars, big bluestem grass, a low-impact, high-diversity mixture of prairie biomasses, mixed hardwood, and corn stover), plus alkali loading in pretreatment, to nine dependent variables: glucose, xylose, lignin, and total solids solubilized in pretreatment; NaOH consumed in pretreatment; and overall glucose and xylose conversions and yields from combined pretreatment and enzymatic hydrolysis. PLS models predicted the dependent variables with the following values of the coefficient of determination for cross-validation (Q²): 0.86 for glucose, 0.90 for xylose, 0.79 for lignin, and 0.85 for total solids solubilized in pretreatment; 0.83 for alkali consumption; 0.93 for glucose conversion, 0.94 for xylose conversion, and 0.88 for glucose and xylose yields. The sugar yield models are noteworthy for their ability to predict overall saccharification through combined pretreatment and enzymatic hydrolysis per mass dry untreated solids without a priori knowledge of the composition of solids. All wavenumbers with significant variable-importance-in-projection (VIP) scores have been attributed to chemical features of lignocellulose, demonstrating that the models were based on real chemical information. These models suggest that PLS regression can be applied to FTIR-ATR spectra of raw biomasses to rapidly predict effects of pretreatment on solids and on subsequent enzymatic hydrolysis. Copyright © 2011 Wiley Periodicals, Inc.
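The Q² statistic quoted above is a cross-validated coefficient of determination: 1 minus the predicted residual sum of squares (PRESS) over the total sum of squares, with each sample predicted by a model fitted without it. The sketch below computes leave-one-out Q² for a univariate least-squares line as a stand-in; the study applies the same idea to multivariate PLS models on full spectra.

```python
def loo_q2(x, y):
    """Leave-one-out Q² = 1 - PRESS / TSS for a univariate linear model.

    Stand-in for PLS cross-validation: with a single retained predictor,
    PLS reduces to ordinary least-squares regression.
    """
    n = len(x)
    press = 0.0
    for i in range(n):                     # hold out sample i
        xs = [x[j] for j in range(n) if j != i]
        ys = [y[j] for j in range(n) if j != i]
        mx = sum(xs) / (n - 1)
        my = sum(ys) / (n - 1)
        sxx = sum((v - mx) ** 2 for v in xs)
        sxy = sum((v - mx) * (w - my) for v, w in zip(xs, ys))
        b = sxy / sxx
        a = my - b * mx
        press += (y[i] - (a + b * x[i])) ** 2
    my_all = sum(y) / n
    tss = sum((v - my_all) ** 2 for v in y)
    return 1.0 - press / tss
```

Exactly linear data gives Q² = 1; any noise pushes Q² below 1, and a model no better than the mean gives Q² near or below 0.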
Assessment of the Applicability of Hertzian Contact Theory to Edge-Loaded Prosthetic Hip Bearings
Sanders, Anthony P.; Brannon, Rebecca M.
2011-01-01
The components of prosthetic hip bearings may experience in-vivo subluxation and edge loading on the acetabular socket as a result of joint laxity, causing abnormally high, damaging contact stresses. In this research, edge-loaded contact of prosthetic hips is examined analytically and experimentally in the most commonly used categories of material pairs. In edge-loaded ceramic-on-ceramic hips, Hertzian contact theory yields accurate (conservatively, <10% error) predictions of the contact dimensions. Moreover, Hertzian theory successfully captures slope and curvature trends in the dependence of contact patch geometry on the applied load. In an edge-loaded ceramic-on-metal pair, a similar degree of accuracy is observed in the contact patch length; however, the contact width is less accurately predicted due to the onset of subsurface plasticity, which is predicted for loads >400 N. Hertzian contact theory is shown to be ill-suited to edge-loaded ceramic-on-polyethylene pairs due to polyethylene’s nonlinear material behavior. This work elucidates the methods and the accuracy of applying classical contact theory to edge-loaded hip bearings. The results help to define the applicability of Hertzian theory to the design of new components and materials to better resist severe edge loading contact stresses. PMID:21962465
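The edge-loaded geometry analyzed above produces an elliptical Hertzian contact; the sketch below shows the simpler classical circular point contact (sphere on plane) that underlies the same theory. The material constants and load are assumed, typical values for a ceramic pair, not parameters from the study.

```python
import math

def hertz_sphere_on_plane(F, R, E1, nu1, E2, nu2):
    """Hertzian circular point contact: radius a and peak pressure p0.

    Classical textbook case, not the elliptical edge-contact geometry
    of the study. F in N, R in m, moduli in Pa.
    """
    # Effective (contact) modulus of the material pair
    E_star = 1.0 / ((1.0 - nu1 ** 2) / E1 + (1.0 - nu2 ** 2) / E2)
    a = (3.0 * F * R / (4.0 * E_star)) ** (1.0 / 3.0)   # contact radius
    p0 = 3.0 * F / (2.0 * math.pi * a ** 2)             # peak pressure
    return a, p0

# Assumed alumina-on-alumina pair (E = 350 GPa, nu = 0.22),
# 2 kN load on a 14 mm radius femoral head.
a, p0 = hertz_sphere_on_plane(F=2000.0, R=0.014,
                              E1=350e9, nu1=0.22, E2=350e9, nu2=0.22)
```

With these assumed inputs the contact radius comes out at a fraction of a millimetre with a peak pressure of a few GPa, illustrating why stiff ceramic pairs concentrate stress so severely under edge loading; the Hertzian peak pressure is always 1.5 times the mean pressure over the circular patch.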
A Unified Model of Performance for Predicting the Effects of Sleep and Caffeine.
Ramakrishnan, Sridhar; Wesensten, Nancy J; Kamimori, Gary H; Moon, James E; Balkin, Thomas J; Reifman, Jaques
2016-10-01
Existing mathematical models of neurobehavioral performance cannot predict the beneficial effects of caffeine across the spectrum of sleep loss conditions, limiting their practical utility. Here, we closed this research gap by integrating a model of caffeine effects with the recently validated unified model of performance (UMP) into a single, unified modeling framework. We then assessed the accuracy of this new UMP in predicting performance across multiple studies. We hypothesized that the pharmacodynamics of caffeine vary similarly during both wakefulness and sleep, and that caffeine has a multiplicative effect on performance. Accordingly, to represent the effects of caffeine in the UMP, we multiplied a dose-dependent caffeine factor (which accounts for the pharmacokinetics and pharmacodynamics of caffeine) to the performance estimated in the absence of caffeine. We assessed the UMP predictions in 14 distinct laboratory- and field-study conditions, including 7 different sleep-loss schedules (from 5 h of sleep per night to continuous sleep loss for 85 h) and 6 different caffeine doses (from placebo to repeated 200 mg doses to a single dose of 600 mg). The UMP accurately predicted group-average psychomotor vigilance task performance data across the different sleep loss and caffeine conditions (6% < error < 27%), yielding greater accuracy for mild and moderate sleep loss conditions than for more severe cases. Overall, accounting for the effects of caffeine resulted in improved predictions (after caffeine consumption) by up to 70%. The UMP provides the first comprehensive tool for accurate selection of combinations of sleep schedules and caffeine countermeasure strategies to optimize neurobehavioral performance. © 2016 Associated Professional Sleep Societies, LLC.
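The key modeling idea above is that caffeine acts multiplicatively: predicted impairment is the caffeine-free prediction scaled by a dose- and time-dependent factor derived from caffeine pharmacokinetics. The sketch below uses a generic one-compartment PK curve with assumed round-number rate constants; these are illustrative placeholders, not the parameters fitted in the unified model of performance.

```python
import math

def caffeine_concentration(t_h, dose_mg, ka=2.0, ke=0.2):
    """One-compartment PK with first-order absorption (ka) and
    elimination (ke), both per hour; rate constants are assumed."""
    return dose_mg * ka / (ka - ke) * (math.exp(-ke * t_h) - math.exp(-ka * t_h))

def impairment_with_caffeine(p_impairment, t_h, dose_mg, b=0.002):
    """Multiplicative caffeine factor: caffeine-free impairment is
    scaled by g(t) in (0, 1]; b is an assumed potency constant."""
    g = 1.0 / (1.0 + b * caffeine_concentration(t_h, dose_mg))
    return p_impairment * g
```

At t = 0 no caffeine has been absorbed, so the factor is 1 and the prediction reduces to the caffeine-free model; larger doses shrink the factor further, mirroring the dose-dependent improvements reported above.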
NASA Astrophysics Data System (ADS)
Xu, Shiluo; Niu, Ruiqing
2018-02-01
Every year, landslides pose huge threats to thousands of people in China, especially those in the Three Gorges area. It is thus necessary to establish an early warning system to help prevent property damage and save people's lives. Most of the landslide displacement prediction models that have been proposed are static models. However, landslides are dynamic systems. In this paper, the total accumulative displacement of the Baijiabao landslide is divided into trend and periodic components using empirical mode decomposition. The trend component is predicted using an S-curve estimation, and the total periodic component is predicted using a long short-term memory neural network (LSTM). LSTM is a dynamic model that can remember historical information and apply it to the current output. Six triggering factors are chosen to predict the periodic term using the Pearson cross-correlation coefficient and mutual information. These factors include the cumulative precipitation during the previous month, the cumulative precipitation during a two-month period, the reservoir level during the current month, the change in the reservoir level during the previous month, the cumulative increment of the reservoir level during the current month, and the cumulative displacement during the previous month. When using one-step-ahead prediction, LSTM yields a root mean squared error (RMSE) value of 6.112 mm, while the support vector machine for regression (SVR) and the back-propagation neural network (BP) yield values of 10.686 mm and 8.237 mm, respectively. Meanwhile, the Elman network (Elman) yields an RMSE value of 6.579 mm. In addition, when using multi-step-ahead prediction, LSTM obtains an RMSE value of 8.648 mm, while SVR, BP and the Elman network obtain RMSE values of 13.418 mm, 13.014 mm, and 13.370 mm. The predicted results indicate that, to some extent, the dynamic model (LSTM) achieves results that are more accurate than those of the static models (i.e., SVR and BP).
LSTM even displays better performance than the Elman network, which is also a dynamic method.
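The RMSE figures reported above follow from the standard root-mean-squared-error computation; a minimal sketch, where the displacement values are purely illustrative and not the Baijiabao monitoring data:

```python
import numpy as np

def rmse(observed, predicted):
    """Root mean squared error between observed and predicted displacement (mm)."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((observed - predicted) ** 2)))

# Hypothetical monthly displacement values (mm), for illustration only.
observed = [120.0, 125.5, 131.2, 140.8]
predicted_lstm = [121.0, 124.0, 133.0, 139.5]
print(rmse(observed, predicted_lstm))
```

The total displacement prediction is then simply the sum of the separately predicted trend and periodic components.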
Wolfe, Marnin D; Kulakow, Peter; Rabbi, Ismail Y; Jannink, Jean-Luc
2016-08-31
In clonally propagated crops, non-additive genetic effects can be effectively exploited by the identification of superior genetic individuals as varieties. Cassava (Manihot esculenta Crantz) is a clonally propagated staple food crop that feeds hundreds of millions. We quantified the amount and nature of non-additive genetic variation for three key traits in a breeding population of cassava from sub-Saharan Africa using additive and non-additive genome-wide marker-based relationship matrices. We then assessed the accuracy of genomic prediction for total (additive plus non-additive) genetic value. We confirmed previous findings based on diallel populations that non-additive genetic variation is significant for key cassava traits. Specifically, we found that dominance is particularly important for root yield and that epistasis contributes strongly to variation in cassava mosaic disease (CMD) resistance. Further, we showed that total genetic value predicted observed phenotypes more accurately than additive-only models for root yield, but not for dry matter content, which is mostly additive, or for CMD resistance, which has high narrow-sense heritability. We address the implications of these results for cassava breeding and put our work in the context of previous results in cassava and other plant and animal species. Copyright © 2016 Author et al.
Mukherjee, Prabuddha; Lim, Sung Jun; Wrobel, Tomasz P; Bhargava, Rohit; Smith, Andrew M
2016-08-31
Nanocrystals composed of mixed chemical domains have diverse properties that are driving their integration in next-generation electronics, light sources, and biosensors. However, the precise spatial distribution of elements within these particles is difficult to measure and control, yet profoundly impacts their quality and performance. Here we synthesized a unique series of 42 different quantum dot nanocrystals, composed of two chemical domains (CdS:CdSe), arranged in 7 alloy and (core)shell structural classes. Chemometric analyses of far-field Raman spectra accurately classified their internal structures from their vibrational signatures. These classifications provide direct insight into the elemental arrangement of the alloy as well as an independent prediction of fluorescence quantum yield. This nondestructive, rapid approach can be broadly applied to greatly enhance our capacity to measure, predict and monitor multicomponent nanomaterials for precise tuning of their structures and properties.
An elastic-plastic contact model for line contact structures
NASA Astrophysics Data System (ADS)
Zhu, Haibin; Zhao, Yingtao; He, Zhifeng; Zhang, Ruinan; Ma, Shaopeng
2018-06-01
Although numerical simulation tools are now very powerful, analytical models remain important for predicting the mechanical behaviour of line contact structures, both for a deep understanding of contact problems and for engineering applications. For the line contact structures widely used in the engineering field, few analytical models are available for predicting the mechanical behaviour once the structures deform plastically, as classical Hertz theory becomes invalid. Thus, the present study proposed an elastic-plastic model for line contact structures based on an understanding of the yield mechanism. A mathematical expression describing the global relationship between load history and contact width evolution of line contact structures was obtained. The proposed model was verified through an actual line contact test and a corresponding numerical simulation. The results confirmed that this model can be used to accurately predict the elastic-plastic mechanical behaviour of a line contact structure.
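For reference, the elastic baseline that such a model extends is classical Hertz line contact. A minimal sketch of the textbook elastic contact half-width, with illustrative steel-on-steel values (not the paper's test case):

```python
import math

def hertz_line_contact_half_width(load_per_length, radius, E1, nu1, E2, nu2):
    """Elastic contact half-width b for a cylinder on a flat (Hertz line contact).

    load_per_length: normal load per unit axial length (N/m)
    radius: effective cylinder radius (m)
    The contact modulus E* is defined by 1/E* = (1-nu1^2)/E1 + (1-nu2^2)/E2,
    and b = sqrt(4 * w * R / (pi * E*)). Valid only below the onset of yield.
    """
    E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)
    return math.sqrt(4.0 * load_per_length * radius / (math.pi * E_star))

# Steel cylinder (R = 10 mm) pressed on a steel flat at 100 kN/m.
b = hertz_line_contact_half_width(1e5, 0.01, 210e9, 0.3, 210e9, 0.3)
print(f"contact half-width: {b * 1e6:.1f} micrometres")
```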
A Ffowcs Williams and Hawkings formulation for hydroacoustic analysis of propeller sheet cavitation
NASA Astrophysics Data System (ADS)
Testa, C.; Ianniello, S.; Salvatore, F.
2018-01-01
A novel hydroacoustic formulation for the prediction of tonal noise emitted by marine propellers in the presence of unsteady sheet cavitation is presented. The approach is based on the standard Ffowcs Williams and Hawkings equation and the use of transpiration (velocity and acceleration) terms accounting for the time evolution of the vapour cavity attached to the blade surface. Drawbacks and potentialities of the method are tested on a marine propeller operating in a nonhomogeneous onset flow, by exploiting hydrodynamic data from a potential-based panel method equipped with a sheet cavitation model and comparing the noise predictions with those carried out by an alternative numerical approach documented in the literature. It is shown that the proposed formulation yields a one-to-one correlation between emitted noise and sheet cavitation dynamics, providing accurate predictions in terms of noise magnitude and directivity.
HIV-1 protease cleavage site prediction based on two-stage feature selection method.
Niu, Bing; Yuan, Xiao-Cheng; Roeper, Preston; Su, Qiang; Peng, Chun-Rong; Yin, Jing-Yuan; Ding, Juan; Li, HaiPeng; Lu, Wen-Cong
2013-03-01
Knowledge of the mechanism of HIV protease cleavage specificity is critical to the design of specific and effective HIV inhibitors. Searching for an accurate, robust, and rapid method to correctly predict the cleavage sites in proteins is crucial when searching for possible HIV inhibitors. In this article, HIV-1 protease specificity was studied using the correlation-based feature subset (CfsSubset) selection method combined with a genetic algorithm. Thirty important biochemical features were found based on a jackknife test from the original data set containing 4,248 features. By using the AdaBoost method with the thirty selected features, the prediction model yields an accuracy of 96.7% for the jackknife test and 92.1% for an independent set test, improving accuracy over the original feature set by 6.7% and 77.4%, respectively. Our feature selection scheme could be a useful technique for finding effective competitive inhibitors of HIV protease.
Pomes, M.L.; Thurman, E.M.; Aga, D.S.; Goolsby, D.A.
1998-01-01
Triazine and chloroacetanilide concentrations in rainfall samples collected from a 23-state region of the United States were analyzed with microtiter-plate enzyme-linked immunosorbent assay (ELISA). Thirty-six percent of rainfall samples (2072 out of 5691) were confirmed using gas chromatography/mass spectrometry (GC/MS) to evaluate the operating performance of ELISA as a screening test. Comparison of ELISA to GC/MS results showed that the two ELISA methods accurately reported GC/MS results (m = 1), but with more variability evident with the triazine than with the chloroacetanilide ELISA. Bayes's rule, a standardized method to report the results of screening tests, indicated that the two ELISA methods yielded comparable predictive values (80%), but the triazine ELISA yielded a false-positive rate of 11.8% and the chloroacetanilide ELISA yielded a false-negative rate of 23.1%. The false-positive rate for the triazine ELISA may arise from cross-reactivity with an unknown triazine or metabolite. The false-negative rate of the chloroacetanilide ELISA probably resulted from a combination of low sensitivity at the reporting limit of 0.15 µg/L and a distribution characterized by 75% of the samples at or below the reporting limit of 0.15 µg/L.
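The screening-test quantities cited above (predictive values, false-positive and false-negative rates) all derive from a standard 2x2 confusion table. A minimal sketch with hypothetical counts, not the rainfall data:

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard screening-test quantities from a 2x2 confusion table
    (true/false positives and negatives against a confirmatory method)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
        "false_positive_rate": fp / (fp + tn),
        "false_negative_rate": fn / (fn + tp),
    }

# Hypothetical counts for illustration only.
m = screening_metrics(tp=80, fp=20, fn=10, tn=90)
print(m["ppv"], m["false_positive_rate"])
```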
Cave, Andrew J; Davey, Christina; Ahmadi, Elaheh; Drummond, Neil; Fuentes, Sonia; Kazemi-Bajestani, Seyyed Mohammad Reza; Sharpe, Heather; Taylor, Matt
2016-01-01
An accurate estimation of the prevalence of paediatric asthma in Alberta and elsewhere is hampered by uncertainty regarding disease definition and diagnosis. Electronic medical records (EMRs) provide a rich source of clinical data from primary-care practices that can be used in better understanding the occurrence of the disease. The Canadian Primary Care Sentinel Surveillance Network (CPCSSN) database includes cleaned data extracted from the EMRs of primary-care practitioners. The purpose of the study was to develop and validate a case definition of asthma in children 1–17 who consult family physicians, in order to provide primary-care estimates of childhood asthma in Alberta as accurately as possible. The validation involved the comparison of the application of a theoretical algorithm (to identify patients with asthma) to a physician review of records included in the CPCSSN database (to confirm an accurate diagnosis). The comparison yielded 87.4% sensitivity, 98.6% specificity and a positive and negative predictive value of 91.2% and 97.9%, respectively, in the age group 1–17 years. The algorithm was also run for ages 3–17 and 6–17 years, and was found to have comparable statistical values. Overall, the case definition and algorithm yielded strong sensitivity and specificity metrics and was found valid for use in research in CPCSSN primary-care practices. The use of the validated asthma algorithm may improve insight into the prevalence, diagnosis, and management of paediatric asthma in Alberta and Canada. PMID:27882997
NASA Technical Reports Server (NTRS)
Stambler, Arielle H.; Inoshita, Karen E.; Roberts, Lily M.; Barbagallo, Claire E.; deGroh, Kim K.; Banks, Bruce A.
2011-01-01
The Materials International Space Station Experiment 2 (MISSE 2) Polymer Erosion and Contamination Experiment (PEACE) polymers were exposed to the environment of low Earth orbit (LEO) for 3.95 years from 2001 to 2005. There were 41 different PEACE polymers, which were flown on the exterior of the International Space Station (ISS) in order to determine their atomic oxygen erosion yields. In LEO, atomic oxygen is an environmental durability threat, particularly for long duration mission exposures. Although spaceflight experiments, such as the MISSE 2 PEACE experiment, are ideal for determining LEO environmental durability of spacecraft materials, ground-laboratory testing is often relied upon for durability evaluation and prediction. Unfortunately, significant differences exist between LEO atomic oxygen exposure and atomic oxygen exposure in ground-laboratory facilities. These differences include variations in species, energies, thermal exposures and radiation exposures, all of which may result in different reactions and erosion rates. In an effort to improve the accuracy of ground-based durability testing, ground-laboratory to in-space atomic oxygen correlation experiments have been conducted. In these tests, the atomic oxygen erosion yields of the PEACE polymers were determined relative to Kapton H using a radio-frequency (RF) plasma asher (operated on air). The asher erosion yields were compared to the MISSE 2 PEACE erosion yields to determine the correlation between erosion rates in the two environments. This paper provides a summary of the MISSE 2 PEACE experiment; it reviews the specific polymers tested as well as the techniques used to determine erosion yield in the asher, and it provides a correlation between the space and ground laboratory erosion yield values. Using the PEACE polymers asher to in-space erosion yield ratios will allow more accurate in-space materials performance predictions to be made based on plasma asher durability evaluation.
Macyszyn, Luke; Akbari, Hamed; Pisapia, Jared M; Da, Xiao; Attiah, Mark; Pigrish, Vadim; Bi, Yingtao; Pal, Sharmistha; Davuluri, Ramana V; Roccograndi, Laura; Dahmane, Nadia; Martinez-Lage, Maria; Biros, George; Wolf, Ronald L; Bilello, Michel; O'Rourke, Donald M; Davatzikos, Christos
2016-03-01
MRI characteristics of brain gliomas have been used to predict clinical outcome and molecular tumor characteristics. However, previously reported imaging biomarkers have not been sufficiently accurate or reproducible to enter routine clinical practice and often rely on relatively simple MRI measures. The current study leverages advanced image analysis and machine learning algorithms to identify complex and reproducible imaging patterns predictive of overall survival and molecular subtype in glioblastoma (GB). One hundred five patients with GB were first used to extract approximately 60 diverse features from preoperative multiparametric MRIs. These imaging features were used by a machine learning algorithm to derive imaging predictors of patient survival and molecular subtype. Cross-validation ensured generalizability of these predictors to new patients. Subsequently, the predictors were evaluated in a prospective cohort of 29 new patients. Survival curves yielded a hazard ratio of 10.64 for predicted long versus short survivors. The overall, 3-way (long/medium/short survival) accuracy in the prospective cohort approached 80%. Classification of patients into the 4 molecular subtypes of GB achieved 76% accuracy. By employing machine learning techniques, we were able to demonstrate that imaging patterns are highly predictive of patient survival. Additionally, we found that GB subtypes have distinctive imaging phenotypes. These results reveal that when imaging markers related to infiltration, cell density, microvascularity, and blood-brain barrier compromise are integrated via advanced pattern analysis methods, they form very accurate predictive biomarkers. These predictive markers used solely preoperative images, hence they can significantly augment diagnosis and treatment of GB patients. © The Author(s) 2015. Published by Oxford University Press on behalf of the Society for Neuro-Oncology. All rights reserved. 
A Unified Model of Performance for Predicting the Effects of Sleep and Caffeine
Ramakrishnan, Sridhar; Wesensten, Nancy J.; Kamimori, Gary H.; Moon, James E.; Balkin, Thomas J.; Reifman, Jaques
2016-01-01
Study Objectives: Existing mathematical models of neurobehavioral performance cannot predict the beneficial effects of caffeine across the spectrum of sleep loss conditions, limiting their practical utility. Here, we closed this research gap by integrating a model of caffeine effects with the recently validated unified model of performance (UMP) into a single, unified modeling framework. We then assessed the accuracy of this new UMP in predicting performance across multiple studies. Methods: We hypothesized that the pharmacodynamics of caffeine vary similarly during both wakefulness and sleep, and that caffeine has a multiplicative effect on performance. Accordingly, to represent the effects of caffeine in the UMP, we multiplied a dose-dependent caffeine factor (which accounts for the pharmacokinetics and pharmacodynamics of caffeine) to the performance estimated in the absence of caffeine. We assessed the UMP predictions in 14 distinct laboratory- and field-study conditions, including 7 different sleep-loss schedules (from 5 h of sleep per night to continuous sleep loss for 85 h) and 6 different caffeine doses (from placebo to repeated 200 mg doses to a single dose of 600 mg). Results: The UMP accurately predicted group-average psychomotor vigilance task performance data across the different sleep loss and caffeine conditions (6% < error < 27%), yielding greater accuracy for mild and moderate sleep loss conditions than for more severe cases. Overall, accounting for the effects of caffeine resulted in improved predictions (after caffeine consumption) by up to 70%. Conclusions: The UMP provides the first comprehensive tool for accurate selection of combinations of sleep schedules and caffeine countermeasure strategies to optimize neurobehavioral performance. Citation: Ramakrishnan S, Wesensten NJ, Kamimori GH, Moon JE, Balkin TJ, Reifman J. A unified model of performance for predicting the effects of sleep and caffeine. SLEEP 2016;39(10):1827–1841. PMID:27397562
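The multiplicative structure described above (predicted performance equals the caffeine-free prediction times a dose-dependent factor) can be sketched as follows. The pharmacokinetic parameters and the saturating form of the factor are hypothetical placeholders, not the UMP's fitted values:

```python
import math

def caffeine_factor(t_hours, dose_mg, M0=1.1e-3, ka=1.0, ke=0.5):
    """Dose-dependent multiplicative caffeine factor (hypothetical parameters).

    Uses a one-compartment absorption/elimination concentration profile;
    the factor equals 1 (no effect) at zero dose and falls below 1 as the
    effective concentration rises, reducing the predicted deficit.
    """
    conc = dose_mg * ka / (ka - ke) * (math.exp(-ke * t_hours) - math.exp(-ka * t_hours))
    return 1.0 / (1.0 + M0 * conc)

def performance_with_caffeine(baseline_deficit, t_hours, dose_mg):
    # Multiplicative effect: predicted deficit = caffeine-free deficit x factor.
    return baseline_deficit * caffeine_factor(t_hours, dose_mg)

print(performance_with_caffeine(10.0, 2.0, 200.0))
```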
NASA Astrophysics Data System (ADS)
Fourtakas, G.; Rogers, B. D.
2016-06-01
A two-phase numerical model using Smoothed Particle Hydrodynamics (SPH) is applied to two-phase liquid-sediment flows. The absence of a mesh in SPH is ideal for interfacial and highly non-linear flows with changing fragmentation of the interface, mixing and resuspension. The rheology of sediment induced under rapid flows passes through several states which are only partially described by previous research in SPH. This paper attempts to bridge the gap between geotechnics, non-Newtonian and Newtonian flows by proposing a model that combines the yielding, shear and suspension layers which are needed to accurately predict the global erosion phenomena from a hydrodynamics perspective. The numerical SPH scheme is based on the explicit treatment of both phases, using a Newtonian and a non-Newtonian Bingham-type Herschel-Bulkley-Papanastasiou constitutive model. This is supplemented by the Drucker-Prager yield criterion to predict the onset of yielding of the sediment surface and a concentration suspension model. The multi-phase model has been compared with experimental and 2-D reference numerical models for scour following a dry-bed dam break, yielding satisfactory results and improvements over well-known SPH multi-phase models. With 3-D simulations requiring a large number of particles, the code is accelerated with a graphics processing unit (GPU) in the open-source DualSPHysics code. The implementation and optimisation of the code achieved a speed-up of 58× over an optimised single-threaded serial code. A 3-D dam break over a non-cohesive erodible bed simulation with over 4 million particles yields close agreement with experimental scour and water surface profiles.
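The Herschel-Bulkley-Papanastasiou model named above regularizes the yield stress so that the apparent viscosity remains finite as the shear rate vanishes. A minimal sketch with illustrative parameter values (not those of the paper):

```python
import math

def hbp_apparent_viscosity(shear_rate, tau_y, K, n, m):
    """Apparent viscosity of the Herschel-Bulkley-Papanastasiou model.

    Shear stress: tau = K * gamma**n + tau_y * (1 - exp(-m * gamma)),
    so the apparent viscosity is tau / gamma. The exponential term
    (Papanastasiou regularization) smooths the yield-stress singularity.
    """
    g = max(shear_rate, 1e-12)  # guard against division by zero
    tau = K * g**n + tau_y * (1.0 - math.exp(-m * g))
    return tau / g

# Illustrative parameters only: yield stress 10 Pa, consistency 1 Pa.s^n.
for gamma in (0.01, 0.1, 1.0, 10.0):
    print(gamma, hbp_apparent_viscosity(gamma, tau_y=10.0, K=1.0, n=0.8, m=100.0))
```

The apparent viscosity decreases with shear rate, reproducing the shear-thinning, yield-stress behaviour expected of the sediment phase.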
He, Jun; Xu, Jiaqi; Wu, Xiao-Lin; Bauck, Stewart; Lee, Jungjae; Morota, Gota; Kachman, Stephen D; Spangler, Matthew L
2018-04-01
SNP chips are commonly used for genotyping animals in genomic selection, but strategies for selecting low-density (LD) SNPs for imputation-mediated genomic selection have not been addressed adequately. The main purpose of the present study was to compare the performance of eight LD (6K) SNP panels, each selected by a different strategy exploiting a combination of three major factors: evenly spaced SNPs, increased minor allele frequencies (MAF), and SNP-trait associations either for single traits independently or for all three traits jointly. The imputation accuracies from 6K to 80K SNP genotypes were between 96.2 and 98.2%. Genomic prediction accuracies obtained using imputed 80K genotypes were between 0.817 and 0.821 for daughter pregnancy rate, between 0.838 and 0.844 for fat yield, and between 0.850 and 0.863 for milk yield. The two SNP panels optimized on all three major factors had the highest genomic prediction accuracy (0.821-0.863), and these accuracies were very close to those obtained using observed 80K genotypes (0.825-0.868). Further exploration of the underlying relationships showed that genomic prediction accuracies did not respond linearly to imputation accuracies, but were significantly affected by genotype (imputation) errors of SNPs in association with the traits to be predicted. SNPs optimal for map coverage and MAF were favorable for obtaining accurate imputation of genotypes, whereas trait-associated SNPs improved genomic prediction accuracies. Thus, optimal LD SNP panels were the ones that combined both strengths. The present results have practical implications for the design of LD SNP chips for imputation-enabled genomic prediction.
Hwang, Hamish; Marsh, Ian; Doyle, Jason
2014-01-01
Background Acute cholecystitis is one of the most common diseases requiring emergency surgery. Ultrasonography is an accurate test for cholelithiasis but has a high false-negative rate for acute cholecystitis. The Murphy sign and laboratory tests performed independently are also not particularly accurate. This study was designed to review the accuracy of ultrasonography for diagnosing acute cholecystitis in a regional hospital. Methods We studied all emergency cholecystectomies performed over a 1-year period. All imaging studies were reviewed by a single radiologist, and all pathology was reviewed by a single pathologist. The reviewers were blinded to each other’s results. Results A total of 107 patients required an emergency cholecystectomy in the study period; 83 of them underwent ultrasonography. Interradiologist agreement was 92% for ultrasonography. For cholelithiasis, ultrasonography had 100% sensitivity, 18% specificity, 81% positive predictive value (PPV) and 100% negative predictive value (NPV). For acute cholecystitis, it had 54% sensitivity, 81% specificity, 85% PPV and 47% NPV. All patients had chronic cholecystitis and 67% had acute cholecystitis on histology. When combined with positive Murphy sign and elevated neutrophil count, an ultrasound showing cholelithiasis or acute cholecystitis yielded a sensitivity of 74%, specificity of 62%, PPV of 80% and NPV of 53% for the diagnosis of acute cholecystitis. Conclusion Ultrasonography alone has a high rate of false-negative studies for acute cholecystitis. However, a higher rate of accurate diagnosis can be achieved using a triad of positive Murphy sign, elevated neutrophil count and an ultrasound showing cholelithiasis or cholecystitis. PMID:24869607
Nonlinear dynamics of the magnetosphere and space weather
NASA Technical Reports Server (NTRS)
Sharma, A. Surjalal
1996-01-01
The solar wind-magnetosphere system exhibits coherence on the global scale, and such behavior can arise from nonlinearity in the dynamics. Observational time series data were used together with phase space reconstruction techniques to analyze the magnetospheric dynamics. Analysis of the solar wind, auroral electrojet and Dst indices showed low dimensionality of the dynamics, and accurate predictions can be made with an input/output model. The predictability of the magnetosphere, in spite of its apparent complexity, arises from its dynamical synchronism with the solar wind. The electrodynamic coupling between different regions of the magnetosphere yields its coherent, low-dimensional behavior. Data from multiple satellites and ground stations can be used to develop a spatio-temporal model that identifies the coupling between different regions. These nonlinear dynamical models provide space weather forecasting capabilities.
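The phase space reconstruction mentioned above is typically a time-delay embedding of a scalar index. A minimal sketch; the embedding dimension and delay below are illustrative choices, and the sine series is a stand-in for a geomagnetic index:

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Time-delay embedding: reconstruct a phase-space trajectory from a
    scalar time series (Takens-style reconstruction).

    Returns an array of shape (N - (dim - 1) * tau, dim); row i is
    [x_i, x_{i+tau}, ..., x_{i+(dim-1)*tau}].
    """
    series = np.asarray(series, dtype=float)
    n = len(series) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this embedding")
    return np.column_stack([series[i * tau: i * tau + n] for i in range(dim)])

x = np.sin(np.linspace(0, 20, 200))   # stand-in for an auroral electrojet index
traj = delay_embed(x, dim=3, tau=5)
print(traj.shape)
```

Low dimensionality is then assessed from how the embedded trajectory fills the reconstructed space, and input/output models are fit on the embedded states.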
Pan, Hongye; Zhang, Qing; Cui, Keke; Chen, Guoquan; Liu, Xuesong; Wang, Longhu
2017-05-01
The extraction of linarin from Flos chrysanthemi indici by ethanol was investigated. Two modeling techniques, response surface methodology and artificial neural network, were adopted to optimize process parameters such as ethanol concentration, extraction period, extraction frequency, and solvent-to-material ratio. We showed that both methods provided good predictions, but the artificial neural network provided a better and more accurate result. The optimum process parameters were an ethanol concentration of 74%, an extraction period of 2 h, three extractions, and a solvent-to-material ratio of 12 mL/g. The experimental yield of linarin was 90.5%, which deviated by less than 1.6% from the predicted value. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Metabolomic prediction of yield in hybrid rice.
Xu, Shizhong; Xu, Yang; Gong, Liang; Zhang, Qifa
2016-10-01
Rice (Oryza sativa) provides a staple food source for more than 50% of the world's population. An increase in yield can significantly contribute to global food security. Hybrid breeding can potentially help to meet this goal because hybrid rice often shows a considerable increase in yield when compared with pure-bred cultivars. We recently developed a marker-guided prediction method for hybrid yield and showed a substantial increase in yield through genomic hybrid breeding. We now have transcriptomic and metabolomic data as potential resources for prediction. Using six prediction methods, including least absolute shrinkage and selection operator (LASSO), best linear unbiased prediction (BLUP), stochastic search variable selection, partial least squares, and support vector machines using the radial basis function and polynomial kernel function, we found that the predictability of hybrid yield can be further increased using these omic data. LASSO and BLUP are the most efficient methods for yield prediction. For high heritability traits, genomic data remain the most efficient predictors. When metabolomic data are used, the predictability of hybrid yield is almost doubled compared with genomic prediction. Of the 21 945 potential hybrids derived from 210 recombinant inbred lines, selection of the top 10 hybrids predicted from metabolites would lead to a ~30% increase in yield. We hypothesize that each metabolite represents a biologically built-in genetic network for yield; thus, using metabolites for prediction is equivalent to using information integrated from these hidden genetic networks for yield prediction. © 2016 The Authors The Plant Journal © 2016 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Zubeldia, Elizabeth H.; Fourtakas, Georgios; Rogers, Benedict D.; Farias, Márcio M.
2018-07-01
A two-phase numerical model using Smoothed Particle Hydrodynamics (SPH) is developed to model the scouring of two-phase liquid-sediment flows with large deformation. The rheology of sediment scouring due to flows with slow kinematics and high shear forces presents a challenge in terms of spurious numerical fluctuations. This paper bridges the gap between the non-Newtonian and Newtonian flows by proposing a model that combines the yielding, shear and suspension layer mechanics which are needed to accurately predict the local erosion phenomena. A critical bed-mobility condition based on the Shields criterion is imposed on the particles located at the sediment surface. Thus, the onset of the erosion process is independent of the pressure field, which eliminates the numerical problem of pressure-dependent erosion at the interface. This is combined with the Drucker-Prager yield criterion to predict the onset of yielding of the sediment surface and a concentration suspension model. The multi-phase model has been implemented in the open-source DualSPHysics code accelerated with a graphics processing unit (GPU). The multi-phase model has been compared with 2-D reference numerical models and new experimental data for scour, with convergent results. Numerical results for a dry-bed dam break over an erodible bed show improved agreement with experimental scour and water surface profiles compared to well-known SPH multi-phase models.
Massanet-Nicolau, Jaime; Dinsdale, Richard; Guwy, Alan; Shipley, Gary
2013-02-01
Changes in fermenter gas composition within a given 24 h period can cause severe bias in calculations of biogas or energy yields based on just one or two measurements of gas composition per day, as is common in other studies of two-stage fermentation. To overcome this bias, real-time recording of gas composition and production was used to undertake a detailed and controlled comparison of single-stage and two-stage fermentation using a real-world substrate (wheat feed pellets). When a two-stage fermentation system was used, methane yields increased from 261 L kg(-1) VS using a 20 day HRT, single-stage fermentation, to 359 L kg(-1) VS using a two-stage fermentation with the same overall retention time--an increase of 37%. Additionally, a hydrogen yield of 7 L kg(-1) VS was obtained when two-stage fermentation was used. The two-stage system could also be operated at a shorter, 12 day HRT and still produce higher methane yields (306 L kg(-1) VS). Both two-stage fermentation systems evaluated exhibited methane yields in excess of that predicted by a biological methane potential (BMP) test performed using the same feedstock (260 L kg(-1) VS). Copyright © 2012 Elsevier Ltd. All rights reserved.
Cropping management using color and color infrared aerial photographs
NASA Technical Reports Server (NTRS)
Morgan, K. M.; Morris-Jones, D. R.; Lee, G. B.; Kiefer, R. W.
1979-01-01
The Universal Soil Loss Equation (USLE) is a widely accepted tool for erosion prediction and conservation planning. Solving this equation yields the long-term average annual soil loss that can be expected from rill and inter-rill erosion. In this study, manual interpretation of color and color infrared 70 mm photography at the scale of 1:60,000 is used to determine the cropping management factor in the USLE. Accurate information was collected about plowing practices and crop residue cover (unharvested vegetation) for the winter season on agricultural land in Pheasant Branch Creek watershed in Dane County, Wisconsin.
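The USLE itself is a simple product of factors; solving it amounts to multiplying the photo-interpreted cropping management factor with the other terms. A minimal sketch with illustrative factor values (not data from the Pheasant Branch Creek study):

```python
def usle_soil_loss(R, K, LS, C, P):
    """Universal Soil Loss Equation: A = R * K * LS * C * P.

    A  : long-term average annual soil loss from rill and inter-rill erosion
    R  : rainfall-runoff erosivity factor
    K  : soil erodibility factor
    LS : slope length-steepness factor
    C  : cropping management (cover) factor -- the factor interpreted
         from the aerial photography in this study
    P  : support-practice factor
    """
    return R * K * LS * C * P

# Illustrative factor values only.
print(usle_soil_loss(R=100.0, K=0.3, LS=1.2, C=0.25, P=0.5))
```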
Evaporation kinetics of Mg2SiO4 crystals and melts from molecular dynamics simulations
NASA Technical Reports Server (NTRS)
Kubicki, J. D.; Stolper, E. M.
1993-01-01
Computer simulations based on the molecular dynamics (MD) technique were used to study the mechanisms and kinetics of free evaporation from crystalline and molten forsterite (i.e., Mg2SiO4) on an atomic level. The interatomic potential employed for these simulations reproduces the energetics of bonding in forsterite and in gas-phase MgO and SiO2 reasonably accurately. Results of the simulation include predicted evaporation rates, diffusion rates, and reaction mechanisms for Mg2SiO4(s or l) yields 2Mg(g) + 2O(g) + SiO2(g).
Application of two direct runoff prediction methods in Puerto Rico
Sepulveda, N.
1997-01-01
Two methods for predicting direct runoff from rainfall data were applied to several basins and the resulting hydrographs compared to measured values. The first method uses a geomorphology-based unit hydrograph to predict direct runoff through its convolution with the excess rainfall hyetograph. The second method shows how the resulting hydraulic routing flow equation from a kinematic wave approximation is solved using a spectral method based on the matrix representation of the spatial derivative with Chebyshev collocation and a fourth-order Runge-Kutta time discretization scheme. The calibrated Green-Ampt (GA) infiltration parameters are obtained by minimizing the sum, over several rainfall events, of absolute differences between the total excess rainfall volume computed from the GA equations and the total direct runoff volume computed from a hydrograph separation technique. The improvement made in predicting direct runoff using a geomorphology-based unit hydrograph with the ephemeral and perennial stream network instead of the strictly perennial stream network is negligible. The hydraulic routing scheme presented here is highly accurate in predicting the magnitude and time of the hydrograph peak although the much faster unit hydrograph method also yields reasonable results.
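The convolution step of the first method is standard unit-hydrograph arithmetic and can be sketched directly; the rainfall depths and unit-hydrograph ordinates below are hypothetical:

```python
def convolve_hydrograph(excess_rain, unit_hydrograph):
    """Direct runoff by discrete convolution of the excess rainfall
    hyetograph with the unit hydrograph: Q[n] = sum_m P[m] * UH[n-m]."""
    n_out = len(excess_rain) + len(unit_hydrograph) - 1
    q = [0.0] * n_out
    for m, p in enumerate(excess_rain):
        for k, u in enumerate(unit_hydrograph):
            q[m + k] += p * u
    return q

# Hypothetical ordinates: two intervals of excess rain, a 3-ordinate UH.
runoff = convolve_hydrograph([1.0, 2.0], [0.0, 1.0, 0.5])
```

In the paper the unit hydrograph is derived from basin geomorphology rather than assumed, but the convolution itself is exactly this operation.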
Multi-scale Modeling of Plasticity in Tantalum.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Hojun; Battaile, Corbett Chandler; Carroll, Jay
In this report, we present a multi-scale computational model to simulate plastic deformation of tantalum, along with validating experiments. At the atomistic/dislocation level, dislocation kink-pair theory is used to formulate temperature- and strain-rate-dependent constitutive equations. The kink-pair theory is calibrated to available data from single-crystal experiments to produce accurate and convenient constitutive laws. The model is then implemented into a BCC crystal plasticity finite element method (CP-FEM) model to predict temperature- and strain-rate-dependent yield stresses of single- and polycrystalline tantalum, which are compared with existing experimental data from the literature. Furthermore, classical continuum constitutive models describing temperature- and strain-rate-dependent flow behaviors are fit to the yield stresses obtained from the CP-FEM polycrystal predictions. The model is then used to conduct hydrodynamic simulations of the Taylor cylinder impact test and compared with experiments. In order to validate the proposed tantalum CP-FEM model with experiments, we introduce a method for quantitative comparison of CP-FEM models with various experimental techniques. To mitigate the effects of unknown subsurface microstructure, tantalum tensile specimens with a pseudo-two-dimensional grain structure and grain sizes on the order of millimeters are used. A technique combining electron backscatter diffraction (EBSD) and high-resolution digital image correlation (HR-DIC) is used to measure the texture and sub-grain strain fields upon uniaxial tensile loading at various applied strains. Deformed specimens are also analyzed with optical profilometry measurements to obtain out-of-plane strain fields. These high-resolution measurements are directly compared with large-scale CP-FEM predictions. This computational method directly links fundamental dislocation physics to plastic deformation at the grain scale and to engineering-scale applications.
Furthermore, direct and quantitative comparisons between experimental measurements and simulations show that the proposed model accurately captures plasticity in the deformation of polycrystalline tantalum.
An evaluation of the accuracy and speed of metagenome analysis tools
Lindgreen, Stinus; Adair, Karen L.; Gardner, Paul P.
2016-01-01
Metagenome studies are becoming increasingly widespread, yielding important insights into microbial communities covering diverse environments from terrestrial and aquatic ecosystems to human skin and gut. With the advent of high-throughput sequencing platforms, the use of large scale shotgun sequencing approaches is now commonplace. However, a thorough independent benchmark comparing state-of-the-art metagenome analysis tools is lacking. Here, we present a benchmark where the most widely used tools are tested on complex, realistic data sets. Our results clearly show that the most widely used tools are not necessarily the most accurate, that the most accurate tool is not necessarily the most time consuming, and that there is a high degree of variability between available tools. These findings are important as the conclusions of any metagenomics study are affected by errors in the predicted community composition and functional capacity. Data sets and results are freely available from http://www.ucbioinformatics.org/metabenchmark.html PMID:26778510
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Z. Q.; Chim, W. K.; Chiam, S. Y.
2011-11-01
In this work, photoelectron spectroscopy is used to characterize the band alignment of lanthanum aluminate heterostructures, which possess a wide range of potential applications. It is found that our experimental slope parameter agrees with theory using the metal-induced gap states model, while the interface-induced gap states (IFIGS) model yields unsatisfactory results. We show that this discrepancy can be attributed to the correlation between the dielectric work function and the electronegativity in the IFIGS model. It is found that the original trend, as established largely by metals, may not be accurate for larger band gap materials. By using a new correlation, our experimental data show good agreement of the slope parameter using the IFIGS model. This correlation, therefore, plays a crucial role in heterostructures involving wider-bandgap materials for accurate band alignment prediction using the IFIGS model.
Charton, C; Guinard-Flament, J; Lefebvre, R; Barbey, S; Gallard, Y; Boichard, D; Larroque, H
2018-03-01
Despite its potential utility for predicting cows' milk yield responses to once-daily milking (ODM), the genetic basis of cow milk trait responses to ODM has been scarcely if ever described in the literature, especially for short ODM periods. This study set out to (1) estimate the genetic determinism of milk yield and composition during a 3-wk ODM period, (2) estimate the genetic determinism of milk yield responses (i.e., milk yield loss upon switching cows to ODM and milk yield recovery upon switching them back to twice-daily milking; TDM), and (3) seek predictors of milk yield responses to ODM, in particular using the first day of ODM. Our trial used 430 crossbred Holstein × Normande cows and comprised 3 successive periods: 1 wk of TDM (control), 3 wk of ODM, and 2 wk of TDM. Implementing ODM for 3 wk reduced milk yield by 27.5% on average, and after resuming TDM cows recovered on average 57% of the milk lost. Heritability estimates in the TDM control period and 3-wk ODM period were, respectively, 0.41 and 0.35 for milk yield, 0.66 and 0.61 for milk fat content, 0.60 and 0.80 for milk protein content, 0.66 and 0.36 for milk lactose content, and 0.20 and 0.15 for milk somatic cell score content. Milk yield and composition during 3-wk ODM and TDM periods were genetically close (within-trait genetic correlations between experimental periods all exceeding 0.80) but were genetically closer within the same milking frequency. Heritabilities of milk yield loss observed upon switching cows to ODM (0.39 and 0.34 for milk yield loss in kg/d and %, respectively) were moderate and similar to milk yield heritabilities. Milk yield recovery (kg/d) upon resuming TDM was a trait of high heritability (0.63). Because they are easy to measure, TDM milk yield and composition and milk yield responses on the first day of ODM were investigated as predictors of milk yield responses to a 3-wk ODM to easily detect animals that are well adapted to ODM. 
Twice-daily milking milk yield and composition were found to be partly genetically correlated with milk yield responses but not closely enough for practical application. With genetic correlations of 0.98 and 0.96 with 3-wk ODM milk yield losses (in kg/d and %, respectively), milk yield losses on the first day of ODM proved to be more accurate in predicting milk yield responses on longer term ODM than TDM milk yield. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Kocjan, Tomaz; Janez, Andrej; Stankovic, Milenko; Vidmar, Gaj; Jensterle, Mojca
2016-05-01
Adrenal venous sampling (AVS) is the only available method to distinguish bilateral from unilateral primary aldosteronism (PA). AVS has several drawbacks, so it is reasonable to avoid this procedure when the results would not affect clinical management. Our objective was to identify a clinical criterion that can reliably predict nonlateralized AVS as a surrogate for bilateral PA that is not treated surgically. A retrospective diagnostic cross-sectional study conducted at Slovenian national endocrine referral center included 69 consecutive patients (mean age 56 ± 8 years, 21 females) with PA who underwent AVS. PA was confirmed with the saline infusion test (SIT). AVS was performed sequentially during continuous adrenocorticotrophic hormone (ACTH) infusion. The main outcome measures were variables associated with nonlateralized AVS to derive a clinical prediction rule. Sixty-seven (97%) patients had a successful AVS and were included in the statistical analysis. A total of 39 (58%) patients had nonlateralized AVS. The combined criterion of serum potassium ≥3.5 mmol/L, post-SIT aldosterone <18 ng/dL, and either no or bilateral tumor found on computed tomography (CT) imaging had perfect estimated specificity (and thus 100% positive predictive value) for bilateral PA, saving an estimated 16% of the patients (11/67) from unnecessary AVS. The best overall classification accuracy (50/67 = 75%) was achieved using the post-SIT aldosterone level <18 ng/dL alone, which yielded 74% sensitivity and 75% specificity for predicting nonlateralized AVS. Our clinical prediction criterion appears to accurately determine a subset of patients with bilateral PA who could avoid unnecessary AVS and immediately commence with medical treatment.
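The combined clinical criterion reported above is a simple conjunction of three thresholds and can be expressed as a predicate. The cutoffs are taken from the abstract; the CT-finding encoding is an illustrative assumption:

```python
def predicts_bilateral_pa(serum_k_mmol_l, post_sit_aldo_ng_dl, ct_finding):
    """Combined criterion from the study: serum potassium >= 3.5 mmol/L,
    post-SIT aldosterone < 18 ng/dL, and either no tumor or bilateral
    tumors on CT. Only if all three hold is nonlateralized AVS
    (bilateral PA) predicted, letting the patient skip AVS.
    ct_finding is encoded here as 'none', 'bilateral', or 'unilateral'."""
    return (serum_k_mmol_l >= 3.5
            and post_sit_aldo_ng_dl < 18.0
            and ct_finding in ("none", "bilateral"))
```

The abstract reports this conjunction had 100% positive predictive value in the cohort, sparing an estimated 16% of patients from AVS.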
Person, M.; Konikow, Leonard F.
1986-01-01
A solute-transport model of an irrigated stream-aquifer system was recalibrated because of discrepancies between prior predictions of ground-water salinity trends during 1971-1982 and the observed outcome in February 1982. The original model was calibrated with a 1-year record of data collected during 1971-1972 in an 18-km reach of the Arkansas River Valley in southeastern Colorado. The model is improved by incorporating additional hydrologic processes (salt transport through the unsaturated zone) and through reexamination of the reliability of some input data (regression relationship used to estimate salinity from specific conductance data). Extended simulations using the recalibrated model are made to investigate the usefulness of the model for predicting long-term trends of salinity and water levels within the study area. Predicted ground-water levels during 1971-1982 are in good agreement with the observed, indicating that the original 1971-1972 study period was sufficient to calibrate the flow model. However, long-term simulations using the recalibrated model based on recycling the 1971-1972 data alone yield an average ground-water salinity for 1982 that is too low by about 10%. Simulations that incorporate observed surface-water salinity variations yield better results, in that the calculated average ground-water salinity for 1982 is within 3% of the observed value. Statistical analysis of temporal salinity variations of the applied surface water indicates that at least a 4-year sampling period is needed to accurately calibrate the transport model. © 1986.
Masili, Alice; Puligheddu, Sonia; Sassu, Lorenzo; Scano, Paola; Lai, Adolfo
2012-11-01
In this work, we report the feasibility study to predict the properties of neat crude oil samples from 300-MHz NMR spectral data and partial least squares (PLS) regression models. The study was carried out on 64 crude oil samples obtained from 28 different extraction fields and aims at developing a rapid and reliable method for characterizing the crude oil in a fast and cost-effective way. The main properties generally employed for evaluating crudes' quality and behavior during refining were measured and used for calibration and testing of the PLS models. Among these, the UOP characterization factor K (K(UOP)) used to classify crude oils in terms of composition, density (D), total acidity number (TAN), sulfur content (S), and true boiling point (TBP) distillation yields were investigated. Test set validation with an independent set of data was used to evaluate model performance on the basis of standard error of prediction (SEP) statistics. Model performances are particularly good for the K(UOP) factor, TAN, and TBP distillation yields, whose standard error of calibration and SEP values match the analytical method precision, while the results obtained for D and S are less accurate but still useful for predictions. Furthermore, a strategy that reduces spectral data preprocessing and sample preparation procedures has been adopted. The models developed with such an ample crude oil set demonstrate that this methodology can be applied with success to modern refining process requirements. Copyright © 2012 John Wiley & Sons, Ltd.
Predicting Great Lakes fish yields: tools and constraints
Lewis, C.A.; Schupp, D.H.; Taylor, W.W.; Collins, J.J.; Hatch, Richard W.
1987-01-01
Prediction of yield is a critical component of fisheries management. The development of sound yield prediction methodology and the application of the results of yield prediction are central to the evolution of strategies to achieve stated goals for Great Lakes fisheries and to the measurement of progress toward those goals. Despite general availability of species yield models, yield prediction for many Great Lakes fisheries has been poor due to the instability of the fish communities and the inadequacy of available data. A host of biological, institutional, and societal factors constrain both the development of sound predictions and their application to management. Improved predictive capability requires increased stability of Great Lakes fisheries through rehabilitation of well-integrated communities, improvement of data collection, data standardization and information-sharing mechanisms, and further development of the methodology for yield prediction. Most important is the creation of a better-informed public that will in turn establish the political will to do what is required.
Dama, Elisa; Tillhon, Micol; Bertalot, Giovanni; de Santis, Francesca; Troglio, Flavia; Pessina, Simona; Passaro, Antonio; Pece, Salvatore; de Marinis, Filippo; Dell'Orto, Patrizia; Viale, Giuseppe; Spaggiari, Lorenzo; Di Fiore, Pier Paolo; Bianchi, Fabrizio; Barberis, Massimo; Vecchi, Manuela
2016-06-14
Accurate detection of altered anaplastic lymphoma kinase (ALK) expression is critical for the selection of lung cancer patients eligible for ALK-targeted therapies. To overcome intrinsic limitations and discrepancies of currently available companion diagnostics for ALK, we developed a simple, affordable and objective PCR-based predictive model for the quantitative measurement of any ALK fusion as well as wild-type ALK upregulation. This method, optimized for low-quantity/-quality RNA from FFPE samples, combines cDNA pre-amplification with ad hoc generated calibration curves. All the models we derived yielded concordant predictions when applied to a cohort of 51 lung tumors, and correctly identified all 17 ALK FISH-positive and 33 of the 34 ALK FISH-negative samples. The one discrepant case was confirmed as positive by IHC, thus raising the accuracy of our test to 100%. Importantly, our method was accurate when using low amounts of input RNA (10 ng), also in FFPE samples with limited tumor cellularity (5-10%) and in FFPE cytology specimens. Thus, our test is an easily implementable diagnostic tool for the rapid, efficacious and cost-effective screening of ALK status in patients with lung cancer.
NASA Astrophysics Data System (ADS)
Fedosov, Dmitry
2011-03-01
Computational biophysics is a large and rapidly growing area of computational physics. In this talk, we will focus on a number of biophysical problems related to blood cells and blood flow in health and disease. Blood flow plays a fundamental role in a wide range of physiological processes and pathologies in the organism. To understand and, if necessary, manipulate the course of these processes it is essential to investigate blood flow under realistic conditions including deformability of blood cells, their interactions, and behavior in the complex microvascular network. Using a multiscale cell model we are able to accurately capture red blood cell mechanics, rheology, and dynamics in agreement with a number of single cell experiments. Further, this validated model yields accurate predictions of the blood rheological properties, cell migration, cell-free layer, and hemodynamic resistance in microvessels. In addition, we investigate blood related changes in malaria, which include a considerable stiffening of red blood cells and their cytoadherence to endothelium. For these biophysical problems computational modeling is able to provide new physical insights and capabilities for quantitative predictions of blood flow in health and disease.
NASA Astrophysics Data System (ADS)
Palodiya, Vikram; Raghuwanshi, Sanjeev Kumar
2017-12-01
In this paper, domain inversion is used in a simple fashion to improve the performance of a Z-cut, highly integrated LiNbO3 optical modulator (LNOM). The Z-cut modulator has a switching voltage of ≤ 3 V and a bandwidth of 15 GHz; for an external modulator in which a traveling-wave electrode of length L_{m} imposes the modulating voltage, the product of V_π and L_{m} is fixed for a given electro-optic material (EOM). An investigation aimed at achieving a low V_π through the magnitude of the electro-optic coefficient (EOC) for a wide variety of EOMs is reported. The Sellmeier equation (SE) for the extraordinary index of congruent LiNbO3 is derived. The predictions related to phase matching are accurate between room temperature and 250 °C and for wavelengths ranging from 0.4 to 5 μm. The SE predicts more accurate refractive indices (RI) at long wavelengths. The different overlaps between the waveguides for the Z-cut structure are shown to yield a chirp parameter that can be adjusted from 0 to 0.7. The theoretical results are verified by simulation.
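For reference, a Sellmeier fit for the extraordinary index of congruent LiNbO3 generally takes a form like the following generic single-resonance variant with an infrared correction term; the paper's actual fitted coefficients and their temperature dependence are not reproduced here:

```latex
n_e^2(\lambda, T) \;=\; A(T) \;+\; \frac{B(T)}{\lambda^2 - C(T)^2} \;-\; D\,\lambda^2
```

where λ is the vacuum wavelength (in μm) and A, B, C, D are temperature-dependent coefficients fitted to measured refractive indices.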
Metal nanoplates: Smaller is weaker due to failure by elastic instability
NASA Astrophysics Data System (ADS)
Ho, Duc Tam; Kwon, Soon-Yong; Park, Harold S.; Kim, Sung Youb
2017-11-01
Under mechanical loading, crystalline solids deform elastically, and subsequently yield and fail via plastic deformation. Thus crystalline materials experience two mechanical regimes: elasticity and plasticity. Here, we provide numerical and theoretical evidence to show that metal nanoplates exhibit an intermediate mechanical regime that occurs between elasticity and plasticity, which we call the elastic instability regime. The elastic instability regime begins with a decrease in stress, during which the nanoplates fail via global, and not local, deformation mechanisms that are distinctly different from traditional dislocation-mediated plasticity. Because the nanoplates fail via elastic instability, the governing strength criterion is the ideal strength, rather than the yield strength, and as a result, we observe a unique "smaller is weaker" trend. We develop a simple surface-stress-based analytic model to predict the ideal strength of the metal nanoplates, which accurately reproduces the smaller is weaker behavior observed in the atomistic simulations.
LSS 2018: A double-lined spectroscopic binary central star with an extremely large reflection effect
NASA Technical Reports Server (NTRS)
Drilling, J. S.
1985-01-01
LSS 2018, the central star of the planetary nebula DS1, was found to be a double-lined spectroscopic binary with a period of 8.571 hours. Light variations with the same period were observed in U, B, and V; in the wavelength regions defined by the two IUE cameras; and in the strength of the CIII 4647 emission line. The light variations can be accurately predicted by a simple reflection effect, and an analysis of the light curves yields the angular diameter and effective temperature of the primary, the radii of the two stars in terms of their separation, and the inclination of the system. Analysis of the radial velocities then yields the masses of the two stars, their separation, the distance of the system, the absolute magnitude of the primary, and the size of the nebula.
ANSYS Modeling of Hydrostatic Stress Effects
NASA Technical Reports Server (NTRS)
Allen, Phillip A.
1999-01-01
Classical metal plasticity theory assumes that hydrostatic pressure has no effect on the yield and postyield behavior of metals. Plasticity textbooks, from the earliest to the most modern, infer that there is no hydrostatic effect on the yielding of metals, and even modern finite element programs direct the user to assume the same. The object of this study is to use the von Mises and Drucker-Prager failure theory constitutive models in the finite element program ANSYS to see how well they model conditions of varying hydrostatic pressure. Data are presented for notched round bar (NRB) and "L"-shaped tensile specimens. Similar results from finite element models in ABAQUS are shown for comparison. It is shown that when dealing with geometries having a high hydrostatic stress influence, constitutive models that have a functional dependence on hydrostatic stress are more accurate in predicting material behavior than those that are independent of hydrostatic stress.
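The contrast between the two constitutive models can be illustrated numerically: the von Mises stress is invariant under superposed hydrostatic pressure, while a Drucker-Prager-type measure depends on it through the first stress invariant. The coefficient alpha below is an arbitrary illustrative value, not one from the study:

```python
import math

def von_mises(s1, s2, s3):
    """von Mises equivalent stress from principal stresses."""
    return math.sqrt(0.5 * ((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2))

def drucker_prager(s1, s2, s3, alpha=0.2):
    """Drucker-Prager-type measure sqrt(J2) + alpha * I1: unlike von
    Mises, it depends on the hydrostatic invariant I1 = s1 + s2 + s3."""
    i1 = s1 + s2 + s3
    j2 = von_mises(s1, s2, s3) ** 2 / 3.0
    return math.sqrt(j2) + alpha * i1

# Superposing a hydrostatic pressure p on every principal stress leaves
# von Mises unchanged but shifts the Drucker-Prager measure.
p = 50.0
base = (100.0, 0.0, 0.0)
shifted = tuple(s + p for s in base)
```

This is why, for the high-hydrostatic-stress geometries in the study, the pressure-dependent model tracks the data better.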
NASA Astrophysics Data System (ADS)
Chakraborty, Souvik; Chowdhury, Rajib
2017-12-01
Hybrid polynomial correlated function expansion (H-PCFE) is a novel metamodel formulated by coupling polynomial correlated function expansion (PCFE) and Kriging. Unlike commonly available metamodels, H-PCFE performs a bi-level approximation and hence yields more accurate results. However, to date it is only applicable to medium-scale problems. In order to address this apparent void, this paper presents an improved H-PCFE, referred to as locally refined hp-adaptive H-PCFE. The proposed framework computes the optimal polynomial order and important component functions of PCFE, which is an integral part of H-PCFE, by using global variance-based sensitivity analysis. The optimal number of training points is selected by using distribution-adaptive sequential experimental design. Additionally, the formulated model is locally refined by utilizing the prediction error, which is inherently obtained in H-PCFE. Applicability of the proposed approach has been illustrated with two academic and two industrial problems. To illustrate the superior performance of the proposed approach, results obtained have been compared with those obtained using hp-adaptive PCFE. It is observed that the proposed approach yields highly accurate results. Furthermore, as compared to hp-adaptive PCFE, significantly fewer actual function evaluations are required for obtaining results of similar accuracy.
Hatfield, L.A.; Gutreuter, S.; Boogaard, M.A.; Carlin, B.P.
2011-01-01
Estimation of extreme quantal-response statistics, such as the concentration required to kill 99.9% of test subjects (LC99.9), remains a challenge in the presence of multiple covariates and complex study designs. Accurate and precise estimates of the LC99.9 for mixtures of toxicants are critical to ongoing control of a parasitic invasive species, the sea lamprey, in the Laurentian Great Lakes of North America. The toxicity of those chemicals is affected by local and temporal variations in water chemistry, which must be incorporated into the modeling. We develop multilevel empirical Bayes models for data from multiple laboratory studies. Our approach yields more accurate and precise estimation of the LC99.9 compared to alternative models considered. This study demonstrates that properly incorporating hierarchical structure in laboratory data yields better estimates of LC99.9 stream treatment values that are critical to larvae control in the field. In addition, out-of-sample prediction of the results of in situ tests reveals the presence of a latent seasonal effect not manifest in the laboratory studies, suggesting avenues for future study and illustrating the importance of dual consideration of both experimental and observational data. © 2011, The International Biometric Society.
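In the simplest single-covariate case, an extreme quantile like the LC99.9 follows directly by inverting a fitted logistic dose-response curve; the coefficients below are hypothetical, and it is precisely this naive point estimate that the paper's multilevel models improve upon:

```python
import math

def lc_quantile(p, intercept, slope):
    """Invert a logistic dose-response model,
    logit(P(kill)) = intercept + slope * log10(concentration),
    to get the concentration killing a fraction p of subjects."""
    logit_p = math.log(p / (1.0 - p))
    return 10.0 ** ((logit_p - intercept) / slope)

def kill_prob(conc, intercept=-10.0, slope=4.0):
    """Forward model: probability of kill at a given concentration."""
    eta = intercept + slope * math.log10(conc)
    return 1.0 / (1.0 + math.exp(-eta))

# Hypothetical fitted coefficients for one water-chemistry condition.
lc999 = lc_quantile(0.999, intercept=-10.0, slope=4.0)
```

Because p = 0.999 sits far in the tail, small errors in the fitted slope translate into large errors in the LC99.9, which motivates borrowing strength across studies as the paper does.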
Dube, Timothy; Mutanga, Onisimo; Adam, Elhadi; Ismail, Riyad
2014-01-01
The quantification of aboveground biomass using remote sensing is critical for better understanding the role of forests in carbon sequestration and for informed sustainable management. Although remote sensing techniques have proven useful in assessing forest biomass in general, more work is required to investigate their capabilities in predicting intra- and inter-species biomass, which is mainly characterised by non-linear relationships. In this study, we tested two machine learning algorithms, Stochastic Gradient Boosting (SGB) and Random Forest (RF) regression trees, to predict intra- and inter-species biomass using high resolution RapidEye reflectance bands as well as the derived vegetation indices in a commercial plantation. The results showed that the SGB algorithm yielded the best performance for intra- and inter-species biomass prediction, both using all the predictor variables and based on the most important selected variables. For example, using the most important variables the algorithm produced an R2 of 0.80 and RMSE of 16.93 t·ha−1 for E. grandis; an R2 of 0.79 and RMSE of 17.27 t·ha−1 for P. taeda; and an R2 of 0.61 and RMSE of 43.39 t·ha−1 for the combined species data sets. Comparatively, RF yielded plausible results only for E. dunnii (R2 of 0.79; RMSE of 7.18 t·ha−1). We demonstrated that although the two statistical methods were able to predict biomass accurately, RF produced weaker results than SGB when applied to the combined species dataset. The result underscores the relevance of stochastic models in predicting biomass drawn from different species and genera using the new generation high resolution RapidEye sensor with strategically positioned bands. PMID:25140631
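Gradient boosting of the kind used above builds an ensemble of small trees fitted sequentially to residuals. A minimal (non-stochastic) sketch with depth-1 "stump" learners on synthetic 1-D data, purely to illustrate the mechanism rather than the study's SGB implementation:

```python
def fit_stump(x, residuals):
    """Best single-split regression stump minimizing squared error."""
    best = None
    order = sorted(range(len(x)), key=lambda i: x[i])
    for j in range(1, len(x)):
        thr = (x[order[j - 1]] + x[order[j]]) / 2.0
        left = [residuals[i] for i in range(len(x)) if x[i] <= thr]
        right = [residuals[i] for i in range(len(x)) if x[i] > thr]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, thr, lmean, rmean)
    _, thr, lmean, rmean = best
    return lambda v: lmean if v <= thr else rmean

def boost(x, y, n_rounds=50, lr=0.1):
    """Squared-loss boosting: each stump fits the current residuals,
    and predictions accumulate with shrinkage (learning rate) lr."""
    base = sum(y) / len(y)
    stumps = []
    pred = [base] * len(x)
    for _ in range(n_rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(x, resid)
        stumps.append(stump)
        pred = [pi + lr * stump(xi) for pi, xi in zip(pred, x)]
    return lambda v: base + lr * sum(s(v) for s in stumps)

# Entirely synthetic "biomass vs. band reflectance" toy data.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [2.1, 2.0, 6.2, 6.0, 9.9, 10.1]
model = boost(xs, ys)
```

The "stochastic" variant additionally subsamples the training rows at each round, which is one reason SGB handled the mixed-species data better than RF here.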
Functional classification of protein structures by local structure matching in graph representation.
Mills, Caitlyn L; Garg, Rohan; Lee, Joslynn S; Tian, Liang; Suciu, Alexandru; Cooperman, Gene; Beuning, Penny J; Ondrechen, Mary Jo
2018-03-31
As a result of high-throughput protein structure initiatives, over 14,400 protein structures have been solved by structural genomics (SG) centers and participating research groups. While the totality of SG data represents a tremendous contribution to genomics and structural biology, reliable functional information for these proteins is generally lacking. Better functional predictions for SG proteins will add substantial value to the structural information already obtained. Our method described herein, Graph Representation of Active Sites for Prediction of Function (GRASP-Func), predicts quickly and accurately the biochemical function of proteins by representing residues at the predicted local active site as graphs rather than in Cartesian coordinates. We compare the GRASP-Func method to our previously reported method, structurally aligned local sites of activity (SALSA), using the ribulose phosphate binding barrel (RPBB), 6-hairpin glycosidase (6-HG), and Concanavalin A-like Lectins/Glucanase (CAL/G) superfamilies as test cases. In each of the superfamilies, SALSA and the much faster method GRASP-Func yield similar correct classification of previously characterized proteins, providing a validated benchmark for the new method. In addition, we analyzed SG proteins using our SALSA and GRASP-Func methods to predict function. Forty-one SG proteins in the RPBB superfamily, nine SG proteins in the 6-HG superfamily, and one SG protein in the CAL/G superfamily were successfully classified into one of the functional families in their respective superfamily by both methods. This improved, faster, validated computational method can yield more reliable predictions of function that can be used for a wide variety of applications by the community. © 2018 The Authors Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.
Exchange inlet optimization by genetic algorithm for improved RBCC performance
NASA Astrophysics Data System (ADS)
Chorkawy, G.; Etele, J.
2017-09-01
A genetic algorithm based on real parameter representation, using a variable selection pressure and a variable probability of mutation, is used to optimize an annular air-breathing rocket inlet called the Exchange Inlet. A rapid and accurate design method which provides estimates of air-breathing, mixing, and isentropic flow performance serves as the engine of the optimization routine. Comparison to detailed numerical simulations shows that the design method yields the desired exit Mach numbers to within approximately 1% over 75% of the annular exit area and predicts entrained air mass flows to within 1% to 9% of numerically simulated values, depending on the flight condition. Optimum designs are obtained within approximately 8000 fitness function evaluations in a search space on the order of 10^6. The method is also shown to be able to identify beneficial values for particular alleles when they exist, while handling cases where physical and aphysical designs co-exist at particular values of a subset of alleles within a gene. For an air-breathing engine based on a hydrogen-fuelled rocket, an exchange inlet is designed which yields a predicted air entrainment ratio within 95% of the theoretical maximum.
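A real-parameter GA of the general kind described can be sketched as follows. This is an illustrative stand-in, with fixed tournament selection and a fixed mutation probability rather than the paper's variable selection pressure and variable mutation rate, and a toy objective in place of the inlet design code:

```python
import random

def real_ga(objective, bounds, pop_size=30, generations=100,
            mut_prob=0.2, mut_sigma=0.5, seed=1):
    """Minimize `objective` over the box `bounds` with a real-coded GA:
    tournament selection, blend crossover, and Gaussian mutation."""
    rng = random.Random(seed)

    def clip(v, lo, hi):
        return max(lo, min(hi, v))

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=objective)
        new_pop = scored[:2]  # elitism: carry over the two best designs
        while len(new_pop) < pop_size:
            # Tournament selection (size 3) picks each parent.
            p1 = min(rng.sample(pop, 3), key=objective)
            p2 = min(rng.sample(pop, 3), key=objective)
            child = []
            for (g1, g2), (lo, hi) in zip(zip(p1, p2), bounds):
                g = g1 + rng.random() * (g2 - g1)   # blend crossover
                if rng.random() < mut_prob:
                    g += rng.gauss(0.0, mut_sigma)  # Gaussian mutation
                child.append(clip(g, lo, hi))
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=objective)

# Toy fitness: quadratic bowl with its optimum at (3, 5) in [0, 10]^2.
best = real_ga(lambda v: (v[0] - 3.0) ** 2 + (v[1] - 5.0) ** 2,
               bounds=[(0.0, 10.0), (0.0, 10.0)])
```

In the paper, each fitness evaluation is a full rapid design-method run for the inlet, which is why keeping the evaluation count near 8000 in a search space of ~10^6 matters.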
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen Dianyong; He Jun; Nuclear Theory Group, Institute of Modern Physics of CAS, Lanzhou 730000
2011-10-01
Considering the defects of the previous work for estimating the anomalous production rates of e+e- → Υ(1S)π+π- and Υ(2S)π+π- near the peak of the Υ(5S) resonance at √s = 10.87 GeV [K. F. Chen et al. (Belle Collaboration), Phys. Rev. Lett. 100, 112001 (2008)], we suggest a new scenario in which the contributions from the direct dipion transition and the final-state interactions interfere to produce not only the anomalously large production rates, but also line shapes of the differential widths consistent with the experimental measurement, when assuming the reactions are due to the dipion emission of the Υ(5S). At the end, we raise a new puzzle: the predicted differential width dΓ(Υ(5S) → Υ(2S)π+π-)/dcosθ shows a trend discrepant from the data, while the other predictions accord well with the data. This should be further clarified by more accurate measurements carried out in future experiments.
Microscopic predictions of fission yields based on the time dependent GCM formalism
NASA Astrophysics Data System (ADS)
Regnier, D.; Dubray, N.; Schunck, N.; Verrière, M.
2016-03-01
Accurate knowledge of fission fragment yields is an essential ingredient of numerous applications ranging from the formation of elements in the r-process to fuel cycle optimization in nuclear energy. The need for a predictive theory applicable where no data is available, together with the variety of potential applications, is an incentive to develop a fully microscopic approach to fission dynamics. One of the most promising theoretical frameworks is the time-dependent generator coordinate method (TDGCM) applied under the Gaussian overlap approximation (GOA). Previous studies reported promising results by numerically solving the TDGCM+GOA equation with a finite difference technique. However, the computational cost of this method makes it difficult to properly control numerical errors. In addition, it prevents one from performing calculations with more than two collective variables. To overcome these limitations, we developed the new code FELIX-1.0 that solves the TDGCM+GOA equation based on the Galerkin finite element method. In this article, we briefly illustrate the capabilities of the solver FELIX-1.0, in particular its validation for n+239Pu low energy induced fission. This work is the result of a collaboration between CEA,DAM,DIF and LLNL on nuclear fission theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuniga, Cristal; Li, Chien-Ting; Huelsman, Tyler
The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. vulgaris UTEX 395, iCZ843. The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model accurately predicts phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Moreover, model prediction of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine.
Zuniga, Cristal; Li, Chien-Ting; Huelsman, Tyler; ...
2016-07-02
The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. vulgaris UTEX 395, iCZ843. The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model accurately predicts phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Moreover, model prediction of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine.
Zuñiga, Cristal; Li, Chien-Ting; Huelsman, Tyler; Levering, Jennifer; Zielinski, Daniel C; McConnell, Brian O; Long, Christopher P; Knoshaug, Eric P; Guarnieri, Michael T; Antoniewicz, Maciek R; Betenbaugh, Michael J; Zengler, Karsten
2016-09-01
The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. vulgaris UTEX 395, iCZ843. The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model accurately predicts phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Furthermore, model prediction of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine. © 2016 American Society of Plant Biologists. All rights reserved.
Zuñiga, Cristal; Li, Chien-Ting; Zielinski, Daniel C.; Guarnieri, Michael T.; Antoniewicz, Maciek R.; Zengler, Karsten
2016-01-01
The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. vulgaris UTEX 395, iCZ843. The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model accurately predicts phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Furthermore, model prediction of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine. PMID:27372244
Cacho, J; Sevillano, J; de Castro, J; Herrera, E; Ramos, M P
2008-11-01
Insulin resistance plays a role in the pathogenesis of diabetes, including gestational diabetes. The glucose clamp is considered the gold standard for determining in vivo insulin sensitivity, both in human and in animal models. However, the clamp is laborious, time consuming and, in animals, requires anesthesia and collection of multiple blood samples. In human studies, a number of simple indexes, derived from fasting glucose and insulin levels, have been obtained and validated against the glucose clamp. However, these indexes have not been validated in rats and their accuracy in predicting altered insulin sensitivity remains to be established. In the present study, we have evaluated whether indirect estimates based on fasting glucose and insulin levels are valid predictors of insulin sensitivity in nonpregnant and 20-day-pregnant Wistar and Sprague-Dawley rats. We have analyzed the homeostasis model assessment of insulin resistance (HOMA-IR), the quantitative insulin sensitivity check index (QUICKI), and the fasting glucose-to-insulin ratio (FGIR) by comparing them with the insulin sensitivity (SI(Clamp)) values obtained during the hyperinsulinemic-isoglycemic clamp. We have performed a calibration analysis to evaluate the ability of these indexes to accurately predict insulin sensitivity as determined by the reference glucose clamp. Finally, to assess the reliability of these indexes for the identification of animals with impaired insulin sensitivity, performance of the indexes was analyzed by receiver operating characteristic (ROC) curves in Wistar and Sprague-Dawley rats. We found that HOMA-IR, QUICKI, and FGIR correlated significantly with SI(Clamp), exhibited good sensitivity and specificity, accurately predicted SI(Clamp), and yielded lower insulin sensitivity in pregnant than in nonpregnant rats. Together, our data demonstrate that these indexes provide an easy and accurate measure of insulin sensitivity during pregnancy in the rat.
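The three surrogate indexes compared above have standard closed-form definitions based on fasting glucose and insulin. A minimal sketch of the conventional formulas (human-study units; the rat study validates these same indexes against the clamp):

```python
import math

def homa_ir(glucose_mmol_l, insulin_uu_ml):
    """Homeostasis model assessment of insulin resistance (higher = more resistant)."""
    return glucose_mmol_l * insulin_uu_ml / 22.5

def quicki(glucose_mg_dl, insulin_uu_ml):
    """Quantitative insulin sensitivity check index (higher = more sensitive)."""
    return 1.0 / (math.log10(glucose_mg_dl) + math.log10(insulin_uu_ml))

def fgir(glucose_mg_dl, insulin_uu_ml):
    """Fasting glucose-to-insulin ratio (higher = more sensitive)."""
    return glucose_mg_dl / insulin_uu_ml
```

All three vary monotonically with insulin sensitivity, which is what allows them to be calibrated against SI(Clamp) and screened with ROC curves as described above.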
Remote-sensing-based rapid assessment of flood crop loss to support USDA flooding decision-making
NASA Astrophysics Data System (ADS)
Di, L.; Yu, G.; Yang, Z.; Hipple, J.; Shrestha, R.
2016-12-01
Floods often cause significant crop loss in the United States. Timely and objective assessment of flood-related crop loss is very important for crop monitoring and risk management in agricultural and disaster-related decision-making in USDA. Among all flood-related information, crop yield loss is particularly important: decisions on proper mitigation, relief, and monetary compensation rely on it. Currently USDA relies mostly on field surveys to obtain crop loss information and compensate farmers' loss claims. Such methods are expensive, labor intensive, and time consuming, especially for a large flood that affects a large geographic area. Recent studies have demonstrated that Earth observation (EO) data are useful for objective, timely, accurate, and cost-effective post-flood crop loss assessment over large geographic areas. There are three stages of flood damage assessment: rapid assessment, early recovery assessment, and in-depth assessment. EO-based flood assessment methods currently rely on a time series of vegetation index to assess yield loss. Such methods are suitable for in-depth assessment but less suitable for rapid assessment, since the after-flood vegetation index time series is not yet available. This presentation presents a new EO-based method for the rapid assessment of crop yield loss immediately after a flood event to support USDA flood decision-making. The method is based on historic records of flood severity, flood duration, flood date, crop type, EO-based crop conditions both before and immediately after the flood, and the corresponding crop yield loss. It hypothesizes that floods of the same severity occurring at the same phenological stage of a crop will cause similar damage to the crop yield regardless of the year. With this hypothesis, a regression-based rapid assessment algorithm can be developed by learning from historic records of flood events and the corresponding crop yield losses.
In this study, historic records of MODIS-based flood and vegetation products and USDA/NASS crop type and crop yield data are used to train the regression-based rapid assessment algorithm. Validation of the rapid assessment algorithm indicates that it can predict yield loss with 90% accuracy, which is accurate enough to support USDA in flood-related quick response and mitigation.
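The regression-based rapid assessment described above can be sketched as an ordinary least-squares fit over historical flood events; the feature set and numbers below are hypothetical placeholders for illustration, not the study's actual training data:

```python
import numpy as np

# Hypothetical historical flood records: each row is
# [severity index, duration (days), crop phenological stage,
#  pre-flood NDVI, immediate post-flood NDVI]; y is observed yield loss (%).
X = np.array([
    [0.8, 10, 3, 0.72, 0.35],
    [0.5,  4, 2, 0.68, 0.55],
    [0.9, 14, 4, 0.75, 0.22],
    [0.3,  2, 1, 0.60, 0.58],
    [0.7,  8, 3, 0.70, 0.40],
])
y = np.array([55.0, 20.0, 70.0, 5.0, 45.0])

# Fit the regression with an intercept term via least squares.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_loss(features):
    """Rapid post-flood yield-loss estimate from a new event's features."""
    return float(np.dot(np.concatenate(([1.0], features)), coef))
```

A new event's severity, duration, phenological stage, and before/after crop-condition observations then yield an immediate loss estimate, without waiting for a post-flood vegetation-index time series to accumulate.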
Modeling soil erosion processes on a hillslope with dendritic rill network
NASA Astrophysics Data System (ADS)
Chen, L.; Wu, S.
2017-12-01
Previous studies have usually neglected the effect of the planform of a dendritic rill network on hillslope rainfall-runoff and soil erosion processes, even though it can dramatically alter the mechanisms of the hydrologic and geomorphic processes. In the present study, the interrill areas were treated as two-dimensional (2D), while the complicated rill network was represented by a piecewise one-dimensional (1D) rill retaining the characteristics of the rill network (rill density and average rill deflection angle). Based on a 2D diffusive-wave overland flow model and the WEPP erosion theory, a coupled 1D-2D model was developed to simulate hillslope runoff and soil erosion on both the interrill areas and the representative rill. The rill number and rill inclination angle were introduced in the model to reflect the actual rill density, rill length, rill slope gradient, and confluence processes from the interrill areas to the rill. The excess rainfall and sediment load entering the representative rill came not only from the two lateral interrill areas but also from the upstream interrill areas. The model was successfully tested against experimental data obtained from a hillslope with a complicated rill network. Comparison of the results obtained from the present model with WEPP indicates that WEPP calculated the hillslope runoff yield accurately but overestimated the amount of rill erosion. Moreover, the effects of rill deflection angle and rill number distribution on both interrill and rill erosion were examined, and neglecting the planar characteristics of the rill network was found to have a considerable impact on soil erosion prediction. It is expected that the model can extend the scope of WEPP application and more accurately predict the runoff and erosion yield on a hillslope with a complicated rill network.
Exploring Mouse Protein Function via Multiple Approaches.
Huang, Guohua; Chu, Chen; Huang, Tao; Kong, Xiangyin; Zhang, Yunhua; Zhang, Ning; Cai, Yu-Dong
2016-01-01
Although the number of available protein sequences is growing exponentially, functional protein annotations lag far behind. Therefore, accurate identification of protein functions remains one of the major challenges in molecular biology. In this study, we presented a novel approach to predict mouse protein functions. The approach was a sequential combination of a similarity-based approach, an interaction-based approach and a pseudo amino acid composition-based approach. The method achieved an accuracy of about 0.8450 for the 1st-order predictions in the leave-one-out and ten-fold cross-validations. For the results yielded by the leave-one-out cross-validation, although the similarity-based approach alone achieved an accuracy of 0.8756, it was unable to predict the functions of proteins with no homologues. Comparatively, the pseudo amino acid composition-based approach alone reached an accuracy of 0.6786. Although the accuracy was lower than that of the previous approach, it could predict the functions of almost all proteins, even proteins with no homologues. Therefore, the combined method balanced the advantages and disadvantages of both approaches to achieve efficient performance. Furthermore, the results yielded by the ten-fold cross-validation indicate that the combined method is still effective and stable when no close homologs are available. However, the accuracy of the predicted functions can only be determined according to known protein functions based on current knowledge. Many protein functions remain unknown. By exploring the functions of proteins for which the 1st-order predicted functions are wrong but the 2nd-order predicted functions are correct, the 1st-order wrongly predicted functions were shown to be closely associated with the genes encoding the proteins. The so-called wrongly predicted functions could also potentially be correct upon future experimental verification.
Therefore, the accuracy of the presented method may be much higher in reality.
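The "sequential combination" of predictors can be sketched as a fallback chain: use the similarity-based prediction when homologues exist, then the interaction-based one, and finally the sequence-only pseudo amino acid composition model, which is always applicable. This is a simplified reading of the combination scheme, not the authors' exact algorithm; the database and model objects are placeholders:

```python
def predict_function(protein, similarity_db, interaction_db, pseaac_model):
    """Fallback chain: homology first, then interactions, then the
    sequence-only pseudo amino acid composition model (always available)."""
    hit = similarity_db.get(protein)
    if hit is not None:
        return hit                    # similarity-based prediction
    hit = interaction_db.get(protein)
    if hit is not None:
        return hit                    # interaction-based prediction
    return pseaac_model(protein)      # sequence-only fallback
```

This structure explains the accuracy trade-off in the abstract: the homology branch is the most accurate (0.8756) but has no coverage for orphan proteins, while the composition branch covers everything at lower accuracy (0.6786).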
Tiezzi, Francesco; Maltecca, Christian
2015-04-02
Genomic BLUP (GBLUP) can predict breeding values for non-phenotyped individuals based on the identity-by-state genomic relationship matrix (G). The G matrix can be constructed from thousands of markers spread across the genome. The strongest assumption of G and consequently of GBLUP is that all markers contribute equally to the genetic variance of a trait. This assumption is violated for traits that are controlled by a small number of quantitative trait loci (QTL) or individual QTL with large effects. In this paper, we investigate the performance of using a weighted genomic relationship matrix (wG) that takes into consideration the genetic architecture of the trait in order to improve predictive ability for a wide range of traits. Multiple methods were used to calculate weights for several economically relevant traits in US Holstein dairy cattle. Predictive performance was tested by k-means cross-validation. Relaxing the GBLUP assumption of equal marker contribution by increasing the weight that is given to a specific marker in the construction of the trait-specific G resulted in increased predictive performance. The increase was strongest for traits that are controlled by a small number of QTL (e.g. fat and protein percentage). Furthermore, bias in prediction estimates was reduced compared to that resulting from the use of regular G. Even for traits with low heritability and lower general predictive performance (e.g. calving ease traits), weighted G still yielded a gain in accuracy. Genomic relationship matrices weighted by marker realized variance yielded more accurate and less biased predictions for traits regulated by few QTL. Genome-wide association analyses were used to derive marker weights for creating weighted genomic relationship matrices. However, this can be cumbersome and prone to low stability over generations because of erosion of linkage disequilibrium between markers and QTL. 
Future studies may include other sources of information, such as functional annotation and gene networks, to better exploit the genetic architecture of traits and produce more stable predictions.
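A weighted genomic relationship matrix generalizes the standard (VanRaden-type) G by giving each marker its own variance weight; equal weights recover regular G. A minimal sketch with simulated genotypes (the weights here are random placeholders, whereas the study derives them from genome-wide association results):

```python
import numpy as np

rng = np.random.default_rng(0)
n_animals, n_markers = 20, 200

# Marker genotypes coded 0/1/2, centered by twice the allele frequency.
M = rng.integers(0, 3, size=(n_animals, n_markers)).astype(float)
p = M.mean(axis=0) / 2.0
Z = M - 2.0 * p

def weighted_g(Z, p, weights):
    """Weighted genomic relationship matrix Z D Z' with a
    VanRaden-style normalization; equal weights give regular G."""
    D = np.diag(weights)
    scale = 2.0 * np.sum(weights * p * (1.0 - p))
    return Z @ D @ Z.T / scale

G  = weighted_g(Z, p, np.ones(n_markers))   # regular G (equal marker weights)
w  = rng.random(n_markers)                  # placeholder trait-specific weights
Gw = weighted_g(Z, p, w)                    # weighted G emphasizing some loci
```

Up-weighting markers near large-effect QTL is what relaxes the equal-contribution assumption of GBLUP for traits such as fat and protein percentage.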
Exploring Mouse Protein Function via Multiple Approaches
Huang, Tao; Kong, Xiangyin; Zhang, Yunhua; Zhang, Ning
2016-01-01
Although the number of available protein sequences is growing exponentially, functional protein annotations lag far behind. Therefore, accurate identification of protein functions remains one of the major challenges in molecular biology. In this study, we presented a novel approach to predict mouse protein functions. The approach was a sequential combination of a similarity-based approach, an interaction-based approach and a pseudo amino acid composition-based approach. The method achieved an accuracy of about 0.8450 for the 1st-order predictions in the leave-one-out and ten-fold cross-validations. For the results yielded by the leave-one-out cross-validation, although the similarity-based approach alone achieved an accuracy of 0.8756, it was unable to predict the functions of proteins with no homologues. Comparatively, the pseudo amino acid composition-based approach alone reached an accuracy of 0.6786. Although the accuracy was lower than that of the previous approach, it could predict the functions of almost all proteins, even proteins with no homologues. Therefore, the combined method balanced the advantages and disadvantages of both approaches to achieve efficient performance. Furthermore, the results yielded by the ten-fold cross-validation indicate that the combined method is still effective and stable when no close homologs are available. However, the accuracy of the predicted functions can only be determined according to known protein functions based on current knowledge. Many protein functions remain unknown. By exploring the functions of proteins for which the 1st-order predicted functions are wrong but the 2nd-order predicted functions are correct, the 1st-order wrongly predicted functions were shown to be closely associated with the genes encoding the proteins. The so-called wrongly predicted functions could also potentially be correct upon future experimental verification.
Therefore, the accuracy of the presented method may be much higher in reality. PMID:27846315
Quantum Yields in Mixed-Conifer Forests and Ponderosa Pine Plantations
NASA Astrophysics Data System (ADS)
Wei, L.; Marshall, J. D.; Zhang, J.
2008-12-01
Most process-based physiological models require the canopy quantum yield of photosynthesis as a starting point to simulate carbon sequestration and subsequently gross primary production (GPP). The quantum yield is a measure of photosynthetic efficiency expressed in moles of CO2 assimilated per mole of photons absorbed; the process is influenced by environmental factors. In the summer of 2008, we measured quantum yields on both sun and shade leaves for four conifer species at five sites within Mica Creek Experimental Watershed (MCEW) in northern Idaho and one conifer species at three sites in northern California. The MCEW forest is typical of mixed conifer stands dominated by grand fir (Abies grandis (Douglas ex D. Don) Lindl.). In northern California, the three sites with contrasting site qualities are ponderosa pine (Pinus ponderosa C. Lawson var. ponderosa) plantations that were experimentally treated with vegetation control, fertilization, and a combination of both. We found that quantum yields in MCEW ranged from ~0.045 to ~0.075 mol CO2 per mol incident photons. However, there were no significant differences between canopy positions, or among sites or tree species. In northern California, the mean quantum yield across the three sites was 0.051 mol CO2 per mol incident photons. No significant difference in quantum yield was found between canopy positions, or among treatments or sites. The results suggest that these conifer species maintain relatively consistent quantum yields in both MCEW and northern California. This consistency simplifies the use of a process-based model to accurately predict forest productivity in these areas.
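The apparent quantum yield is typically estimated as the initial slope of the light-response curve, net assimilation versus incident photon flux at low irradiance. A minimal sketch with hypothetical measurements (the data are constructed so the slope falls inside the reported ~0.045-0.075 range):

```python
import numpy as np

# Hypothetical low-irradiance light-response measurements:
# incident photon flux (µmol photons m⁻² s⁻¹) vs net CO2 assimilation
# (µmol CO2 m⁻² s⁻¹); the negative value at zero light is dark respiration.
ppfd  = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
assim = np.array([-1.0, 0.1, 1.2, 2.3, 3.4, 4.5])

# Apparent quantum yield = initial slope of the linear region.
slope, intercept = np.polyfit(ppfd, assim, 1)
quantum_yield = slope   # mol CO2 per mol incident photons
```

In practice the fit is restricted to the linear low-light region, since the response saturates at higher irradiance.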
A novel method for structure-based prediction of ion channel conductance properties.
Smart, O S; Breed, J; Smith, G R; Sansom, M S
1997-01-01
A rapid and easy-to-use method of predicting the conductance of an ion channel from its three-dimensional structure is presented. The method combines the pore dimensions of the channel, as measured with the HOLE program, with an Ohmic model of conductance; an empirically based correction factor is then applied. The method yielded good results for six experimental channel structures (none of which were included in the training set), with predictions accurate on average to within a factor of 1.62 of the true values. The predictive r2 was 0.90, which is indicative of good predictive ability. The procedure is used to validate model structures of alamethicin and phospholamban. Two genuine predictions for the conductance of channels with known structure but without reported conductances are given. A modification of the procedure that calculates the expected effect of the addition of nonelectrolyte polymers on conductance is set out. Results for a cholera toxin B-subunit crystal structure agree well with the measured values. The difficulty in interpreting such studies is discussed, with the conclusion that measurements on channels of known structure are required. PMID:9138559
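The core of the HOLE-plus-Ohmic approach treats the pore as a stack of thin cylindrical slabs in series, each contributing resistance dz/(κπr²), where κ is the bulk electrolyte conductivity. A minimal sketch of that Ohmic estimate (the paper's empirical correction factor is omitted here, so this is the uncorrected prediction only):

```python
import math

def ohmic_conductance(radii_nm, dz_nm, conductivity_s_per_m):
    """Ohmic conductance estimate from a HOLE-style pore radius profile:
    sum slab resistances dz / (kappa * pi * r^2) in series, then invert."""
    resistance = 0.0
    for r in radii_nm:
        area_m2 = math.pi * (r * 1e-9) ** 2
        resistance += (dz_nm * 1e-9) / (conductivity_s_per_m * area_m2)
    return 1.0 / resistance   # siemens
```

For a uniform cylinder the sum reduces to the textbook κπr²/L, which is a useful sanity check; narrow constrictions dominate the series sum, which is why the pore's minimum radius largely controls the prediction.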
NASA Astrophysics Data System (ADS)
Pellereau, E.; Taïeb, J.; Chatillon, A.; Alvarez-Pol, H.; Audouin, L.; Ayyad, Y.; Bélier, G.; Benlliure, J.; Boutoux, G.; Caamaño, M.; Casarejos, E.; Cortina-Gil, D.; Ebran, A.; Farget, F.; Fernández-Domínguez, B.; Gorbinet, T.; Grente, L.; Heinz, A.; Johansson, H.; Jurado, B.; Kelić-Heil, A.; Kurz, N.; Laurent, B.; Martin, J.-F.; Nociforo, C.; Paradela, C.; Pietri, S.; Rodríguez-Sánchez, J. L.; Schmidt, K.-H.; Simon, H.; Tassan-Got, L.; Vargas, J.; Voss, B.; Weick, H.
2017-05-01
SOFIA (Studies On Fission with Aladin) is a novel experimental program, dedicated to accurate measurements of fission-fragment isotopic yields. The setup allows us to fully identify, in nuclear charge and mass, both fission fragments in coincidence for the whole fission-fragment range. It was installed at the GSI facility (Darmstadt), to benefit from the relativistic heavy-ion beams available there, and thus to use inverse kinematics. This paper reports on fission yields obtained in electromagnetically induced fission of 238U.
Powder diffraction and crystal structure prediction identify four new coumarin polymorphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shtukenberg, Alexander G.; Zhu, Qiang; Carter, Damien J.
Coumarin, a simple, commodity chemical isolated from beans in 1820, has, to date, only yielded one solid state structure. Here, we report a rich polymorphism of coumarin grown from the melt. Four new metastable forms were identified and their crystal structures were solved using a combination of computational crystal structure prediction algorithms and X-ray powder diffraction. With five crystal structures, coumarin has become one of the few rigid molecules showing extensive polymorphism at ambient conditions. We demonstrate the crucial role of advanced electronic structure calculations including many-body dispersion effects for accurate ranking of the stability of coumarin polymorphs and the need to account for anharmonic vibrational contributions to their free energy. As such, coumarin is a model system for studying weak intermolecular interactions, crystallization mechanisms, and kinetic effects.
Predictive data-based exposition of 5s5p ¹,³P₁ lifetimes in the Cd isoelectronic sequence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, L. J.; Matulioniene, R.; Ellis, D. G.
2000-11-01
Experimental and theoretical values for the lifetimes of the 5s5p ¹P₁ and ³P₁ levels in the Cd isoelectronic sequence are examined in the context of a data-based isoelectronic systematization. Lifetime and energy-level data are combined to account for the effects of intermediate coupling, thereby reducing the data to a regular and slowly varying parametric mapping. This empirically characterizes small contributions due to spin-other-orbit interaction, spin dependences of the radial wave functions, and configuration interaction, and yields accurate interpolative and extrapolative predictions. Multiconfiguration Dirac-Hartree-Fock calculations are used to verify the regularity of these trends, and to examine the extent to which they can be extrapolated to high nuclear charge.
Powder diffraction and crystal structure prediction identify four new coumarin polymorphs
Shtukenberg, Alexander G.; Zhu, Qiang; Carter, Damien J.; ...
2017-05-15
Coumarin, a simple, commodity chemical isolated from beans in 1820, has, to date, only yielded one solid state structure. Here, we report a rich polymorphism of coumarin grown from the melt. Four new metastable forms were identified and their crystal structures were solved using a combination of computational crystal structure prediction algorithms and X-ray powder diffraction. With five crystal structures, coumarin has become one of the few rigid molecules showing extensive polymorphism at ambient conditions. We demonstrate the crucial role of advanced electronic structure calculations including many-body dispersion effects for accurate ranking of the stability of coumarin polymorphs and the need to account for anharmonic vibrational contributions to their free energy. As such, coumarin is a model system for studying weak intermolecular interactions, crystallization mechanisms, and kinetic effects.
Modeling the growth of Listeria monocytogenes in mold-ripened cheeses.
Lobacz, Adriana; Kowalik, Jaroslaw; Tarczynska, Anna
2013-06-01
This study presents possible applications of predictive microbiology to model the safety of mold-ripened cheeses with respect to bacteria of the species Listeria monocytogenes during (1) the ripening of Camembert cheese, (2) cold storage of Camembert cheese at temperatures ranging from 3 to 15°C, and (3) cold storage of blue cheese at temperatures ranging from 3 to 15°C. The primary models used in this study, such as the Baranyi model and the modified Gompertz function, were fitted to growth curves. The Baranyi model yielded the best goodness of fit, and the growth rates generated by this model were used for secondary modeling (Ratkowsky simple square root and polynomial models). The polynomial model more accurately predicted the influence of temperature on the growth rate, reaching adjusted coefficients of multiple determination of 0.97 and 0.92 for Camembert and blue cheese, respectively. The observed growth rates of L. monocytogenes in mold-ripened cheeses were compared with simulations run with the Pathogen Modeling Program (PMP 7.0, USDA, Wyndmoor, PA) and ComBase Predictor (Institute of Food Research, Norwich, UK); however, the latter predictions proved to be consistently overestimated and contained a significant level of error. In addition, a validation process using independent data on dairy products from the ComBase database (www.combase.cc) was performed. In conclusion, it was found that L. monocytogenes grows much faster in Camembert than in blue cheese. Both the Baranyi and Gompertz models described this phenomenon accurately, although the Baranyi model contained a smaller error. Secondary modeling and further validation of the generated models highlighted the issue of usability and applicability of predictive models in the food processing industry by elaborating models targeted at a specific product or a group of similar products. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
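The modified Gompertz function used as a primary growth model has the standard reparameterized (Zwietering) form, with asymptote A, maximum specific growth rate μmax, and lag time λ; a minimal sketch of evaluating it (fitting to growth curves would add a least-squares step on top):

```python
import math

def modified_gompertz(t, A, mu_max, lag):
    """Modified Gompertz growth curve (Zwietering reparameterization):
    log-count increase vs time, with asymptote A, maximum specific
    growth rate mu_max, and lag time lag."""
    return A * math.exp(-math.exp(mu_max * math.e / A * (lag - t) + 1.0))
```

The curve stays near zero through the lag phase, rises at rate μmax, and saturates at A, which is why it (like the Baranyi model) can be fitted directly to log-transformed L. monocytogenes counts.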
Reynolds, Sheila M; Bilmes, Jeff A; Noble, William Stafford
2010-07-08
DNA in eukaryotes is packaged into a chromatin complex, the most basic element of which is the nucleosome. The precise positioning of the nucleosome cores allows for selective access to the DNA, and the mechanisms that control this positioning are important pieces of the gene expression puzzle. We describe a large-scale nucleosome pattern that jointly characterizes the nucleosome core and the adjacent linkers and is predominantly characterized by long-range oscillations in the mono, di- and tri-nucleotide content of the DNA sequence, and we show that this pattern can be used to predict nucleosome positions in both Homo sapiens and Saccharomyces cerevisiae more accurately than previously published methods. Surprisingly, in both H. sapiens and S. cerevisiae, the most informative individual features are the mono-nucleotide patterns, although the inclusion of di- and tri-nucleotide features results in improved performance. Our approach combines a much longer pattern than has been previously used to predict nucleosome positioning from sequence (301 base pairs, centered at the position to be scored) with a novel discriminative classification approach that selectively weights the contributions from each of the input features. The resulting scores are relatively insensitive to local AT-content and can be used to accurately discriminate putative dyad positions from adjacent linker regions without requiring an additional dynamic programming step and without the attendant edge effects and assumptions about linker length modeling and overall nucleosome density. Our approach produces the best dyad-linker classification results published to date in H. sapiens, and outperforms two recently published models on a large set of S. cerevisiae nucleosome positions. Our results suggest that in both genomes, a comparable and relatively small fraction of nucleosomes are well-positioned and that these positions are predictable based on sequence alone.
We believe that the bulk of the remaining nucleosomes follow a statistical positioning model.
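The sequence features behind this kind of model are windowed k-mer (mono-, di-, tri-nucleotide) frequencies around each candidate dyad position; a minimal sketch of extracting them for the 301-bp window described above (the discriminative weighting and classification step is not reproduced here):

```python
from collections import Counter

def kmer_features(seq, center, half_window=150, k=1):
    """Normalized k-mer frequencies in a (2*half_window + 1)-bp window
    centered at a candidate dyad position (301 bp for half_window=150)."""
    window = seq[center - half_window : center + half_window + 1]
    counts = Counter(window[i:i + k] for i in range(len(window) - k + 1))
    total = sum(counts.values())
    return {kmer: n / total for kmer, n in counts.items()}
```

Scoring every position of a chromosome with such features, at k = 1, 2, 3, yields the per-position inputs that a discriminative classifier can weight to separate dyads from linkers.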
Reynolds, Sheila M.; Bilmes, Jeff A.; Noble, William Stafford
2010-01-01
DNA in eukaryotes is packaged into a chromatin complex, the most basic element of which is the nucleosome. The precise positioning of the nucleosome cores allows for selective access to the DNA, and the mechanisms that control this positioning are important pieces of the gene expression puzzle. We describe a large-scale nucleosome pattern that jointly characterizes the nucleosome core and the adjacent linkers and is predominantly characterized by long-range oscillations in the mono, di- and tri-nucleotide content of the DNA sequence, and we show that this pattern can be used to predict nucleosome positions in both Homo sapiens and Saccharomyces cerevisiae more accurately than previously published methods. Surprisingly, in both H. sapiens and S. cerevisiae, the most informative individual features are the mono-nucleotide patterns, although the inclusion of di- and tri-nucleotide features results in improved performance. Our approach combines a much longer pattern than has been previously used to predict nucleosome positioning from sequence—301 base pairs, centered at the position to be scored—with a novel discriminative classification approach that selectively weights the contributions from each of the input features. The resulting scores are relatively insensitive to local AT-content and can be used to accurately discriminate putative dyad positions from adjacent linker regions without requiring an additional dynamic programming step and without the attendant edge effects and assumptions about linker length modeling and overall nucleosome density. Our approach produces the best dyad-linker classification results published to date in H. sapiens, and outperforms two recently published models on a large set of S. cerevisiae nucleosome positions. Our results suggest that in both genomes, a comparable and relatively small fraction of nucleosomes are well-positioned and that these positions are predictable based on sequence alone. 
We believe that the bulk of the remaining nucleosomes follow a statistical positioning model. PMID:20628623
NASA Technical Reports Server (NTRS)
Haugen, H. K.; Weitz, E.; Leone, S. R.
1985-01-01
Various techniques have been used to study photodissociation dynamics of the halogens and interhalogens. The quantum yields obtained by these techniques differ widely. The present investigation is concerned with a qualitatively new approach for obtaining highly accurate quantum yields for electronically excited states. This approach makes it possible to obtain an accuracy of 1 percent to 3 percent. It is shown that measurement of the initial transient gain/absorption vs the final absorption in a single time-resolved signal is a very accurate technique in the study of absolute branching fractions in photodissociation. The new technique is found to be insensitive to pulse and probe laser characteristics, molecular absorption cross sections, and absolute precursor density.
Remotely sensed rice yield prediction using multi-temporal NDVI data derived from NOAA's-AVHRR.
Huang, Jingfeng; Wang, Xiuzhen; Li, Xinxing; Tian, Hanqin; Pan, Zhuokun
2013-01-01
Grain-yield prediction using remotely sensed data has been intensively studied in wheat and maize, but such information is limited in rice, barley, oats and soybeans. The present study proposes a new framework for rice-yield prediction, which eliminates the influences of technology development, fertilizer application, and management improvement and can be used for the development and implementation of provincial rice-yield predictions. The technique requires the collection of remotely sensed data over an adequate time frame and a corresponding record of the region's crop yields. Longer normalized-difference-vegetation-index (NDVI) time series are preferable to shorter ones for the purposes of rice-yield prediction because the well-contrasted seasons in a longer time series provide the opportunity to build regression models with a wide application range. A regression analysis of the yield versus the year indicated an annual gain in the rice yield of 50 to 128 kg ha−1. Stepwise regression models for the remotely sensed rice-yield predictions have been developed for five typical rice-growing provinces in China. The prediction models for the remotely sensed rice yield indicated that the influences of the NDVIs on the rice yield were always positive. The association between the predicted and observed rice yields was highly significant without obvious outliers from 1982 to 2004. Independent validation found that the overall relative error is approximately 5.82%, and a majority of the relative errors were less than 5% in 2005 and 2006, depending on the study area. The proposed models can be used in an operational context to predict rice yields at the provincial level in China. The methodologies described in the present paper can be applied to any crop for which a sufficient time series of NDVI data and the corresponding historical yield information are available, as long as the historical yield increases significantly.
NASA Astrophysics Data System (ADS)
Soucemarianadin, Laure; Barré, Pierre; Baudin, François; Chenu, Claire; Houot, Sabine; Kätterer, Thomas; Macdonald, Andy; van Oort, Folkert; Plante, Alain F.; Cécillon, Lauric
2017-04-01
The organic carbon reservoir of soils is a key component of climate change, calling for an accurate knowledge of the residence time of soil organic carbon (SOC). Existing proxies of the size of the labile SOC pool, such as SOC fractionation or respiration tests, are time consuming and unable to consistently predict SOC mineralization over years to decades. Similarly, models of SOC dynamics often yield unrealistic values of the size of SOC kinetic pools. Thermal analysis of bulk soil samples has recently been shown to provide useful and cost-effective information regarding the long-term in-situ decomposition of SOC. Barré et al. (2016) analyzed soil samples from long-term bare fallow sites in northwestern Europe using Rock-Eval 6 pyrolysis (RE6), and demonstrated that persistent SOC is thermally more stable and has fewer hydrogen-rich compounds (low RE6 HI parameter) than labile SOC. The objective of this study was to predict SOC loss over a 20-year period (i.e. the size of the SOC pool with a residence time lower than 20 years) using RE6 indicators. Thirty-six archive soil samples from 4 long-term bare fallow chronosequences (Grignon, France; Rothamsted, Great Britain; Ultuna, Sweden; Versailles, France) were used in this study. For each sample, the value of bi-decadal SOC mineralization was obtained from the observed SOC dynamics of its long-term bare fallow plot (approximated by a spline function). Those values ranged from 0.8 to 14.3 gC·kg−1 (concentration data), representing 8.6 to 50.6% of total SOC (proportion data). All samples were analyzed using RE6, and simple linear regression models were used to predict bi-decadal SOC loss (concentration and proportion data) from 4 RE6 parameters: HI, OI, PC/SOC and T50 CO2 oxidation. HI (the amount of hydrogen-rich effluents formed during the pyrolysis phase of RE6; mgCH·g−1 SOC) and OI (the CO2 yield during the pyrolysis phase of RE6; mgCO2·g−1 SOC) parameters describe SOC bulk chemistry.
PC/SOC (the amount of organic C evolved during the pyrolysis phase of RE6; % of total SOC) and T50 CO2 oxidation (the temperature at which 50% of the residual organic C was oxidized to CO2 during the RE6 oxidation phase; °C) parameters represent SOC thermal stability. The RE6 HI parameter yielded the best predictions of bi-decadal SOC mineralization, for both concentration (R2 = 0.75) and proportion (R2 = 0.66) data. PC/SOC and T50 CO2 oxidation parameters also yielded significant regression models, with R2 = 0.68 and 0.42 for concentration data and R2 = 0.59 and 0.26 for proportion data, respectively. The OI parameter was not a good predictor of bi-decadal SOC loss, with non-significant regression models. The RE6 thermal analysis method can predict in-situ SOC biogeochemical stability. SOC chemical composition, and to a lesser extent SOC thermal stability, are related to its bi-decadal dynamics. RE6 appears to be a more accurate and convenient proxy of the size of the bi-decadal labile SOC pool than other existing methodologies. Future developments include the validation of these RE6 models of bi-decadal SOC loss on soils from contrasting pedoclimatic conditions. Reference: Barré et al., 2016. Biogeochemistry 130, 1-12
NASA Technical Reports Server (NTRS)
OBrien, T. Kevin; Chawan, Arun D.; DeMarco, Kevin; Paris, Isabelle
2001-01-01
The influence of specimen polishing, configuration, and size on the transverse tension strength of two glass-epoxy materials, and one carbon-epoxy material, loaded in three and four point bending was evaluated. Polishing machined edges, and/or tension-side failure surfaces, was detrimental to specimen strength characterization, rather than yielding a higher, more accurate strength as a result of removing inherent manufacture and handling flaws. Transverse tension strength was typically lower for longer span lengths due to the classical weakest-link effect. However, strength was less sensitive to volume changes achieved by increasing specimen width. The Weibull scaling law typically over-predicted changes in transverse tension strengths in three point bend tests and under-predicted changes in transverse tension strengths in four point bend tests. Furthermore, the Weibull slope varied with specimen configuration, volume, and sample size. Hence, this scaling law was not adequate for predicting transverse tension strength of heterogeneous, fiber-reinforced, polymer matrix composites.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schultz, Peter A.
For the purposes of making reliable first-principles predictions of defect energies in semiconductors, it is crucial to distinguish between effective-mass-like defects, which cannot be treated accurately with existing supercell methods, and deep defects, for which density functional theory calculations can yield reliable predictions of defect energy levels. The gallium antisite defect Ga(As) is often associated with the 78/203 meV shallow double acceptor in Ga-rich gallium arsenide. Within a conceptual framework of level patterns, analyses of structure and spin stabilization can be used within a supercell approach to distinguish localized deep defect states from shallow acceptors such as B(As). This systematic approach determines that the gallium antisite supercell results have signatures inconsistent with an effective-mass state and that the defect cannot be the 78/203 meV shallow double acceptor. Finally, the properties of the Ga antisite in GaAs are described: total energy calculations that explicitly map onto asymptotic discrete localized bulk states predict that the Ga antisite is a deep double acceptor with at least one deep donor state.
Brown, Richard J C; Wang, Jian; Tantra, Ratna; Yardley, Rachel E; Milton, Martin J T
2006-01-01
Despite widespread use for more than two decades, the SERS phenomenon has defied accurate physical and chemical explanation. The relative contributions from electromagnetic and chemical mechanisms are difficult to quantify and are often not reproduced under nominally similar experimental conditions. This work has used electromagnetic modelling to predict the Raman enhancement expected from three configurations: metal nanoparticles, structured metal surfaces, and sharp metal tips interacting with metal surfaces. In each case, parameters such as artefact size, artefact separation and incident radiation wavelength have been varied and the resulting electromagnetic field modelled. This has yielded an electromagnetic description of these configurations with predictions of the maximum expected Raman enhancement, and hence a prediction of the optimum substrate configuration for the SERS process. When combined with experimental observations of the dependence of Raman enhancement on changing ionic strength, the modelling results have allowed a novel estimate of the size of the chemical enhancement mechanism to be produced.
Mehtiö, T; Rinne, M; Nyholm, L; Mäntysaari, P; Sairanen, A; Mäntysaari, E A; Pitkänen, T; Lidauer, M H
2016-04-01
This study was designed to obtain information on prediction of diet digestibility from near-infrared reflectance spectroscopy (NIRS) scans of faecal spot samples from dairy cows at different stages of lactation and to develop a faecal sampling protocol. NIRS was used to predict diet organic matter digestibility (OMD) and indigestible neutral detergent fibre content (iNDF) from faecal samples, and dry matter digestibility (DMD) using iNDF in feed and faecal samples as an internal marker. Acid-insoluble ash (AIA) as an internal digestibility marker was used as a reference method to evaluate the reliability of NIRS predictions. Feed and composite faecal samples were collected from 44 cows at approximately 50, 150 and 250 days in milk (DIM). The estimated standard deviation for cow-specific organic matter digestibility analysed by AIA was 12.3 g/kg, which is small considering that the average was 724 g/kg. The phenotypic correlation between direct faecal OMD prediction by NIRS and OMD by AIA over the lactation was 0.51. The low repeatability and small variability estimates for direct OMD predictions by NIRS were not accurate enough to quantify small differences in OMD between cows. In contrast to OMD, the repeatability estimates for DMD by iNDF and especially for direct faecal iNDF predictions were 0.32 and 0.46, respectively, indicating that development of NIRS predictions for cow-specific digestibility is possible. A data subset of 20 cows with daily individual faecal samples was used to develop an on-farm sampling protocol. Based on the assessment of correlations between individual sample combinations and composite samples, as well as repeatability estimates for individual sample combinations, we found that collecting up to three individual samples yields a representative composite sample. Collection of samples from all the cows of a herd every third month might be a good choice, because it would yield better accuracy. © 2015 Blackwell Verlag GmbH.
NASA Astrophysics Data System (ADS)
Mathew, J.; Moat, R. J.; Paddea, S.; Francis, J. A.; Fitzpatrick, M. E.; Bouchard, P. J.
2017-12-01
Economic and safe management of nuclear plant components relies on accurate prediction of welding-induced residual stresses. In this study, the distribution of residual stress through the thickness of austenitic stainless steel welds has been measured using neutron diffraction and the contour method. The measured data are used to validate residual stress profiles predicted by an artificial neural network (ANN) approach as a function of welding heat input and geometry. Maximum tensile stresses with magnitude close to the yield strength of the material were observed near the weld cap in both the axial and hoop directions of the welds. Significant scatter of more than 200 MPa was found within the residual stress measurements at the weld center line and is associated with the geometry and welding conditions of individual weld passes. The ANN prediction is developed in an attempt to effectively quantify this phenomenon of 'innate scatter' and to learn the non-linear patterns in the weld residual stress profiles. Furthermore, the efficacy of the ANN method for defining through-thickness residual stress profiles in welds for application in structural integrity assessments is evaluated.
Fang, Yilin; Wilkins, Michael J; Yabusaki, Steven B; Lipton, Mary S; Long, Philip E
2012-12-01
Accurately predicting the interactions between microbial metabolism and the physical subsurface environment is necessary to enhance subsurface energy development, soil and groundwater cleanup, and carbon management. This study was an initial attempt to confirm the metabolic functional roles within an in silico model using environmental proteomic data collected during field experiments. Shotgun global proteomics data collected during a subsurface biostimulation experiment were used to validate a genome-scale metabolic model of Geobacter metallireducens-specifically, the ability of the metabolic model to predict metal reduction, biomass yield, and growth rate under dynamic field conditions. The constraint-based in silico model of G. metallireducens relates an annotated genome sequence to the physiological functions with 697 reactions controlled by 747 enzyme-coding genes. Proteomic analysis showed that 180 of the 637 G. metallireducens proteins detected during the 2008 experiment were associated with specific metabolic reactions in the in silico model. When the field-calibrated Fe(III) terminal electron acceptor process reaction in a reactive transport model for the field experiments was replaced with the genome-scale model, the model predicted that the largest metabolic fluxes through the in silico model reactions generally correspond to the highest abundances of proteins that catalyze those reactions. Central metabolism predicted by the model agrees well with protein abundance profiles inferred from proteomic analysis. Model discrepancies with the proteomic data, such as the relatively low abundances of proteins associated with amino acid transport and metabolism, revealed pathways or flux constraints in the in silico model that could be updated to more accurately predict metabolic processes that occur in the subsurface environment.
Wenger, Yvan; Galliot, Brigitte
2013-03-25
Evolutionary studies benefit from deep sequencing technologies that generate genomic and transcriptomic sequences from a variety of organisms. Genome sequencing and RNAseq have complementary strengths. In this study, we present the assembly of the most complete Hydra transcriptome to date along with a comparative analysis of the specific features of RNAseq and genome-predicted transcriptomes currently available in the freshwater hydrozoan Hydra vulgaris. To produce an accurate and extensive Hydra transcriptome, we combined Illumina and 454 Titanium reads, giving the primacy to Illumina over 454 reads to correct homopolymer errors. This strategy yielded an RNAseq transcriptome that contains 48'909 unique sequences including splice variants, representing approximately 24'450 distinct genes. Comparative analysis to the available genome-predicted transcriptomes identified 10'597 novel Hydra transcripts that encode 529 evolutionarily-conserved proteins. The annotation of 170 human orthologs points to critical functions in protein biosynthesis, FGF and TOR signaling, vesicle transport, immunity, cell cycle regulation, cell death, mitochondrial metabolism, transcription and chromatin regulation. However, a majority of these novel transcripts encode short ORFs, at least 767 of them corresponding to pseudogenes. This RNAseq transcriptome also lacks 11'270 predicted transcripts that correspond either to silent genes or to genes expressed below the detection level of this study. We established a simple and powerful strategy to combine Illumina and 454 reads, and we produced, with genome assistance, an extensive and accurate Hydra transcriptome. The comparative analysis of the RNAseq transcriptome with genome-predicted transcriptomes led to the identification of large populations of novel as well as missing transcripts that might reflect Hydra-specific evolutionary events.
Computational Material Processing in Microgravity
NASA Technical Reports Server (NTRS)
2005-01-01
Working with Professor David Matthiesen at Case Western Reserve University (CWRU), a computer model of the DPIMS (Diffusion Processes in Molten Semiconductors) space experiment was developed that is able to predict the thermal field, flow field and concentration profile within a molten germanium capillary under both ground-based and microgravity conditions. These models are coupled with a novel nonlinear statistical methodology for estimating the diffusion coefficient from measured concentration values after a given time that yields a more accurate estimate than traditional methods. This code was integrated into a web-based application that has become a standard tool used by engineers in the Materials Science Department at CWRU.
A new approach for solving the three-dimensional steady Euler equations. I - General theory
NASA Technical Reports Server (NTRS)
Chang, S.-C.; Adamczyk, J. J.
1986-01-01
The present iterative procedure combines the Clebsch potentials and the Munk-Prim (1947) substitution principle with an extension of a semidirect Cauchy-Riemann solver to three dimensions, in order to solve steady, inviscid three-dimensional rotational flow problems in either subsonic or incompressible flow regimes. This solution procedure can be used, upon discretization, to obtain inviscid subsonic flow solutions in a 180-deg turning channel. In addition to accurately predicting the behavior of weak secondary flows, the algorithm can generate solutions for strong secondary flows and will yield acceptable flow solutions after only 10-20 outer loop iterations.
NASA Astrophysics Data System (ADS)
Pearson, E.; Smith, M. W.; Klaar, M. J.; Brown, L. E.
2017-09-01
High resolution topographic surveys such as those provided by Structure-from-Motion (SfM) contain a wealth of information that is not always exploited in the generation of Digital Elevation Models (DEMs). In particular, several authors have related sub-metre scale topographic variability (or 'surface roughness') to sediment grain size by deriving empirical relationships between the two. In fluvial applications, such relationships permit rapid analysis of the spatial distribution of grain size over entire river reaches, providing improved data to drive three-dimensional hydraulic models, allowing rapid geomorphic monitoring of sub-reach river restoration projects, and enabling more robust characterisation of riverbed habitats. However, comparison of previously published roughness-grain-size relationships shows substantial variability between field sites. Using a combination of over 300 laboratory and field-based SfM surveys, we demonstrate the influence of inherent survey error, irregularity of natural gravels, particle shape, grain packing structure, sorting, and form roughness on roughness-grain-size relationships. Roughness analysis from SfM datasets can accurately predict the diameter of smooth hemispheres, though natural, irregular gravels result in a higher roughness value for a given diameter and different grain shapes yield different relationships. A suite of empirical relationships is presented as a decision tree which improves predictions of grain size. By accounting for differences in patch facies, large improvements in D50 prediction are possible. SfM is capable of providing accurate grain size estimates, although further refinement is needed for poorly sorted gravel patches, for which c-axis percentiles are better predicted than b-axis percentiles.
How does spatial and temporal resolution of vegetation index impact crop yield estimation?
USDA-ARS?s Scientific Manuscript database
Timely and accurate estimation of crop yield before harvest is critical for food markets and administrative planning. Remote sensing data have been used in crop yield estimation for decades. The process-based approach uses a light-use-efficiency model to estimate crop yield. Vegetation index (VI) ...
Modeling water yield response to forest cover changes in northern Minnesota
S.C. Bernath; E.S. Verry; K.N. Brooks; P.F. Ffolliott
1982-01-01
A water yield model (TIMWAT) has been developed to predict changes in water yield following changes in forest cover in northern Minnesota. Two versions of the model exist; one predicts changes in water yield as a function of gross precipitation and time after clearcutting. The second version predicts changes in water yield due to changes in above-ground biomass...
NASA Astrophysics Data System (ADS)
Lu, Y.
2017-12-01
Winter wheat is a staple crop for global food security, and is the dominant vegetation cover for a significant fraction of earth's croplands. As such, it plays an important role in soil carbon balance and land-atmosphere interactions in these key regions. Accurate simulation of winter wheat growth is not only crucial for future yield prediction under a changing climate, but also for understanding the energy and water cycles of winter-wheat-dominated regions. A winter wheat growth model has been developed in the Community Land Model 4.5 (CLM4.5), but its responses to irrigation and nitrogen fertilization have not been validated. In this study, I will validate winter wheat growth response to irrigation and nitrogen fertilization at five winter wheat field sites (TXLU, KSMA, NESA, NDMA, and ABLE) in North America, which were originally designed to understand winter wheat response to nitrogen fertilization and water treatments (4 nitrogen levels and 3 irrigation regimes). I also plan to further update the linkages between winter wheat yield and cold hazards. The previous cold damage function only indirectly affects yield through a reduction in leaf area index (LAI) and hence photosynthesis; such an approach could sometimes produce an unwanted higher yield when the reduced LAI saves more nutrient for the grain-fill stage.
Antonios, Tarek F T; Nama, Vivek; Wang, Duolao; Manyonda, Isaac T
2013-09-01
Preeclampsia is a major cause of maternal and neonatal mortality and morbidity. The incidence of preeclampsia seems to be rising because of increased prevalence of predisposing disorders, such as essential hypertension, diabetes, and obesity, and there is increasing evidence to suggest widespread microcirculatory abnormalities before the onset of preeclampsia. We hypothesized that quantifying capillary rarefaction could be helpful in the clinical prediction of preeclampsia. We measured skin capillary density according to a well-validated protocol at 5 consecutive predetermined visits in 322 consecutive white women, of whom 16 developed preeclampsia. We found that structural capillary rarefaction at 20-24 weeks of gestation yielded a sensitivity of 0.87 with a specificity of 0.50 at the cutoff of 2 capillaries/field, with an area under the receiver operating characteristic curve (AUC) of 0.70, whereas capillary rarefaction at 27-32 weeks of gestation yielded a sensitivity of 0.75 and a higher specificity of 0.77 at the cutoff of 8 capillaries/field, with an AUC of 0.82. Combining capillary rarefaction with the uterine artery Doppler pulsatility index increased the sensitivity and specificity of the prediction. Multivariable analysis shows that the odds of preeclampsia are increased in women with a previous history of preeclampsia or chronic hypertension and in those with an increased uterine artery Doppler pulsatility index, but the most powerful and independent predictor of preeclampsia was capillary rarefaction at 27-32 weeks. Quantifying structural rarefaction of skin capillaries in pregnancy is a potentially useful clinical marker for the prediction of preeclampsia.
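The sensitivity/specificity figures above come from the standard screening-test arithmetic: classify a subject as "at risk" when her capillary rarefaction meets the cutoff, then tally against observed outcomes. A minimal sketch with a synthetic toy cohort (not the 322-woman dataset) is:

```python
# Illustrative screening-test arithmetic; the cohort below is synthetic.
def sens_spec(rarefaction, outcome, cutoff):
    """rarefaction: capillaries/field lost; outcome: True if preeclampsia.
    Predict positive when rarefaction >= cutoff."""
    tp = sum(1 for r, o in zip(rarefaction, outcome) if r >= cutoff and o)
    fn = sum(1 for r, o in zip(rarefaction, outcome) if r < cutoff and o)
    tn = sum(1 for r, o in zip(rarefaction, outcome) if r < cutoff and not o)
    fp = sum(1 for r, o in zip(rarefaction, outcome) if r >= cutoff and not o)
    return tp / (tp + fn), tn / (tn + fp)

# Toy cohort: 4 cases followed by 6 controls.
rarefaction = [9, 10, 6, 12, 7, 3, 8, 5, 2, 6]
outcome     = [True, True, True, True,
               False, False, False, False, False, False]
sensitivity, specificity = sens_spec(rarefaction, outcome, cutoff=8)
```

Sweeping the cutoff and plotting sensitivity against (1 - specificity) traces the receiver operating characteristic curve whose area (AUC) the abstract reports.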
Congdon, B S; Coutts, B A; Jones, R A C; Renton, M
2017-09-15
An empirical model was developed to forecast Pea seed-borne mosaic virus (PSbMV) incidence at a critical phase of the annual growing season to predict yield loss in field pea crops sown under Mediterranean-type conditions. The model uses pre-growing season rainfall to calculate an index of aphid abundance in early-August which, in combination with PSbMV infection level in seed sown, is used to forecast virus crop incidence. Using predicted PSbMV crop incidence in early-August and day of sowing, PSbMV transmission from harvested seed was also predicted, albeit less accurately. The model was developed so it provides forecasts before sowing to allow sufficient time to implement control recommendations, such as having representative seed samples tested for PSbMV transmission rate to seedlings, obtaining seed with minimal PSbMV infection or of a PSbMV-resistant cultivar, and implementation of cultural management strategies. The model provides a disease forecast risk indication, taking into account predicted percentage yield loss to PSbMV infection and economic factors involved in field pea production. This disease risk forecast delivers location-specific recommendations regarding PSbMV management to end-users. These recommendations will be delivered directly to end-users via SMS alerts with links to web support that provide information on PSbMV management options. This modelling and decision support system approach would likely be suitable for use in other world regions where field pea is grown in similar Mediterranean-type environments. Copyright © 2017 Elsevier B.V. All rights reserved.
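The forecasting chain described (pre-season rainfall, then an early-August aphid index, then predicted virus incidence, then a yield-loss risk recommendation) can be sketched as a pipeline of simple empirical links. All functional forms and coefficients below are invented for illustration; the paper's fitted model is not reproduced here.

```python
# Purely hypothetical sketch of the forecasting chain; every coefficient
# and functional form is an assumption, not the published model.
import math

def aphid_index(pre_season_rain_mm, a=0.02, b=-3.0):
    # Hypothetical link: more pre-season rain, more host plants, more aphids.
    return max(0.0, a * pre_season_rain_mm + b)

def incidence(aphid_idx, seed_infection_pct, k=0.15):
    # Hypothetical saturating link combining vector pressure and seed source.
    z = k * aphid_idx * seed_infection_pct
    return 1.0 - math.exp(-z)          # predicted fraction of plants infected

def risk_category(incid, yield_loss_per_incidence=0.6):
    # Hypothetical economic threshold on predicted fractional yield loss.
    loss = incid * yield_loss_per_incidence
    return "high" if loss > 0.10 else "low"

cat = risk_category(incidence(aphid_index(400.0), seed_infection_pct=1.0))
```

The point of the structure, as in the abstract, is that every input (rainfall, seed test result, sowing date) is available before sowing, so the recommendation can be delivered in time to act on it.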
Turning maneuvers in sharks: Predicting body curvature from axial morphology.
Porter, Marianne E; Roque, Cassandra M; Long, John H
2009-08-01
Given the diversity of vertebral morphologies among fishes, it is tempting to propose causal links between axial morphology and body curvature. We propose that the shape and size of the vertebrae, intervertebral joints, and the body will more accurately predict differences in body curvature during swimming than a single meristic such as total vertebral number alone. We examined the correlation between morphological features and maximum body curvature seen during routine turns in five species of shark: Triakis semifasciata, Heterodontus francisci, Chiloscyllium plagiosum, Chiloscyllium punctatum, and Hemiscyllium ocellatum. We quantified overall body curvature using three different metrics. From a separate group of size-matched individuals, we measured 16 morphological features from precaudal vertebrae and the body. As predicted, a larger pool of morphological features yielded a more robust prediction of maximal body curvature than vertebral number alone. Stepwise linear regression showed that up to 11 features were significant predictors of the three measures of body curvature, yielding highly significant multiple regressions with r² values of 0.523, 0.537, and 0.584. The second moment of area of the centrum was always the best predictor, followed by either centrum length or transverse height. Ranking as the fifth most important variable in three different models, the body's total length, fineness ratio, and width were the most important non-vertebral morphologies. Without considering the effects of muscle activity, these correlations suggest a dominant role for the vertebral column in providing the passive mechanical properties of the body that control, in part, body curvature during swimming. © 2009 Wiley-Liss, Inc.
MISSE 2 PEACE Polymers Experiment Atomic Oxygen Erosion Yield Error Analysis
NASA Technical Reports Server (NTRS)
McCarthy, Catherine E.; Banks, Bruce A.; de Groh, Kim K.
2010-01-01
Atomic oxygen erosion of polymers in low Earth orbit (LEO) poses a serious threat to spacecraft performance and durability. To address this, 40 different polymer samples and a sample of pyrolytic graphite, collectively called the PEACE (Polymer Erosion and Contamination Experiment) Polymers, were exposed to the LEO space environment on the exterior of the International Space Station (ISS) for nearly 4 years as part of the Materials International Space Station Experiment 1 & 2 (MISSE 1 & 2). The purpose of the PEACE Polymers experiment was to obtain accurate mass loss measurements in space to combine with ground measurements in order to accurately calculate the atomic oxygen erosion yields of a wide variety of polymeric materials exposed to the LEO space environment for a long period of time. Error calculations were performed in order to determine the accuracy of the mass measurements and therefore of the erosion yield values. The standard deviation, or error, of each factor was incorporated into the fractional uncertainty of the erosion yield for each of three different situations, depending on the post-flight weighing procedure. The resulting error calculations showed the erosion yield values to be very accurate, with an average error of 3.30 percent.
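The error analysis described above combines the standard deviations of independent measurement factors into a fractional uncertainty of the erosion yield. A minimal sketch of that error-combination step follows, using the standard erosion-yield relation Ey = ΔM / (A·ρ·F) (mass loss over exposed area, density, and atomic oxygen fluence); all numeric values are hypothetical, not the experiment's measurements:

```python
import math

def erosion_yield(mass_loss_g, area_cm2, density_g_cm3, fluence_atoms_cm2):
    """Erosion yield Ey = dM / (A * rho * F), in cm^3/atom."""
    return mass_loss_g / (area_cm2 * density_g_cm3 * fluence_atoms_cm2)

def fractional_uncertainty(*frac_errors):
    """Combine independent fractional errors in quadrature:
    sqrt(sum of squared component fractional errors)."""
    return math.sqrt(sum(e * e for e in frac_errors))

# Hypothetical Kapton-like sample (illustrative numbers only):
ey = erosion_yield(0.05, 4.0, 1.42, 8.43e21)
# Hypothetical fractional errors of mass, area, density, and fluence:
total = fractional_uncertainty(0.01, 0.02, 0.005, 0.02)
print(f"Ey = {ey:.2e} cm^3/atom, fractional error = {total * 100:.2f}%")
```

The quadrature sum assumes the component errors are independent, which is the usual justification for reporting a single percent error on the final yield.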
Flowfield Comparisons from Three Navier-Stokes Solvers for an Axisymmetric Separate Flow Jet
NASA Technical Reports Server (NTRS)
Koch, L. Danielle; Bridges, James; Khavaran, Abbas
2002-01-01
To meet new noise reduction goals, many concepts to enhance mixing in the exhaust jets of turbofan engines are being studied. Accurate steady state flowfield predictions from state-of-the-art computational fluid dynamics (CFD) solvers are needed as input to the latest noise prediction codes. The main intent of this paper was to ascertain that similar Navier-Stokes solvers run at different sites would yield comparable results for an axisymmetric two-stream nozzle case. Predictions from the WIND and the NPARC codes are compared to previously reported experimental data and results from the CRAFT Navier-Stokes solver. Similar k-epsilon turbulence models were employed in each solver, and identical computational grids were used. Agreement between experimental data and predictions from each code was generally good for mean values. All three codes underpredict the maximum value of turbulent kinetic energy. The predicted locations of the maximum turbulent kinetic energy were farther downstream than seen in the data. A grid study was conducted using the WIND code, and comments about convergence criteria and grid requirements for CFD solutions to be used as input for noise prediction computations are given. Additionally, noise predictions from the MGBK code, using the CFD results from the CRAFT code, NPARC, and WIND as input are compared to data.
Compound Structure-Independent Activity Prediction in High-Dimensional Target Space.
Balfer, Jenny; Hu, Ye; Bajorath, Jürgen
2014-08-01
Profiling of compound libraries against arrays of targets has become an important approach in pharmaceutical research. The prediction of multi-target compound activities also represents an attractive task for machine learning with potential for drug discovery applications. Herein, we have explored activity prediction in high-dimensional target space. Different types of models were derived to predict multi-target activities. The models included naïve Bayesian (NB) and support vector machine (SVM) classifiers based upon compound structure information and NB models derived on the basis of activity profiles, without considering compound structure. Because the latter approach can be applied to incomplete training data and principally depends on the feature independence assumption, SVM modeling was not applicable in this case. Furthermore, iterative hybrid NB models making use of both activity profiles and compound structure information were built. In high-dimensional target space, NB models utilizing activity profile data were found to yield more accurate activity predictions than structure-based NB and SVM models or hybrid models. An in-depth analysis of activity profile-based models revealed the presence of correlation effects across different targets and rationalized prediction accuracy. Taken together, the results indicate that activity profile information can be effectively used to predict the activity of test compounds against novel targets. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
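The activity-profile NB models above treat each known target's activity annotation as a binary feature and predict activity against a further target without any structural input. A minimal Bernoulli naive Bayes sketch of that idea follows; the data, feature layout, and smoothing choice are illustrative assumptions, not the study's models:

```python
import math

def train_nb(profiles, labels, alpha=1.0):
    """Bernoulli naive Bayes over activity-profile features (1 = active),
    with Laplace smoothing alpha."""
    n = len(labels)
    pos = [p for p, y in zip(profiles, labels) if y]
    neg = [p for p, y in zip(profiles, labels) if not y]
    prior = (len(pos) + alpha) / (n + 2 * alpha)
    d = len(profiles[0])
    theta_pos = [(sum(p[j] for p in pos) + alpha) / (len(pos) + 2 * alpha)
                 for j in range(d)]
    theta_neg = [(sum(p[j] for p in neg) + alpha) / (len(neg) + 2 * alpha)
                 for j in range(d)]
    return prior, theta_pos, theta_neg

def predict_nb(model, x):
    """Return True if the active class has the higher log-posterior."""
    prior, tp, tn = model
    lp = math.log(prior) + sum(math.log(t if v else 1 - t) for t, v in zip(tp, x))
    ln = math.log(1 - prior) + sum(math.log(t if v else 1 - t) for t, v in zip(tn, x))
    return lp > ln

# Toy profiles over 4 reference targets; label = activity on a novel target
profiles = [[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1],
            [0, 1, 0, 1], [1, 0, 1, 0], [0, 0, 0, 1]]
labels = [1, 1, 0, 0, 1, 0]
model = train_nb(profiles, labels)
print(predict_nb(model, [1, 1, 0, 0]), predict_nb(model, [0, 0, 1, 1]))
```

Because each feature enters only through per-class frequencies, this formulation tolerates the feature-independence assumption that the abstract notes makes NB (but not SVM) applicable to incomplete activity data.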
Van Dongen, Hans P. A.; Mott, Christopher G.; Huang, Jen-Kuang; Mollicone, Daniel J.; McKenzie, Frederic D.; Dinges, David F.
2007-01-01
Current biomathematical models of fatigue and performance do not accurately predict cognitive performance for individuals with a priori unknown degrees of trait vulnerability to sleep loss, do not predict performance reliably when initial conditions are uncertain, and do not yield statistically valid estimates of prediction accuracy. These limitations diminish their usefulness for predicting the performance of individuals in operational environments. To overcome these 3 limitations, a novel modeling approach was developed, based on the expansion of a statistical technique called Bayesian forecasting. The expanded Bayesian forecasting procedure was implemented in the two-process model of sleep regulation, which has been used to predict performance on the basis of the combination of a sleep homeostatic process and a circadian process. Employing the two-process model with the Bayesian forecasting procedure to predict performance for individual subjects in the face of unknown traits and uncertain states entailed subject-specific optimization of 3 trait parameters (homeostatic build-up rate, circadian amplitude, and basal performance level) and 2 initial state parameters (initial homeostatic state and circadian phase angle). Prior information about the distribution of the trait parameters in the population at large was extracted from psychomotor vigilance test (PVT) performance measurements in 10 subjects who had participated in a laboratory experiment with 88 h of total sleep deprivation. The PVT performance data of 3 additional subjects in this experiment were set aside beforehand for use in prospective computer simulations. The simulations involved updating the subject-specific model parameters every time the next performance measurement became available, and then predicting performance 24 h ahead. 
Comparison of the predictions to the subjects' actual data revealed that as more data became available for the individuals at hand, the performance predictions became increasingly more accurate and had progressively smaller 95% confidence intervals, as the model parameters converged efficiently to those that best characterized each individual. Even when more challenging simulations were run (mimicking a change in the initial homeostatic state; simulating the data to be sparse), the predictions were still considerably more accurate than would have been achieved by the two-process model alone. Although the work described here is still limited to periods of consolidated wakefulness with stable circadian rhythms, the results obtained thus far indicate that the Bayesian forecasting procedure can successfully overcome some of the major outstanding challenges for biomathematical prediction of cognitive performance in operational settings. Citation: Van Dongen HPA; Mott CG; Huang JK; Mollicone DJ; McKenzie FD; Dinges DF. Optimization of biomathematical model predictions for cognitive performance impairment in individuals: accounting for unknown traits and uncertain states in homeostatic and circadian processes. SLEEP 2007;30(9):1129-1143. PMID:17910385
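The two-process structure referenced above (a saturating homeostatic pressure plus a 24-h circadian oscillation, modulated by the subject-specific trait and state parameters listed in the abstract) can be sketched as follows. The functional forms and all parameter values here are illustrative assumptions, not the study's calibrated model, and the Bayesian updating of the parameters is omitted:

```python
import math

def predict_impairment(hours_awake, clock_hour, *, build_up_rate=0.05,
                       circadian_amplitude=5.0, basal_level=2.0,
                       phase_angle_h=0.0, homeostat_0=0.1):
    """Toy two-process prediction: impairment rises with the saturating
    homeostatic pressure S and oscillates with circadian phase C."""
    # Process S: exponential saturation toward 1 during wakefulness,
    # starting from the (uncertain) initial homeostatic state
    s = 1.0 - (1.0 - homeostat_0) * math.exp(-build_up_rate * hours_awake)
    # Process C: 24-h sinusoid with a subject-specific phase angle
    c = math.cos(2.0 * math.pi * (clock_hour - phase_angle_h) / 24.0)
    return basal_level + 20.0 * s - circadian_amplitude * c

# Impairment should grow as wakefulness is extended at a fixed clock time
print(predict_impairment(2, 10.0), predict_impairment(40, 10.0))
```

In the study's procedure, the three trait parameters and two state parameters of a model like this are re-estimated each time a new performance measurement arrives, which is what shrinks the 95% prediction intervals as data accumulate.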
Zhang, Zhongrui; Zhong, Quanlin; Niklas, Karl J; Cai, Liang; Yang, Yusheng; Cheng, Dongliang
2016-08-24
Metabolic scaling theory (MST) posits that the scaling exponents among plant height H, diameter D, and biomass M will covary across phyletically diverse species. However, the relationships between scaling exponents and normalization constants remain unclear. Therefore, we developed a predictive model for the covariation of H, D, and stem volume V scaling relationships and used data from Chinese fir (Cunninghamia lanceolata) in Jiangxi province, China to test it. As predicted by the model and supported by the data, normalization constants are positively correlated with their associated scaling exponents for D vs. V and H vs. V, whereas normalization constants are negatively correlated with the scaling exponents of H vs. D. The prediction model also yielded reliable estimations of V (mean absolute percentage error = 10.5 ± 0.32 SE across 12 model calibrated sites). These results (1) support a totally new covariation scaling model, (2) indicate that differences in stem volume scaling relationships at the intra-specific level are driven by anatomical or ecophysiological responses to site quality and/or management practices, and (3) provide an accurate non-destructive method for predicting Chinese fir stem volume.
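Scaling relationships of the kind analyzed above (e.g., V ∝ b·D^a) are conventionally estimated as straight-line fits in log-log space, where the slope is the scaling exponent and the intercept gives the normalization constant. A minimal sketch on synthetic data follows; the exponent and noise level are hypothetical, not the Chinese fir estimates:

```python
import math
import random

def fit_power_law(x, y):
    """Least-squares fit of log y = log b + a log x; returns (a, b)."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx = sum(lx) / n
    my = sum(ly) / n
    denom = sum((u - mx) ** 2 for u in lx)
    a = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / denom
    b = math.exp(my - a * mx)
    return a, b

random.seed(1)
d = [random.uniform(5, 50) for _ in range(200)]                 # stem diameters, cm
v = [0.02 * x ** 2.4 * random.lognormvariate(0, 0.05) for x in d]  # synthetic volumes
a, b = fit_power_law(d, v)
print(f"estimated exponent a = {a:.2f}, normalization b = {b:.3f}")
```

Fitting such pairs site by site is one way the reported covariation between exponents and normalization constants across sites can be examined.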
NASA Astrophysics Data System (ADS)
Sun, Ruochen; Yuan, Huiling; Liu, Xiaoli
2017-11-01
The heteroscedasticity treatment in residual error models directly impacts the model calibration and prediction uncertainty estimation. This study compares three methods to deal with the heteroscedasticity, including the explicit linear modeling (LM) method and nonlinear modeling (NL) method using hyperbolic tangent function, as well as the implicit Box-Cox transformation (BC). Then a combined approach (CA) combining the advantages of both LM and BC methods has been proposed. In conjunction with the first order autoregressive model and the skew exponential power (SEP) distribution, four residual error models are generated, namely LM-SEP, NL-SEP, BC-SEP and CA-SEP, and their corresponding likelihood functions are applied to the Variable Infiltration Capacity (VIC) hydrologic model over the Huaihe River basin, China. Results show that the LM-SEP yields the poorest streamflow predictions with the widest uncertainty band and unrealistic negative flows. The NL and BC methods can better deal with the heteroscedasticity and hence their corresponding predictive performances are improved, yet the negative flows cannot be avoided. The CA-SEP produces the most accurate predictions with the highest reliability and effectively avoids the negative flows, because the CA approach is capable of addressing the complicated heteroscedasticity over the study basin.
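The implicit BC treatment above stabilizes heteroscedastic streamflow residuals by transforming the flows before the residual model is applied. A minimal sketch of the Box-Cox transform itself follows (the λ value is illustrative; the study's residual-model machinery around it is not reproduced):

```python
import math

def box_cox(y, lam):
    """Box-Cox transform z = (y^lam - 1) / lam, with the log limit at lam = 0."""
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1.0) / lam

# Residuals are then modeled on the transformed flows, where their spread
# is approximately constant across low and high flows.
flows = [1.0, 10.0, 100.0]
print([round(box_cox(q, 0.2), 3) for q in flows])
```

Because the transform compresses large flows much more than small ones, a residual of fixed size in transformed space corresponds to a flow-dependent error in real space, which is how the heteroscedasticity is absorbed.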
NASA Astrophysics Data System (ADS)
Taverniers, Søren; Tartakovsky, Daniel M.
2017-11-01
Predictions of the total energy deposited into a brain tumor through X-ray irradiation are notoriously error-prone. We investigate how this predictive uncertainty is affected by uncertainty in both the location of the region occupied by a dose-enhancing iodinated contrast agent and the agent's concentration. This is done within the probabilistic framework in which these uncertain parameters are modeled as random variables. We employ the stochastic collocation (SC) method to estimate statistical moments of the deposited energy in terms of statistical moments of the random inputs, and the global sensitivity analysis (GSA) to quantify the relative importance of uncertainty in these parameters on the overall predictive uncertainty. A nonlinear radiation-diffusion equation dramatically magnifies the coefficient of variation of the uncertain parameters, yielding a large coefficient of variation for the predicted energy deposition. This demonstrates that accurate prediction of the energy deposition requires a proper treatment of even small parametric uncertainty. Our analysis also reveals that SC outperforms standard Monte Carlo, but its relative efficiency decreases as the number of uncertain parameters increases from one to three. A robust GSA ameliorates this problem by reducing this number.
Kostal, Jakub; Voutchkova-Kostal, Adelina
2016-01-19
Using computer models to accurately predict toxicity outcomes is considered to be a major challenge. However, state-of-the-art computational chemistry techniques can now be incorporated in predictive models, supported by advances in mechanistic toxicology and the exponential growth of computing resources witnessed over the past decade. The CADRE (Computer-Aided Discovery and REdesign) platform relies on quantum-mechanical modeling of molecular interactions that represent key biochemical triggers in toxicity pathways. Here, we present an external validation exercise for CADRE-SS, a variant developed to predict the skin sensitization potential of commercial chemicals. CADRE-SS is a hybrid model that evaluates skin permeability using Monte Carlo simulations, assigns reactive centers in a molecule and possible biotransformations via expert rules, and determines reactivity with skin proteins via quantum-mechanical modeling. The results were promising with an overall very good concordance of 93% between experimental and predicted values. Comparison to performance metrics yielded by other tools available for this endpoint suggests that CADRE-SS offers distinct advantages for first-round screenings of chemicals and could be used as an in silico alternative to animal tests where permissible by legislative programs.
NASA Astrophysics Data System (ADS)
Papadavid, G.; Hadjimitsis, D.
2014-08-01
Developments in remote sensing techniques have provided the opportunity to optimize yields in agricultural practice and, moreover, to predict the forthcoming yield. Yield prediction plays a vital role in agricultural policy and provides useful data to policy makers. In this context, crop and soil parameters, along with the NDVI index, which are valuable sources of information, were analyzed statistically to test (a) whether Durum wheat yield can be predicted and (b) what the actual time-window is for predicting yield in the district of Paphos, where Durum wheat is the basic cultivation and supports the rural economy of the area. Fifteen plots cultivated with Durum wheat by the Agricultural Research Institute of Cyprus for research purposes in the area of interest were observed for three years to derive the necessary data. Statistical and remote sensing techniques were then applied to derive and map a model that can predict the yield of Durum wheat in this area. Indeed, the semi-empirical model developed for this purpose, with a very high correlation coefficient (R2 = 0.886), has shown in practice that it can predict yields very well. Student's t-test revealed no statistically significant difference between predicted and actual yield values. The model can and will be further elaborated with more parameters and applied to other crops in the near future.
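Semi-empirical yield models of this kind are typically regressions of observed yield on NDVI around a critical growth stage. A minimal ordinary-least-squares sketch follows; the plot data and resulting coefficients are hypothetical, not the paper's calibrated model:

```python
def linreg(x, y):
    """Ordinary least squares y = a + b*x; returns (a, b, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((u - mx) * (v - my) for u, v in zip(x, y))
    sxx = sum((u - mx) ** 2 for u in x)
    syy = sum((v - my) ** 2 for v in y)
    b = sxy / sxx
    a = my - b * mx
    r2 = sxy * sxy / (sxx * syy)
    return a, b, r2

# Hypothetical plot data: peak-season NDVI vs. Durum wheat yield (t/ha)
ndvi = [0.35, 0.42, 0.48, 0.55, 0.61, 0.68]
yield_t = [1.9, 2.4, 2.6, 3.1, 3.4, 3.9]
a, b, r2 = linreg(ndvi, yield_t)
print(f"yield = {a:.2f} + {b:.2f} * NDVI, R^2 = {r2:.3f}")
```

The time-window question in the abstract amounts to asking at which acquisition date such a regression reaches a usefully high R².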
Biomarker Surrogates Do Not Accurately Predict Sputum Eosinophils and Neutrophils in Asthma
Hastie, Annette T.; Moore, Wendy C.; Li, Huashi; Rector, Brian M.; Ortega, Victor E.; Pascual, Rodolfo M.; Peters, Stephen P.; Meyers, Deborah A.; Bleecker, Eugene R.
2013-01-01
Background Sputum eosinophils (Eos) are a strong predictor of airway inflammation, exacerbations, and aid asthma management, whereas sputum neutrophils (Neu) indicate a different severe asthma phenotype, potentially less responsive to TH2-targeted therapy. Variables such as blood Eos, total IgE, fractional exhaled nitric oxide (FeNO) or FEV1% predicted, may predict airway Eos, while age, FEV1%predicted, or blood Neu may predict sputum Neu. Availability and ease of measurement are useful characteristics, but accuracy in predicting airway Eos and Neu, individually or combined, is not established. Objectives To determine whether blood Eos, FeNO, and IgE accurately predict sputum eosinophils, and age, FEV1% predicted, and blood Neu accurately predict sputum neutrophils (Neu). Methods Subjects in the Wake Forest Severe Asthma Research Program (N=328) were characterized by blood and sputum cells, healthcare utilization, lung function, FeNO, and IgE. Multiple analytical techniques were utilized. Results Despite significant association with sputum Eos, blood Eos, FeNO and total IgE did not accurately predict sputum Eos, and combinations of these variables failed to improve prediction. Age, FEV1%predicted and blood Neu were similarly unsatisfactory for prediction of sputum Neu. Factor analysis and stepwise selection found FeNO, IgE and FEV1% predicted, but not blood Eos, correctly predicted 69% of sputum Eos
Samad, Manar D; Ulloa, Alvaro; Wehner, Gregory J; Jing, Linyuan; Hartzel, Dustin; Good, Christopher W; Williams, Brent A; Haggerty, Christopher M; Fornwalt, Brandon K
2018-06-09
The goal of this study was to use machine learning to more accurately predict survival after echocardiography. Predicting patient outcomes (e.g., survival) following echocardiography is primarily based on ejection fraction (EF) and comorbidities. However, there may be significant predictive information within additional echocardiography-derived measurements combined with clinical electronic health record data. Mortality was studied in 171,510 unselected patients who underwent 331,317 echocardiograms in a large regional health system. We investigated the predictive performance of nonlinear machine learning models compared with that of linear logistic regression models using 3 different inputs: 1) clinical variables, including 90 cardiovascular-relevant International Classification of Diseases, Tenth Revision, codes, and age, sex, height, weight, heart rate, blood pressures, low-density lipoprotein, high-density lipoprotein, and smoking; 2) clinical variables plus physician-reported EF; and 3) clinical variables and EF, plus 57 additional echocardiographic measurements. Missing data were imputed with a multivariate imputation by using a chained equations algorithm (MICE). We compared models versus each other and baseline clinical scoring systems by using a mean area under the curve (AUC) over 10 cross-validation folds and across 10 survival durations (6 to 60 months). Machine learning models achieved significantly higher prediction accuracy (all AUC >0.82) over common clinical risk scores (AUC = 0.61 to 0.79), with the nonlinear random forest models outperforming logistic regression (p < 0.01). The random forest model including all echocardiographic measurements yielded the highest prediction accuracy (p < 0.01 across all models and survival durations). Only 10 variables were needed to achieve 96% of the maximum prediction accuracy, with 6 of these variables being derived from echocardiography. 
Tricuspid regurgitation velocity was more predictive of survival than LVEF. In a subset of studies with complete data for the top 10 variables, multivariate imputation by chained equations yielded slightly reduced predictive accuracies (difference in AUC of 0.003) compared with the original data. Machine learning can fully utilize large combinations of disparate input variables to predict survival after echocardiography with superior accuracy. Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
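The model comparisons above are reported as areas under the curve (AUC). The AUC has a simple rank-based form: the probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative case, with ties counting half. A minimal sketch on toy data (not the study's scores):

```python
def auc(scores, labels):
    """Rank-based AUC: fraction of positive/negative pairs where the
    positive outscores the negative (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy survival-risk scores: higher score = predicted death within horizon
print(auc([0.9, 0.8, 0.4, 0.3, 0.2], [1, 1, 0, 1, 0]))
```

Averaging this quantity over cross-validation folds and survival horizons gives the kind of mean AUC the study uses to rank the models.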
POPISK: T-cell reactivity prediction using support vector machines and string kernels.
Tung, Chun-Wei; Ziehm, Matthias; Kämper, Andreas; Kohlbacher, Oliver; Ho, Shinn-Ying
2011-11-15
Accurate prediction of peptide immunogenicity and characterization of relation between peptide sequences and peptide immunogenicity will be greatly helpful for vaccine designs and understanding of the immune system. In contrast to the prediction of antigen processing and presentation pathway, the prediction of subsequent T-cell reactivity is a much harder topic. Previous studies of identifying T-cell receptor (TCR) recognition positions were based on small-scale analyses using only a few peptides and concluded different recognition positions such as positions 4, 6 and 8 of peptides with length 9. Large-scale analyses are necessary to better characterize the effect of peptide sequence variations on T-cell reactivity and design predictors of a peptide's T-cell reactivity (and thus immunogenicity). The identification and characterization of important positions influencing T-cell reactivity will provide insights into the underlying mechanism of immunogenicity. This work establishes a large dataset by collecting immunogenicity data from three major immunology databases. In order to consider the effect of MHC restriction, peptides are classified by their associated MHC alleles. Subsequently, a computational method (named POPISK) using support vector machine with a weighted degree string kernel is proposed to predict T-cell reactivity and identify important recognition positions. POPISK yields a mean 10-fold cross-validation accuracy of 68% in predicting T-cell reactivity of HLA-A2-binding peptides. POPISK is capable of predicting immunogenicity with scores that can also correctly predict the change in T-cell reactivity related to point mutations in epitopes reported in previous studies using crystal structures. Thorough analyses of the prediction results identify the important positions 4, 6, 8 and 9, and yield insights into the molecular basis for TCR recognition. 
Finally, we relate this finding to physicochemical properties and structural features of the MHC-peptide-TCR interaction. A computational method POPISK is proposed to predict immunogenicity with scores which are useful for predicting immunogenicity changes made by single-residue modifications. The web server of POPISK is freely available at http://iclab.life.nctu.edu.tw/POPISK.
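The weighted degree string kernel used by POPISK scores a pair of equal-length peptides by counting matching k-mers at corresponding positions, with shorter k-mers weighted more heavily. The sketch below uses the standard weighted-degree weighting β_k = 2(K − k + 1)/(K(K + 1)); POPISK's exact weighting and the surrounding SVM are not reproduced here:

```python
def weighted_degree_kernel(s, t, max_k=3):
    """Weighted degree string kernel: for each k-mer length k up to max_k,
    count position-wise matching k-mers, weighted by beta_k."""
    assert len(s) == len(t)
    total = 0.0
    for k in range(1, max_k + 1):
        beta = 2.0 * (max_k - k + 1) / (max_k * (max_k + 1))
        matches = sum(1 for i in range(len(s) - k + 1) if s[i:i + k] == t[i:i + k])
        total += beta * matches
    return total

p1 = "GILGFVFTL"   # a well-known HLA-A2-binding peptide
p2 = "GILGFVFTV"   # single C-terminal substitution
print(weighted_degree_kernel(p1, p1), weighted_degree_kernel(p1, p2))
```

Because every match is tied to a position, the learned SVM weights can be read back out per position, which is what makes the identification of recognition positions 4, 6, 8 and 9 possible.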
Empirical yield tables for spruce-fir cut-over lands in the Northeast
Marinus Westveld
1953-01-01
Predicting future timber yields is an unavoidable task for the forest manager who is interested in growing timber as a long-term investment. He must predict yields as a basis for formulating management plans and policies. And he must predict yields from lands that differ greatly in productivity.
A Machine Learns to Predict the Stability of Tightly Packed Planetary Systems
NASA Astrophysics Data System (ADS)
Tamayo, Daniel; Silburt, Ari; Valencia, Diana; Menou, Kristen; Ali-Dib, Mohamad; Petrovich, Cristobal; Huang, Chelsea X.; Rein, Hanno; van Laerhoven, Christa; Paradise, Adiv; Obertas, Alysa; Murray, Norman
2016-12-01
The requirement that planetary systems be dynamically stable is often used to vet new discoveries or set limits on unconstrained masses or orbital elements. This is typically carried out via computationally expensive N-body simulations. We show that characterizing the complicated and multi-dimensional stability boundary of tightly packed systems is amenable to machine-learning methods. We find that training an XGBoost machine-learning algorithm on physically motivated features yields an accurate classifier of stability in packed systems. On the stability timescale investigated (10^7 orbits), it is three orders of magnitude faster than direct N-body simulations. Optimized machine-learning classifiers for dynamical stability may thus prove useful across the discipline, e.g., to characterize the exoplanet sample discovered by the upcoming Transiting Exoplanet Survey Satellite. This proof of concept motivates investing computational resources to train algorithms capable of predicting stability over longer timescales and over broader regions of phase space.
Adoption of multivariate copulae in prognostication of economic growth by means of interest rate
NASA Astrophysics Data System (ADS)
Saputra, Dewi Tanasia; Indratno, Sapto Wahyu
2015-12-01
Inflation, at a healthy rate, is a sign of a growing economy. Nonetheless, when the inflation rate grows uncontrollably, it negatively influences economic growth. Many tackle this problem by increasing the interest rate to help protect the value of money, which is otherwise eroded by inflation. There are few, however, who study the effects of the interest rate on economic growth. The main purposes of this paper are to find how changes in the interest rate affect economic growth and to use that relationship in the prognostication of economic growth. Using an expenditure model, a linear relationship between economic growth and the interest rate is developed. The result is then used for prediction by means of a normal copula and a Vine Archimedean copula. It is shown that increasing the interest rate to tackle inflation is a poor solution, whereas implementation of copulae in predicting economic growth yields an accurate result, with no more than a 0.5% difference.
Lessons Learned from the Wide Field Camera 3 TV1 Test Campaign and Correlation Effort
NASA Technical Reports Server (NTRS)
Peabody, Hume; Stavley, Richard; Bast, William
2007-01-01
In January 2004, shortly after the Columbia accident, future servicing missions to the Hubble Space Telescope (HST) were cancelled. In response to this, further work on the Wide Field Camera 3 instrument was ceased. Given the maturity level of the design, a characterization thermal test (TV1) was completed in case the mission was re-instated or an alternate mission found on which to fly the instrument. This thermal test yielded some valuable lessons learned with respect to testing configurations and modeling/correlation practices, including:
1. Ensure that the thermal design can be tested.
2. Ensure that the model has sufficient detail for accurate predictions.
3. Ensure that the power associated with all active control devices is predicted.
4. Avoid unit changes for existing models.
This paper documents the difficulties presented when these recommendations were not followed.
A comprehensive method for preliminary design optimization of axial gas turbine stages
NASA Technical Reports Server (NTRS)
Jenkins, R. M.
1982-01-01
A method is presented that performs a rapid, reasonably accurate preliminary pitchline optimization of axial gas turbine annular flowpath geometry, as well as an initial estimate of blade profile shapes, given only a minimum of thermodynamic cycle requirements. No geometric parameters need be specified. The following preliminary design data are determined: (1) the optimum flowpath geometry, within mechanical stress limits; (2) initial estimates of cascade blade shapes; (3) predictions of expected turbine performance. The method uses an inverse calculation technique whereby blade profiles are generated by designing channels to yield a specified velocity distribution on the two walls. Velocity distributions are then used to calculate the cascade loss parameters. Calculated blade shapes are used primarily to determine whether the assumed velocity loadings are physically realistic. Model verification is accomplished by comparison of predicted turbine geometry and performance with four existing single stage turbines.
Dam, Jan S; Yavari, Nazila; Sørensen, Søren; Andersson-Engels, Stefan
2005-07-10
We present a fast and accurate method for real-time determination of the absorption coefficient, the scattering coefficient, and the anisotropy factor of thin turbid samples by using simple continuous-wave noncoherent light sources. The three optical properties are extracted from recordings of angularly resolved transmittance in addition to spatially resolved diffuse reflectance and transmittance. The applied multivariate calibration and prediction techniques are based on multiple polynomial regression in combination with a Newton-Raphson algorithm. The numerical test results based on Monte Carlo simulations showed mean prediction errors of approximately 0.5% for all three optical properties within ranges typical for biological media. Preliminary experimental results are also presented, yielding errors of approximately 5%. Thus the presented methods show substantial potential for simultaneous absorption and scattering characterization of turbid media.
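The inversion idea above can be pictured in one dimension: a calibrated polynomial forward model maps an optical property to a measured signal, and Newton-Raphson iteration inverts it to recover the property from a measurement. The forward-model coefficients below are hypothetical, not the paper's calibration:

```python
def newton_invert(forward, dforward, target, x0, tol=1e-10, max_iter=50):
    """Invert a smooth forward model f(x) = target by Newton-Raphson."""
    x = x0
    for _ in range(max_iter):
        f = forward(x) - target
        if abs(f) < tol:
            break
        x -= f / dforward(x)
    return x

# Hypothetical calibrated forward model: measured signal vs. absorption
forward = lambda mu: 0.8 - 0.5 * mu + 0.04 * mu ** 2
dforward = lambda mu: -0.5 + 0.08 * mu
mu = newton_invert(forward, dforward, target=0.55, x0=0.1)
print(round(mu, 4))
```

In the paper's setting the forward map is multivariate (three properties, three measurement geometries), but the same iterate-until-the-residual-vanishes structure applies.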
Parent-based diagnosis of ADHD is as accurate as a teacher-based diagnosis of ADHD.
Bied, Adam; Biederman, Joseph; Faraone, Stephen
2017-04-01
To review the literature evaluating the psychometric properties of parent and teacher informants relative to a gold-standard ADHD diagnosis in pediatric populations. We included studies with both a parent and a teacher informant, a gold-standard diagnosis, and diagnostic accuracy metrics. Potential confounds were evaluated. We also assessed the 'OR' and the 'AND' rules for combining informant reports. Eight articles met inclusion criteria. The diagnostic accuracy for predicting gold-standard ADHD diagnoses did not differ between parents and teachers. Sample size, sample type, participant drop-out, participant age, participant gender, geographic area of the study, and date of study publication were assessed as potential confounds. Parents and teachers both yielded moderate to good diagnostic accuracy for ADHD diagnoses. Parent reports were statistically indistinguishable from those of teachers. The predictive features of the 'OR' and 'AND' rules are useful in evaluating approaches to better integrating information from these informants.
[Effects of Chemical Fertilizers and Organic Fertilizer on Yield of Ligusticum chuanxiong Rhizome].
Liang, Qin; Chen, Xing-fu; Li, Yan; Zhang, Jun; Meng, Jie; Peng, Shi-ming
2015-10-01
To study the effects of different N, P, K and organic fertilizer (OF) applications on the yield of Ligusticum chuanxiong rhizome, in order to provide a theoretical foundation for establishing standardized cultivation techniques. Field plot experiments used Ligusticum chuanxiong planted in Pengshan as material and followed a four-factor, five-level quadratic regression rotation-orthogonal combination design. From the data obtained, a function model that accurately predicts the yield of Ligusticum chuanxiong rhizome from fertilization was established. Model analysis showed that yields were significantly influenced by the N, P, K and OF applications; among these factors, the order of yield-increasing effect was K > OF > N > P. The interactions between N and K, N and OF, and K and OF had significantly different effects on yield, and high levels of N and P, N and OF, and K and OF were conducive to improving yield. The results showed that the optimal application rates were 148.20-172.28 kg/hm2 for N, 511.92-599.40 kg/hm2 for P, 249.70-282.37 kg/hm2 for K, and 940.00-1104.00 kg/hm2 for OF. N, P, K and OF clearly affect the yield of Ligusticum chuanxiong rhizome, and K and OF in particular can significantly increase it. It is therefore suggested that an appropriately high amount of K and OF together with a moderate increase in N are favorable for cultivating Ligusticum chuanxiong.
Forecasting space weather: Can new econometric methods improve accuracy?
NASA Astrophysics Data System (ADS)
Reikard, Gordon
2011-06-01
Space weather forecasts are currently used in areas ranging from navigation and communication to electric power system operations. The relevant forecast horizons can range from as little as 24 h to several days. This paper analyzes the predictability of two major space weather measures using new time series methods, many of them derived from econometrics. The data sets are the Ap geomagnetic index and the solar radio flux at 10.7 cm. The methods tested include nonlinear regressions, neural networks, frequency domain algorithms, GARCH models (which utilize the residual variance), state transition models, and models that combine elements of several techniques. While combined models are complex, they can be programmed using modern statistical software. The data frequency is daily, and forecasting experiments are run over horizons ranging from 1 to 7 days. Two major conclusions stand out. First, the frequency domain method forecasts the Ap index more accurately than any time domain model, including both regressions and neural networks. This finding is very robust, and holds for all forecast horizons. Combining the frequency domain method with other techniques yields a further small improvement in accuracy. Second, the neural network forecasts the solar flux more accurately than any other method, although at short horizons (2 days or less) the regression and net yield similar results. The neural net does best when it includes measures of the long-term component in the data.
NASA Astrophysics Data System (ADS)
Luo, Ning; Zhao, Zhanfeng; Illman, Walter A.; Berg, Steven J.
2017-11-01
Transient hydraulic tomography (THT) is a robust method of aquifer characterization to estimate the spatial distributions (or tomograms) of both hydraulic conductivity (K) and specific storage (Ss). However, the highly parameterized nature of the geostatistical inversion approach renders it computationally intensive for large-scale investigations. In addition, geostatistics-based THT may produce overly smooth tomograms when the head data used to constrain the inversion are limited. Therefore, alternative model conceptualizations for THT need to be examined. To investigate this, we simultaneously calibrated different groundwater models with varying parameterizations and zonations using two cases of different pumping and monitoring data densities from a laboratory sandbox. Specifically, one effective parameter model, four geology-based zonation models with varying accuracy and resolution, and five geostatistical models with different prior information are calibrated. Model performance is quantitatively assessed by examining the calibration and validation results. Our study reveals that highly parameterized geostatistical models perform the best among the models compared, while the zonation model with excellent knowledge of stratigraphy also yields comparable results. When few pumping tests with sparse monitoring intervals are available, the incorporation of accurate or simplified geological information into geostatistical models reveals more details in heterogeneity and yields more robust validation results. However, results deteriorate when inaccurate geological information is incorporated. Finally, our study reveals that transient inversions are necessary to obtain reliable K and Ss estimates for making accurate predictions of transient drawdown events.
A fully convolutional network for weed mapping of unmanned aerial vehicle (UAV) imagery.
Huang, Huasheng; Deng, Jizhong; Lan, Yubin; Yang, Aqing; Deng, Xiaoling; Zhang, Lei
2018-01-01
Appropriate Site-Specific Weed Management (SSWM) is crucial to ensure crop yields. Within SSWM of a large-scale area, remote sensing is a key technology to provide accurate weed distribution information. Compared with satellite and piloted-aircraft remote sensing, an unmanned aerial vehicle (UAV) is capable of capturing high-spatial-resolution imagery, which provides more detailed information for weed mapping. The objective of this paper is to generate an accurate weed cover map based on UAV imagery. The UAV RGB imagery was collected in October 2017 over a rice field located in South China. The Fully Convolutional Network (FCN) method was proposed for weed mapping of the collected imagery. Transfer learning was used to improve generalization capability, and skip architecture was applied to increase the prediction accuracy. After that, the performance of the FCN architecture was compared with the Patch_based CNN algorithm and the Pixel_based CNN method. Experimental results showed that our FCN method outperformed the others, both in terms of accuracy and efficiency. The overall accuracy of the FCN approach was up to 0.935 and the accuracy for weed recognition was 0.883, which means that this algorithm is capable of generating accurate weed cover maps for the evaluated UAV imagery.
A Common Core for Active Conceptual Modeling for Learning from Surprises
NASA Astrophysics Data System (ADS)
Liddle, Stephen W.; Embley, David W.
The new field of active conceptual modeling for learning from surprises (ACM-L) may be helpful in preserving life, protecting property, and improving quality of life. The conceptual modeling community has developed sound theory and practices for conceptual modeling that, if properly applied, could help analysts model and predict more accurately. In particular, we need to associate more semantics with links, and we need fully reified high-level objects and relationships that have a clear, formal underlying semantics that follows a natural, ontological approach. We also need to capture more dynamic aspects in our conceptual models to more accurately model complex, dynamic systems. These concepts already exist, and the theory is well developed; what remains is to link them with the ideas needed to predict system evolution, thus enabling risk assessment and response planning. No single researcher or research group will be able to achieve this ambitious vision alone. As a starting point, we recommend that the nascent ACM-L community agree on a common core model that supports all aspects—static and dynamic—needed for active conceptual modeling in support of learning from surprises. A common core will more likely gain the traction needed to sustain the extended ACM-L research effort that will yield the advertised benefits of learning from surprises.
Genomic evaluation of regional dairy cattle breeds in single-breed and multibreed contexts.
Jónás, D; Ducrocq, V; Fritz, S; Baur, A; Sanchez, M-P; Croiseau, P
2017-02-01
An important prerequisite for high prediction accuracy in genomic prediction is the availability of a large training population, which allows accurate marker effect estimation. This requirement is not fulfilled in case of regional breeds with a limited number of breeding animals. We assessed the efficiency of the current French routine genomic evaluation procedure in four regional breeds (Abondance, Tarentaise, French Simmental and Vosgienne) as well as the potential benefits when the training populations consisting of males and females of these breeds are merged to form a multibreed training population. Genomic evaluation was 5-11% more accurate than a pedigree-based BLUP in three of the four breeds, while the numerically smallest breed showed a < 1% increase in accuracy. Multibreed genomic evaluation was beneficial for two breeds (Abondance and French Simmental) with maximum gains of 5 and 8% in correlation coefficients between yield deviations and genomic estimated breeding values, when compared to the single-breed genomic evaluation results. Inflation of genomic evaluation of young candidates was also reduced. Our results indicate that genomic selection can be effective in regional breeds as well. Here, we provide empirical evidence proving that genetic distance between breeds is only one of the factors affecting the efficiency of multibreed genomic evaluation. © 2016 Blackwell Verlag GmbH.
Dama, Elisa; Tillhon, Micol; Bertalot, Giovanni; de Santis, Francesca; Troglio, Flavia; Pessina, Simona; Passaro, Antonio; Pece, Salvatore; de Marinis, Filippo; Dell'Orto, Patrizia; Viale, Giuseppe; Spaggiari, Lorenzo; Di Fiore, Pier Paolo; Bianchi, Fabrizio; Barberis, Massimo; Vecchi, Manuela
2016-01-01
Accurate detection of altered anaplastic lymphoma kinase (ALK) expression is critical for the selection of lung cancer patients eligible for ALK-targeted therapies. To overcome intrinsic limitations and discrepancies of currently available companion diagnostics for ALK, we developed a simple, affordable and objective PCR-based predictive model for the quantitative measurement of any ALK fusion as well as wild-type ALK upregulation. This method, optimized for low-quantity/low-quality RNA from FFPE samples, combines cDNA pre-amplification with ad hoc generated calibration curves. All the models we derived yielded concordant predictions when applied to a cohort of 51 lung tumors, and correctly identified all 17 ALK FISH-positive and 33 of the 34 ALK FISH-negative samples. The one discrepant case was confirmed as positive by IHC, thus raising the accuracy of our test to 100%. Importantly, our method was accurate when using low amounts of input RNA (10 ng), also in FFPE samples with limited tumor cellularity (5–10%) and in FFPE cytology specimens. Thus, our test is an easily implementable diagnostic tool for the rapid, efficacious and cost-effective screening of ALK status in patients with lung cancer. PMID:27206799
Karalis, Konstantinos T.; Dellis, Dimitrios; Antipas, Georgios S. E.; Xenidis, Anthimos
2016-01-01
The thermodynamic, structural and transport properties (density, melting point, heat capacity, thermal expansion coefficient, viscosity and electrical conductivity) of a ferro-aluminosilicate slag have been studied in the solid and liquid state (1273–2273 K) using molecular dynamics. The simulations were based on a Buckingham-type potential, which was extended here to account for the presence of Cr and Cu. The potential was optimized by fitting pair distribution function partials to values determined by Reverse Monte Carlo modelling of X-ray and neutron diffraction experiments. The resulting short-range order features and ring statistics were in tight agreement with experimental data, providing a consistent basis for the accurate prediction of transport properties. Accordingly, calculations yielded reasonable values both for the average heat capacity, equal to 1668.58 J/(kg·K), and for the viscosity, in the range of 4.09–87.64 cP. The potential was consistent in predicting accurate values for mass density (i.e. 2961.50 kg/m3 vs. an experimental value of 2940 kg/m3) and for electrical conductivity (5.3–233 S/m within a temperature range of 1273.15–2273.15 K). PMID:27455915
Development of an accident duration prediction model on the Korean Freeway Systems.
Chung, Younshik
2010-01-01
Since duration prediction is one of the most important steps in an accident management process, several approaches have been developed for modeling accident duration. This paper presents a model for accident duration prediction based on a large, accurately recorded accident dataset from the Korean Freeway Systems. To develop the duration prediction model, this study utilizes the log-logistic accelerated failure time (AFT) metric model and a 2-year accident duration dataset from 2006 to 2007. Specifically, the 2006 dataset was utilized to develop the prediction model, and the 2007 dataset was then employed to test the temporal transferability of the 2006 model. Although the duration prediction model has limitations, such as large prediction errors due to individual differences among accident treatment teams in clearing similar accidents, the results from the 2006 model yielded reasonable predictions on the mean absolute percentage error (MAPE) scale. Additionally, the results of the statistical test for temporal transferability indicated that the estimated parameters in the duration prediction model are stable over time. This temporal stability suggests that the model may serve as a basis for making rational diversion and dispatching decisions in the event of an accident. Ultimately, such information will help mitigate traffic congestion due to accidents.
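The MAPE scale used to judge the duration model above can be stated compactly. A small sketch with illustrative durations, not data from the Korean Freeway Systems:

```python
# Mean absolute percentage error (MAPE): average relative deviation of
# predicted from observed values, expressed in percent. Durations below
# are invented for illustration.

def mape(observed, predicted):
    """MAPE in percent; observed values must be nonzero."""
    return 100.0 * sum(
        abs(o - p) / o for o, p in zip(observed, predicted)
    ) / len(observed)

obs = [60.0, 45.0, 90.0]   # observed clearance times, minutes
pred = [54.0, 50.0, 99.0]  # model predictions, minutes
error = mape(obs, pred)
print(round(error, 2))
```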
Multimodel predictive system for carbon dioxide solubility in saline formation waters.
Wang, Zan; Small, Mitchell J; Karamalidis, Athanasios K
2013-02-05
The prediction of carbon dioxide solubility in brine at conditions relevant to carbon sequestration (i.e., high temperature, pressure, and salt concentration (T-P-X)) is crucial when this technology is applied. Eleven mathematical models for predicting CO2 solubility in brine are compared and considered for inclusion in a multimodel predictive system. Model goodness of fit is evaluated over the temperature range 304-433 K, pressure range 74-500 bar, and salt concentration range 0-7 m (NaCl equivalent), using 173 published CO2 solubility measurements, particularly selected for those conditions. The performance of each model is assessed using various statistical methods, including the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). Different models emerge as best fits for different subranges of the input conditions. A classification tree is generated using machine learning methods to predict the best-performing model under different T-P-X subranges, allowing development of a multimodel predictive system (MMoPS) that selects and applies the model expected to yield the most accurate CO2 solubility prediction. Statistical analysis of the MMoPS predictions, including a stratified 5-fold cross validation, shows that MMoPS outperforms each individual model and increases the overall accuracy of CO2 solubility prediction across the range of T-P-X conditions likely to be encountered in carbon sequestration applications.
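The model-selection idea behind MMoPS, a decision rule over (T, P, X) that routes each condition to the locally best solubility model, can be sketched with a hand-rolled tree. The two candidate "models" and the salinity threshold below are placeholders, not the fitted correlations compared in the paper:

```python
# Sketch of a multimodel predictive system: a trivial decision rule
# (standing in for the learned classification tree) picks which
# candidate solubility model to apply for a given (T, P, X) condition.
# Both model formulas and the 2.0 m threshold are invented.

def model_low_salinity(T, P, X):
    # Illustrative linear correlation, mol/kg.
    return 1.5 - 0.002 * T + 0.001 * P

def model_high_salinity(T, P, X):
    # Same base correlation with an invented salting-out correction.
    return (1.5 - 0.002 * T + 0.001 * P) * (1.0 - 0.08 * X)

def select_model(T, P, X):
    """Decision rule standing in for the learned classification tree."""
    return model_high_salinity if X > 2.0 else model_low_salinity

def predict_solubility(T, P, X):
    return select_model(T, P, X)(T, P, X)

print(round(predict_solubility(350.0, 200.0, 4.0), 4))  # high-salinity branch
```

The real system replaces `select_model` with a tree trained on per-subrange goodness-of-fit results.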
Walter, Jonathan P; Pandy, Marcus G
2017-10-01
The aim of this study was to perform multi-body, muscle-driven, forward-dynamics simulations of human gait using a 6-degree-of-freedom (6-DOF) model of the knee in tandem with a surrogate model of articular contact and force control. A forward-dynamics simulation incorporating position, velocity and contact force-feedback control (FFC) was used to track full-body motion capture data recorded for multiple trials of level walking and stair descent performed by two individuals with instrumented knee implants. Tibiofemoral contact force errors for FFC were compared against those obtained from a standard computed muscle control algorithm (CMC) with a 6-DOF knee contact model (CMC6); CMC with a 1-DOF translating hinge-knee model (CMC1); and static optimization with a 1-DOF translating hinge-knee model (SO). Tibiofemoral joint loads predicted by FFC and CMC6 were comparable for level walking; however, FFC produced more accurate results for stair descent. SO yielded reasonable predictions of joint contact loading for level walking, but significant differences between model and experiment were observed for stair descent. CMC1 produced the least accurate predictions of tibiofemoral contact loads for both tasks. Our findings suggest that reliable estimates of knee-joint loading may be obtained by incorporating position, velocity and force-feedback control with a multi-DOF model of joint contact in a forward-dynamics simulation of gait. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dierickx, Marion I. P.; Loeb, Abraham, E-mail: mdierickx@cfa.harvard.edu, E-mail: aloeb@cfa.harvard.edu
The extensive span of the Sagittarius (Sgr) stream makes it a promising tool for studying the gravitational potential of the Milky Way (MW). Characterizing its stellar kinematics can constrain halo properties and provide a benchmark for the paradigm of galaxy formation from cold dark matter. Accurate models of the disruption dynamics of the Sgr progenitor are necessary to employ this tool. Using a combination of analytic modeling and N-body simulations, we build a new model of the Sgr orbit and resulting stellar stream. In contrast to previous models, we simulate the full infall trajectory of the Sgr progenitor from the time it first crossed the MW virial radius 8 Gyr ago. An exploration of the parameter space of initial phase-space conditions yields tight constraints on the angular momentum of the Sgr progenitor. Our best-fit model is the first to accurately reproduce existing data on the 3D positions and radial velocities of the debris detected 100 kpc away in the MW halo. In addition to replicating the mapped stream, the simulation also predicts the existence of several arms of the Sgr stream extending to hundreds of kiloparsecs. The two most distant stars known in the MW halo coincide with the predicted structure. Additional stars in the newly predicted arms can be found with future data from the Large Synoptic Survey Telescope. Detecting a statistical sample of stars in the most distant Sgr arms would provide an opportunity to constrain the MW potential out to unprecedented Galactocentric radii.
NASA Astrophysics Data System (ADS)
Tsao, Chao-hsi; Freniere, Edward R.; Smith, Linda
2009-02-01
The use of white LEDs for solid-state lighting to address applications in the automotive, architectural and general illumination markets is just emerging. LEDs promise greater energy efficiency and lower maintenance costs. However, there is a significant amount of design and cost optimization to be done while companies continue to improve semiconductor manufacturing processes and begin to apply more efficient and better color rendering luminescent materials such as phosphor and quantum dot nanomaterials. In the last decade, accurate and predictive opto-mechanical software modeling has enabled adherence to performance, consistency, cost, and aesthetic criteria without the cost and time associated with iterative hardware prototyping. More sophisticated models that include simulation of optical phenomenon, such as luminescence, promise to yield designs that are more predictive - giving design engineers and materials scientists more control over the design process to quickly reach optimum performance, manufacturability, and cost criteria. A design case study is presented where first, a phosphor formulation and excitation source are optimized for a white light. The phosphor formulation, the excitation source and other LED components are optically and mechanically modeled and ray traced. Finally, its performance is analyzed. A blue LED source is characterized by its relative spectral power distribution and angular intensity distribution. YAG:Ce phosphor is characterized by relative absorption, excitation and emission spectra, quantum efficiency and bulk absorption coefficient. Bulk scatter properties are characterized by wavelength dependent scatter coefficients, anisotropy and bulk absorption coefficient.
A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina
Maturana, Matias I.; Apollo, Nicholas V.; Hadjinicolaou, Alex E.; Garrett, David J.; Cloherty, Shaun L.; Kameneva, Tatiana; Grayden, David B.; Ibbotson, Michael R.; Meffin, Hamish
2016-01-01
Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron’s electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy. PMID:27035143
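The linear-nonlinear pipeline described above, projection of the stimulus onto an electrical receptive field followed by a static nonlinearity, can be sketched in a few lines. The filter weights, gain and threshold below are invented for illustration and are not the fitted parameters of the study:

```python
# Sketch of a linear-nonlinear response model: project the per-electrode
# stimulus amplitudes onto a fixed "electrical receptive field" (ERF)
# filter, then pass the linear drive through a sigmoid to obtain a
# spiking probability. All parameter values are illustrative.
import math

ERF = [0.8, 0.5, 0.1, 0.0]   # hypothetical sensitivity to each electrode

def spike_probability(stimulus, gain=4.0, threshold=1.0):
    drive = sum(w * s for w, s in zip(ERF, stimulus))     # linear stage
    return 1.0 / (1.0 + math.exp(-gain * (drive - threshold)))  # nonlinearity

# Stimulation concentrated on the most sensitive electrodes drives the
# cell harder than the same total amplitude spread uniformly:
weak = spike_probability([0.2, 0.2, 0.2, 0.2])
strong = spike_probability([1.0, 1.0, 0.0, 0.0])
print(weak < strong)
```

In the study the ERF is estimated by principal components analysis of recorded responses; here it is simply fixed by hand.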
Genomic selection across multiple breeding cycles in applied bread wheat breeding.
Michel, Sebastian; Ametz, Christian; Gungor, Huseyin; Epure, Doru; Grausgruber, Heinrich; Löschenberger, Franziska; Buerstmayr, Hermann
2016-06-01
We evaluated genomic selection across five cycles of applied bread wheat breeding. Bias of within-cycle cross-validation and methods for improving the prediction accuracy were assessed. The prospect of genomic selection has been frequently shown by cross-validation studies using the same genetic material across multiple environments, but studies investigating genomic selection across multiple breeding cycles in applied bread wheat breeding are lacking. We estimated the prediction accuracy of grain yield, protein content and protein yield of 659 inbred lines across five independent breeding cycles and assessed the bias of within-cycle cross-validation. We investigated the influence of outliers on the prediction accuracy and predicted protein yield by its component traits. A high average heritability was estimated for protein content, followed by grain yield and protein yield. The bias of the prediction accuracy in populations from individual cycles using fivefold cross-validation was accordingly substantial for protein yield (17-712 %) and less pronounced for protein content (8-86 %). Cross-validation using the cycles as folds aimed to avoid this bias and reached a maximum prediction accuracy of 0.51 for protein content, 0.38 for grain yield and 0.16 for protein yield. Dropping outlier cycles increased the prediction accuracy of grain yield to 0.41 as estimated by cross-validation, while dropping outlier environments did not have a significant effect on the prediction accuracy. Independent validation suggests, on the other hand, that careful consideration is necessary before an outlier correction is undertaken which removes lines from the training population. Predicting protein yield by multiplying genomic estimated breeding values of grain yield and protein content raised the prediction accuracy for this derived trait to 0.19.
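The "cross-validation using the cycles as folds" scheme above can be sketched generically: each breeding cycle is held out in turn and predicted from the remaining cycles, avoiding the optimistic bias of random k-fold splits within a cycle. The toy fit and scoring functions below are placeholders for a genomic prediction model, not the evaluation used in the study:

```python
# Leave-one-cycle-out cross-validation: hold out one cycle at a time,
# train on the rest, and score predictions on the held-out cycle.

def leave_one_cycle_out(records, fit, accuracy):
    """records: list of (cycle_id, phenotype); fit and accuracy are
    user-supplied callables (here, stand-ins for a genomic model)."""
    cycles = sorted({c for c, _ in records})
    scores = {}
    for held_out in cycles:
        train = [y for c, y in records if c != held_out]
        test = [y for c, y in records if c == held_out]
        model = fit(train)
        scores[held_out] = accuracy(model, test)
    return scores

# Toy example: the "model" is just the training mean, scored by negative
# absolute error against the held-out cycle mean.
records = [(1, 10.0), (1, 12.0), (2, 11.0), (2, 13.0)]
fit = lambda train: sum(train) / len(train)
accuracy = lambda m, test: -abs(m - sum(test) / len(test))
scores = leave_one_cycle_out(records, fit, accuracy)
print(scores)
```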
Uechi, Ken; Asakura, Keiko; Ri, Yui; Masayasu, Shizuko; Sasaki, Satoshi
2016-02-01
Several methods for estimating 24-h sodium excretion from spot urine samples have been reported, but accurate estimation at the individual level remains difficult. We aimed to clarify the most accurate method of estimating 24-h sodium excretion with different numbers of available spot urine samples. A total of 370 participants from throughout Japan independently collected multiple 24-h urine and spot urine samples. Participants were allocated randomly into a development and a validation dataset. Two estimation methods were established in the development dataset using two 24-h sodium excretion samples as reference: the 'simple mean method' estimated excretion by multiplying the sodium-creatinine ratio by predicted 24-h creatinine excretion, whereas the 'regression method' employed linear regression analysis. The accuracy of the two methods was examined by comparing the estimated means and concordance correlation coefficients (CCC) in the validation dataset. Mean sodium excretion by the simple mean method with three spot urine samples was closest to that from 24-h collection (difference: -1.62 mmol/day). CCC with the simple mean method increased with the number of spot urine samples: 0.20, 0.31, and 0.42 using one, two, and three samples, respectively. With three spot urine samples, this method yielded a higher CCC than the regression method (0.40). When only one spot urine sample was available for each study participant, CCC was higher with the regression method (0.36). The simple mean method with three spot urine samples yielded the most accurate estimates of sodium excretion; when only one spot urine sample was available, the regression method was preferable.
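The 'simple mean method' described above reduces to a short calculation. The concentrations and the predicted 24-h creatinine value below are illustrative; in practice the creatinine prediction comes from a separate anthropometric equation:

```python
# Sketch of the 'simple mean method': average the sodium-creatinine
# ratio across the available spot samples, then scale by a predicted
# 24-h creatinine excretion. All values are invented for illustration.

def estimate_24h_sodium(spot_na, spot_cr, predicted_cr_24h):
    """spot_na, spot_cr: paired spot-urine sodium and creatinine
    concentrations; predicted_cr_24h: predicted 24-h creatinine
    excretion in units consistent with spot_cr."""
    ratios = [na / cr for na, cr in zip(spot_na, spot_cr)]
    mean_ratio = sum(ratios) / len(ratios)
    return mean_ratio * predicted_cr_24h

# Three spot samples, as the study recommends when they are available:
est = estimate_24h_sodium([140.0, 120.0, 160.0], [1.4, 1.0, 1.6], 1.2)
print(round(est, 1))
```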
Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.
2002-01-01
Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown, and conservatively accurate solutions may be computed. Computable error estimates can offer the possibility to minimize computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user-specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high-lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.
Shen, Zhitao; Ma, Haitao; Zhang, Chunfang; Fu, Mingkai; Wu, Yanan; Bian, Wensheng; Cao, Jianwei
2017-01-01
Encouraged by recent advances in revealing significant effects of van der Waals wells on reaction dynamics, many people assume that van der Waals wells are inevitable in chemical reactions. Here we find that the weak long-range forces cause van der Waals saddles in the prototypical C(1D)+D2 complex-forming reaction that have very different dynamical effects from van der Waals wells at low collision energies. Accurate quantum dynamics calculations on our highly accurate ab initio potential energy surfaces with van der Waals saddles yield cross-sections in close agreement with crossed-beam experiments, whereas the same calculations on an earlier surface with van der Waals wells produce much smaller cross-sections at low energies. Further trajectory calculations reveal that the van der Waals saddle leads to a torsion then sideways insertion reaction mechanism, whereas the well suppresses reactivity. Quantum diffraction oscillations and sharp resonances are also predicted based on our ground- and excited-state potential energy surfaces. PMID:28094253
Weiss, M; Stedtler, C; Roberts, M S
1997-09-01
The dispersion model with mixed boundary conditions uses a single parameter, the dispersion number, to describe the hepatic elimination of xenobiotics and endogenous substances. An implicit a priori assumption of the model is that the transit time density of intravascular indicators is approximated by an inverse Gaussian distribution. This approximation is limited in that the model poorly describes the tail part of the hepatic outflow curves of vascular indicators. A sum of two inverse Gaussian functions is proposed as an alternative, more flexible empirical model for transit time densities of vascular references. This model suggests that a more accurate description of the tail portion of vascular reference curves yields an elimination rate constant (or intrinsic clearance) that is 40% less than predicted by the dispersion model with mixed boundary conditions. The results emphasize the need to describe outflow curves accurately when using them as a basis for determining pharmacokinetic parameters with hepatic elimination models.
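The proposed density, a weighted sum of two inverse Gaussian functions, is easy to write down. The mixture weight and the (mu, lambda) parameters below are illustrative, chosen only so the second component supplies a slow tail:

```python
# Sketch of a two-component inverse Gaussian transit-time density:
# f(t) = p*IG(t; mu1, lam1) + (1-p)*IG(t; mu2, lam2). The second,
# broader component captures the tail that a single IG underfits.
# Parameter values are invented for illustration.
import math

def inv_gauss_pdf(t, mu, lam):
    """Inverse Gaussian density with mean mu and shape lam, t > 0."""
    return math.sqrt(lam / (2.0 * math.pi * t ** 3)) * \
        math.exp(-lam * (t - mu) ** 2 / (2.0 * mu ** 2 * t))

def transit_time_density(t, p=0.8, mu1=20.0, lam1=60.0, mu2=80.0, lam2=40.0):
    """Two-component inverse Gaussian mixture (0 < p < 1)."""
    return p * inv_gauss_pdf(t, mu1, lam1) + (1.0 - p) * inv_gauss_pdf(t, mu2, lam2)

# Coarse Riemann-sum check that the mixture integrates to ~1:
dt = 0.05
total = sum(transit_time_density(i * dt) * dt for i in range(1, 40000))
print(round(total, 2))
```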
Physical processes in directed ion beam sputtering. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Robinson, R. S.
1979-01-01
The general operation of a discharge chamber for the production of ions is described. A model is presented for the magnetic containment of both primary and secondary or Maxwellian electrons in the discharge plasma. Cross sections were calculated for energy and momentum transfer in binary collisions between like pairs of Ar, Kr, and Xe atoms in the energy range from about 1 eV to 1000 eV. These calculations were made from available pair interaction potentials using a classical model. Experimental data from the literature were fit to a theoretical expression for the Ar resonance charge exchange cross section over the same energy range. A model was developed that describes the processes of conical texturing of a surface due to simultaneous directed ion beam etching and sputter deposition of an impurity material. This model accurately predicts both a minimum temperature for texturing to take place and the variation of cone density with temperature. It also provides the correct order of magnitude of cone separation. It was predicted from the model, and subsequently verified experimentally, that a high sputter yield material could serve as a seed for coning of a lower sputter yield substrate. Seeding geometries and seed deposition rates were studied to obtain an important input to the theoretical texturing model.
Critical role of morphology on the dielectric constant of semicrystalline polyolefins
DOE Office of Scientific and Technical Information (OSTI.GOV)
Misra, Mayank; Kumar, Sanat K., E-mail: sk2794@columbia.edu; Mannodi-Kanakkithodi, Arun
2016-06-21
A particularly attractive method to predict the dielectric properties of materials is density functional theory (DFT). While this method is very popular, its large computational requirements restrict practical treatments to unit cells with just a small number of atoms in an ordered array, i.e., in a crystalline morphology. By comparing DFT and Molecular Dynamics (MD) simulations on the same ordered arrays of functional polyolefins, we confirm that both methodologies yield identical estimates for the dipole moments and hence the ionic component of the dielectric storage modulus. Additionally, MD simulations of more realistic semi-crystalline morphologies yield estimates for this polar contribution that are in good agreement with the limited experiments in this field. However, these predictions are up to 10 times larger than those for purely crystalline simulations. Here, we show that the constraints provided by the surrounding chains significantly impede dipolar relaxations in the crystalline regions, whereas amorphous chains must sample all configurations to attain their fully isotropic spatial distributions. These results, which suggest that the amorphous phase is the dominant player in this context, argue strongly that the proper polymer morphology needs to be modeled to ensure accurate estimates of the ionic component of the dielectric constant.
Meseret, S.; Tamir, B.; Gebreyohannes, G.; Lidauer, M.; Negussie, E.
2015-01-01
The development of effective genetic evaluations and selection of sires requires accurate estimates of genetic parameters for all economically important traits in the breeding goal. The main objective of this study was to assess the relative performance of the traditional lactation average model (LAM) against the random regression test-day model (RRM) in the estimation of genetic parameters and prediction of breeding values for Holstein Friesian herds in Ethiopia. The data used consisted of 6,500 test-day (TD) records from 800 first-lactation Holstein Friesian cows that calved between 1997 and 2013. Covariance components were estimated using the average information restricted maximum likelihood method under a single-trait animal model. The estimate of heritability for first-lactation milk yield was 0.30 from LAM, whilst estimates from RRM ranged from 0.17 to 0.29 across the different stages of lactation. Genetic correlations between different TDs in first-lactation Holstein Friesian ranged from 0.37 to 0.99. The observed genetic correlation was less than unity between milk yields at different TDs, which indicated that the assumption of LAM may not be optimal for accurate evaluation of the genetic merit of animals. A close look at estimated breeding values from both models showed that RRM had a higher standard deviation than LAM, indicating that the TD model makes efficient use of TD information. Correlations of breeding values between models ranged from 0.90 to 0.96 for different groups of sires and cows, and marked re-rankings were observed among top sires and cows in moving from the traditional LAM to RRM evaluations. PMID:26194217
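As a reminder of the quantity being estimated, narrow-sense heritability is the ratio of additive genetic variance to total phenotypic variance. A minimal sketch (the variance components below are hypothetical, not the study's REML output):

```python
def heritability(var_additive, var_residual, var_permanent_env=0.0):
    """Narrow-sense heritability: additive genetic variance divided by total
    phenotypic variance (additive + permanent environment + residual)."""
    total = var_additive + var_permanent_env + var_residual
    return var_additive / total

# Illustrative: additive variance 300, residual 700 gives h^2 = 0.30,
# matching the magnitude of the LAM estimate reported above.
h2 = heritability(300.0, 700.0)
```

Under RRM the variance components, and hence this ratio, vary across the lactation trajectory, which is why the abstract reports a range (0.17 to 0.29) rather than a single value.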
Nayana, M Ravi Shashi; Sekhar, Y Nataraja; Nandyala, Haritha; Muttineni, Ravikumar; Bairy, Santosh Kumar; Singh, Kriti; Mahmood, S K
2008-10-01
In the present study, a series of 179 quinoline and quinazoline heterocyclic analogues exhibiting inhibitory activity against gastric (H+/K+)-ATPase were investigated using the comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) methods. Both models exhibited good correlation between the calculated 3D-QSAR fields and the observed biological activity for the respective training set compounds. The most optimal CoMFA and CoMSIA models yielded significant leave-one-out cross-validation coefficients, q(2), of 0.777 and 0.744, and conventional (non-cross-validated) correlation coefficients, r(2), of 0.927 and 0.914, respectively. The predictive ability of the generated models was tested on a set of 52 compounds spanning a broad range of activity. CoMFA and CoMSIA yielded predicted activities for the test set compounds with r(pred)(2) of 0.893 and 0.917, respectively. These validation tests not only revealed the robustness of the models but also demonstrated that for our models r(pred)(2) based on the mean activity of test set compounds can accurately estimate external predictivity. The factors affecting activity were analyzed carefully according to standard coefficient contour maps of steric, electrostatic, hydrophobic, acceptor, and donor fields derived from CoMFA and CoMSIA. These contour plots identified several key features which explain the wide range of activities. The results obtained from the models offer important structural insight into designing novel peptic-ulcer inhibitors prior to their synthesis.
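The leave-one-out q(2) statistic reported above is conventionally computed as 1 − PRESS/SS, where PRESS sums the squared errors of the leave-one-out predictions and SS is the total sum of squares about the mean observed activity. A minimal sketch (the data values are hypothetical):

```python
def q_squared(y_obs, y_pred_loo):
    """Leave-one-out cross-validated q^2 = 1 - PRESS / SS."""
    mean_y = sum(y_obs) / len(y_obs)
    press = sum((yo - yp) ** 2 for yo, yp in zip(y_obs, y_pred_loo))
    ss = sum((yo - mean_y) ** 2 for yo in y_obs)
    return 1.0 - press / ss
```

A model that predicts no better than the training-set mean scores q^2 = 0, and negative values are possible; a q^2 above roughly 0.5, as in the models above, is usually taken as evidence of internal predictivity.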
Validation of the Unthinned Loblolly Pine Plantation Yield Model-USLYCOWG
V. Clark Baldwin; D.P. Feduccia
1982-01-01
Yield and stand structure predictions from an unthinned loblolly pine plantation yield prediction system (USLYCOWG computer program) were compared with observations from 80 unthinned loblolly pine plots. Overall, the predicted estimates were reasonable when compared to observed values, but predictions based on input data at or near the system's limits may be in...
NASA Astrophysics Data System (ADS)
Zhou, Ya-Tong; Fan, Yu; Chen, Zi-Yi; Sun, Jian-Cheng
2017-05-01
The contribution of this work is twofold: (1) a multimodality prediction method of chaotic time series with the Gaussian process mixture (GPM) model is proposed, which employs a divide-and-conquer strategy. It automatically divides the chaotic time series into multiple modalities with different extrinsic patterns and intrinsic characteristics, and thus can more precisely fit the chaotic time series. (2) An effective sparse hard-cut expectation maximization (SHC-EM) learning algorithm for the GPM model is proposed to improve the prediction performance. SHC-EM replaces a large learning sample set with fewer pseudo inputs, accelerating model learning based on these pseudo inputs. Experiments on Lorenz and Chua time series demonstrate that the proposed method yields not only accurate multimodality prediction, but also a prediction confidence interval. SHC-EM outperforms traditional variational learning in terms of both prediction accuracy and speed. In addition, SHC-EM is more robust and less susceptible to noise than variational learning. Supported by the National Natural Science Foundation of China under Grant No 60972106, the China Postdoctoral Science Foundation under Grant No 2014M561053, the Humanity and Social Science Foundation of Ministry of Education of China under Grant No 15YJA630108, and the Hebei Province Natural Science Foundation under Grant No E2016202341.
Nateghi, Roshanak; Guikema, Seth D; Quiring, Steven M
2011-12-01
This article compares statistical methods for modeling power outage durations during hurricanes and examines the predictive accuracy of these methods. Being able to make accurate predictions of power outage durations is valuable because the information can be used by utility companies to plan their restoration efforts more efficiently. This information can also help inform customers and public agencies of the expected outage times, enabling better collective response planning, and coordination of restoration efforts for other critical infrastructures that depend on electricity. In the long run, outage duration estimates for future storm scenarios may help utilities and public agencies better allocate risk management resources to balance the disruption from hurricanes with the cost of hardening power systems. We compare the out-of-sample predictive accuracy of five distinct statistical models for estimating power outage duration times caused by Hurricane Ivan in 2004. The methods compared include both regression models (accelerated failure time (AFT) and Cox proportional hazard models (Cox PH)) and data mining techniques (regression trees, Bayesian additive regression trees (BART), and multivariate adaptive regression splines (MARS)). We then validate our models against two other hurricanes. Our results indicate that BART yields the best prediction accuracy and that it is possible to predict outage durations with reasonable accuracy. © 2011 Society for Risk Analysis.
NASA Technical Reports Server (NTRS)
Kostoff, J. L.; Ward, D. T.; Cuevas, O. O.; Beckman, R. M.
1995-01-01
Tracking and Data Relay Satellite (TDRS) orbit determination and prediction are supported by the Flight Dynamics Facility (FDF) of the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD). TDRS System (TDRSS)-user satellites require predicted TDRS ephemerides that are up to 10 weeks in length. Previously, long-term ephemerides generated by the FDF included predictions from the White Sands Complex (WSC), which plans and executes TDRS maneuvers. TDRSs typically have monthly stationkeeping maneuvers, and predicted postmaneuver state vectors are received from WSC up to a month in advance. This paper presents the results of an analysis performed in the FDF to investigate more accurate and economical long-term ephemerides for the TDRSs. As a result of this analysis, two new methods for generating long-term TDRS ephemeris predictions have been implemented by the FDF. The Center-of-Box (COB) method models a TDRS as fixed at the center of its stationkeeping box. Using this method, long-term ephemeris updates are made semiannually instead of weekly. The impulse method is used to model more maneuvers. The impulse method yields better short-term accuracy than the COB method, especially for larger stationkeeping boxes. The accuracy of the impulse method depends primarily on the accuracy of maneuver date forecasting.
NASA Astrophysics Data System (ADS)
Dickey, Dwayne J.; Moore, Ronald B.; Tulip, John
2001-01-01
For photodynamic therapy of solid tumors, such as prostatic carcinoma, to be achieved, an accurate model to predict tissue parameters and light dose must be found. Presently, most analytical light dosimetry models are fluence based and are not clinically viable for tissue characterization. Other methods of predicting optical properties, such as Monte Carlo, are accurate but far too time consuming for clinical application. However, radiance predicted by the P3-Approximation, an analytical solution to the transport equation, may be a viable and accurate alternative. The P3-Approximation accurately predicts optical parameters in intralipid/methylene blue based phantoms in a spherical geometry. The optical parameters furnished by the radiance, when introduced into fluence predicted by both the P3-Approximation and Grosjean Theory, correlate well with experimental data. The P3-Approximation also predicts the optical properties of prostate tissue, agreeing with documented optical parameters. The P3-Approximation could be the clinical tool necessary to facilitate PDT of solid tumors because of the limited number of invasive measurements required and the speed with which accurate calculations can be performed.
Simulated Impacts of Climate Change on Water Use and Yield of Irrigated Sugarcane in South Africa
NASA Technical Reports Server (NTRS)
Jones, M.R; Singels, A.; Ruane, A. C.
2015-01-01
Reliable predictions of climate change impacts on water use, irrigation requirements and yields of irrigated sugarcane in South Africa (a water-scarce country) are necessary to plan adaptation strategies. Although previous work has been done in this regard, methodologies and results vary considerably. The objectives were (1) to estimate likely impacts of climate change on sugarcane yields, water use and irrigation demand at three irrigated sugarcane production sites in South Africa (Malelane, Pongola and La Mercy) for current (1980-2010) and future (2070-2100) climate scenarios, using an approach based on the Agricultural Model Inter-comparison and Improvement Project (AgMIP) protocols; and (2) to assess the suitability of this methodology for investigating climate change impacts on sugarcane production. Future climate datasets were generated using the Delta downscaling method and three Global Circulation Models (GCMs) assuming atmospheric CO2 concentration [CO2] of 734 ppm (A2 emissions scenario). Yield and water use were simulated using the DSSAT-Canegro v4.5 model. Irrigated cane yields are expected to increase at all three sites (between 11 and 14%), primarily due to increased interception of radiation as a result of accelerated canopy development. Evapotranspiration and irrigation requirements increased by 11% due to increased canopy cover and evaporative demand. Sucrose yields are expected to decline because of increased consumption of photo-assimilate for structural growth and maintenance respiration. Crop responses in canopy development and yield formation differed markedly between the crop cycles investigated. Possible agronomic implications of these results include reduced weed control costs due to shortened periods of partial canopy, a need for improved efficiency of irrigation to counter increased demands, and adjustments to ripening and harvest practices to counter decreased cane quality and optimize productivity.
Although the Delta climate data downscaling method is considered robust, accurate and easily-understood, it does not change the future number of rain-days per month. The impacts of this and other climate data simplifications ought to be explored in future work. Shortcomings of the DSSAT-Canegro model include the simulated responses of phenological development, photosynthesis and respiration processes to high temperatures, and the disconnect between simulated biomass accumulation and expansive growth. Proposed methodology refinements should improve the reliability of predicted climate change impacts on sugarcane yield.
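The Delta method referenced above adjusts an observed baseline series by the change the GCM simulates between the baseline and future periods: additively for temperature and, in the common variant, multiplicatively for precipitation. Because zero-rain days stay zero under a multiplicative ratio, the number of rain-days per month is unchanged, which is exactly the limitation the abstract notes. A sketch (function names and values are illustrative, not from the study):

```python
def delta_downscale_temperature(obs, gcm_base_mean, gcm_future_mean):
    """Additive delta: shift each observed value by the GCM-projected change."""
    delta = gcm_future_mean - gcm_base_mean
    return [t + delta for t in obs]

def delta_downscale_precipitation(obs, gcm_base_mean, gcm_future_mean):
    """Multiplicative delta: scale each observed value by the GCM ratio.
    Zero-rain days stay zero, so the rain-day count is preserved."""
    ratio = gcm_future_mean / gcm_base_mean
    return [p * ratio for p in obs]
```

The simplicity is the method's appeal: only monthly GCM means are needed, and the observed daily variability is retained unchanged.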
Portevin-Le Chatelier effect under cyclic loading: experimental and numerical investigations
NASA Astrophysics Data System (ADS)
Mazière, M.; Pujol d'Andrebo, Q.
2015-10-01
The Portevin-Le Chatelier (PLC) effect is generally evidenced by the appearance of serrated yielding under monotonic tensile loading conditions. It appears at room temperature in some aluminium alloys, around ? in some steels, and in many other metallic materials. This effect is associated with the propagation of bands of plastic deformation in tensile specimens and can in some cases lead to unexpected failures. The PLC effect has been widely simulated under monotonic conditions using finite elements and an appropriate mechanical model able to reproduce serrations and strain localization. The occurrence of serrations can be predicted using an analytical stability analysis. Recently, this serrated yielding has also been observed in specimens made of a cobalt-based superalloy under cyclic loading, after a large number of cycles. The mechanical model was identified in this case to accurately reproduce the critical number of cycles at which serrations appear. The associated appearance of localized bands of deformation in the specimens and their influence on specimen failure has also been investigated using finite element simulations.
NASA Astrophysics Data System (ADS)
Banka, John Czeslaw
The world strives for more clean and renewable energy, but the amount of dispatchable energy in river networks is not accurately known and is difficult to assess. When wind is integrated with water, the dispatchable yield can be greatly increased, but the uncertainty of the wind further degrades predictability. This thesis demonstrates how simulating the flows in a river network, integrated with wind, over a long time domain yields a solution. Time-shifting the freshet through pumped storage would ameliorate the seasonal summer drought, and the risk of ice jams and uncontrolled flooding is reduced. An artificial market eliminates the issue of surplus energy from wind at night. Furthermore, this thesis shows how the necessary infrastructure can be built to accomplish the goals of the intended research. While specific to Northern Ontario and sensitive to the lives of the Native peoples living there, it indicates where the research might be applicable elsewhere in the world.
Modeling residence-time distribution in horizontal screw hydrolysis reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sievers, David A.; Stickel, Jonathan J.
2017-10-12
The dilute-acid thermochemical hydrolysis step used in the production of liquid fuels from lignocellulosic biomass requires precise residence-time control to achieve high monomeric sugar yields. Difficulty has been encountered reproducing residence times and yields when small batch reaction conditions are scaled up to larger pilot-scale horizontal auger-tube type continuous reactors. A commonly used naive model estimated residence times of 6.2-16.7 min, but measured mean times were actually 1.4-2.2 times the estimates. Here, this study investigated how reactor residence-time distribution (RTD) is affected by reactor characteristics and operational conditions, and developed a method to accurately predict the RTD based on key parameters. Screw speed, reactor physical dimensions, throughput rate, and process material density were identified as major factors affecting both the mean and standard deviation of RTDs. The general shape of the RTDs was consistent with a constant value determined for skewness. The Peclet number quantified reactor plug-flow performance, which ranged between 20 and 357.
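The Peclet number used above to quantify plug-flow performance can be recovered from a measured pulse-tracer RTD: compute the mean and variance of the curve, normalize the variance by the squared mean, and invert the closed-vessel axial-dispersion relation. This is the standard textbook relation, not code from the study; the sampling assumptions below (a uniform time grid) are ours.

```python
import math

def rtd_moments(times, conc):
    """Mean and variance of a pulse-tracer RTD sampled on a uniform time grid."""
    area = sum(conc)
    mean = sum(t * c for t, c in zip(times, conc)) / area
    var = sum((t - mean) ** 2 * c for t, c in zip(times, conc)) / area
    return mean, var

def peclet_from_variance(sigma_theta_sq):
    """Invert the closed-vessel axial-dispersion relation
    sigma_theta^2 = 2/Pe - (2/Pe^2) * (1 - exp(-Pe)) by bisection."""
    def f(pe):
        return 2.0 / pe - 2.0 / pe ** 2 * (1.0 - math.exp(-pe)) - sigma_theta_sq
    lo, hi = 1e-3, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:  # variance at mid still too large, so Pe must be bigger
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For the reported Pe range of 20 to 357, the dimensionless variance runs from about 0.095 down to roughly 0.0056, i.e., increasingly plug-flow-like behavior.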
Towards psychologically adaptive brain-computer interfaces
NASA Astrophysics Data System (ADS)
Myrden, A.; Chau, T.
2016-12-01
Objective. Brain-computer interface (BCI) performance is sensitive to short-term changes in psychological states such as fatigue, frustration, and attention. This paper explores the design of a BCI that can adapt to these short-term changes. Approach. Eleven able-bodied individuals participated in a study during which they used a mental task-based EEG-BCI to play a simple maze navigation game while self-reporting their perceived levels of fatigue, frustration, and attention. In an offline analysis, a regression algorithm was trained to predict changes in these states, yielding Pearson correlation coefficients in excess of 0.45 between the self-reported and predicted states. Two means of fusing the resultant mental state predictions with mental task classification were investigated. First, single-trial mental state predictions were used to predict correct classification by the BCI during each trial. Second, an adaptive BCI was designed that retrained a new classifier for each testing sample using only those training samples for which predicted mental state was similar to that predicted for the current testing sample. Main results. Mental state-based prediction of BCI reliability exceeded chance levels. The adaptive BCI exhibited significant, but practically modest, increases in classification accuracy for five of 11 participants and no significant difference for the remaining six despite a smaller average training set size. Significance. Collectively, these findings indicate that adaptation to psychological state may allow the design of more accurate BCIs.
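The second fusion scheme described above, retraining on state-similar trials only, can be sketched as follows. A toy nearest-centroid classifier stands in for the paper's actual classifier, and all names and the neighborhood size k are our assumptions:

```python
import numpy as np

def adaptive_classify(train_X, train_y, train_state, test_x, test_state, k=50):
    """Classify one test trial using only the k training trials whose predicted
    mental state is closest to the test trial's predicted state, then apply a
    nearest-centroid rule on the retained trials."""
    idx = np.argsort(np.abs(train_state - test_state))[:k]
    X, y = train_X[idx], train_y[idx]
    centroids = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
    return min(centroids, key=lambda c: np.linalg.norm(test_x - centroids[c]))
```

The design trade-off the abstract reports follows directly: filtering by state shrinks the effective training set, so accuracy gains from state matching must outweigh the loss from fewer training samples.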
Hansmann, Jan; Evers, Maximilian J; Bui, James T; Lokken, R Peter; Lipnik, Andrew J; Gaba, Ron C; Ray, Charles E
2017-09-01
To evaluate albumin-bilirubin (ALBI) and platelet-albumin-bilirubin (PALBI) grades in predicting overall survival in high-risk patients undergoing conventional transarterial chemoembolization for hepatocellular carcinoma (HCC). This single-center retrospective study included 180 high-risk patients (142 men, 59 y ± 9) between April 2007 and January 2015. Patients were considered high-risk based on laboratory abnormalities before the procedure (bilirubin > 2.0 mg/dL, albumin < 3.5 mg/dL, platelet count < 60,000/mL, creatinine > 1.2 mg/dL); presence of ascites, encephalopathy, portal vein thrombus, or transjugular intrahepatic portosystemic shunt; or Model for End-Stage Liver Disease score > 15. Serum albumin, bilirubin, and platelet values were used to determine ALBI and PALBI grades. Overall survival was stratified by ALBI and PALBI grades with substratification by Child-Pugh class (CPC) and Barcelona Clinic Liver Cancer (BCLC) stage using Kaplan-Meier analysis. C-index was used to determine discriminatory ability and survival prediction accuracy. Median survival for 79 ALBI grade 2 patients and 101 ALBI grade 3 patients was 20.3 and 10.7 months, respectively (P < .0001). Median survival for 30 PALBI grade 2 and 144 PALBI grade 3 patients was 20.3 and 12.9 months, respectively (P = .0667). Substratification yielded distinct ALBI grade survival curves for CPC B (P = .0022, C-index 0.892), BCLC A (P = .0308, C-index 0.887), and BCLC C (P = .0287, C-index 0.839). PALBI grade demonstrated distinct survival curves for BCLC A (P = .0229, C-index 0.869). CPC yielded distinct survival curves for the entire cohort (P = .0019) but not when substratified by BCLC stage (all P > .05). ALBI and PALBI grades are accurate survival metrics in high-risk patients undergoing conventional transarterial chemoembolization for HCC. Use of these scores allows for more refined survival stratification within CPC and BCLC stage. Copyright © 2017 SIR. Published by Elsevier Inc. All rights reserved.
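For reference, the ALBI grade is derived from a published linear predictor: 0.66 × log10(bilirubin in μmol/L) − 0.085 × albumin (g/L), with grade cutoffs at −2.60 and −1.39. Note the unit difference: this study reports bilirubin in mg/dL, which converts to μmol/L at roughly ×17.1. A sketch of the grading rule:

```python
import math

def albi_score(bilirubin_umol_l, albumin_g_l):
    """ALBI linear predictor (bilirubin in umol/L, albumin in g/L)."""
    return 0.66 * math.log10(bilirubin_umol_l) - 0.085 * albumin_g_l

def albi_grade(bilirubin_umol_l, albumin_g_l):
    """Grade 1: score <= -2.60; grade 2: -2.60 < score <= -1.39; grade 3: above."""
    score = albi_score(bilirubin_umol_l, albumin_g_l)
    if score <= -2.60:
        return 1
    if score <= -1.39:
        return 2
    return 3
```

The appeal over Child-Pugh class, as the abstract argues, is that the inputs are purely objective laboratory values with no subjective assessment of ascites or encephalopathy.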
Motor system contribution to action prediction: Temporal accuracy depends on motor experience.
Stapel, Janny C; Hunnius, Sabine; Meyer, Marlene; Bekkering, Harold
2016-03-01
Predicting others' actions is essential for well-coordinated social interactions. In two experiments including an infant population, this study addresses to what extent motor experience of an observer determines prediction accuracy for others' actions. Results show that infants who were proficient crawlers but inexperienced walkers predicted crawling more accurately than walking, whereas age groups mastering both skills (i.e. toddlers and adults) were equally accurate in predicting walking and crawling. Regardless of experience, human movements were predicted more accurately by all age groups than non-human movement control stimuli. This suggests that for predictions to be accurate, the observed act needs to be established in the motor repertoire of the observer. Through the acquisition of new motor skills, we also become better at predicting others' actions. The findings thus stress the relevance of motor experience for social-cognitive development. Copyright © 2015 Elsevier B.V. All rights reserved.
Caraviello, D Z; Weigel, K A; Gianola, D
2004-05-01
Predicted transmitting abilities (PTA) of US Jersey sires for daughter longevity were calculated using a Weibull proportional hazards sire model and compared with predictions from a conventional linear animal model. Culling data from 268,008 Jersey cows with first calving from 1981 to 2000 were used. The proportional hazards model included time-dependent effects of herd-year-season contemporary group and parity by stage of lactation interaction, as well as time-independent effects of sire and age at first calving. Sire variances and parameters of the Weibull distribution were estimated, providing heritability estimates of 4.7% on the log scale and 18.0% on the original scale. The PTA of each sire was expressed as the expected risk of culling relative to daughters of an average sire. Risk ratios (RR) ranged from 0.7 to 1.3, indicating that the risk of culling for daughters of the best sires was 30% lower than for daughters of average sires and nearly 50% lower than for daughters of the poorest sires. Sire PTA from the proportional hazards model were compared with PTA from a linear model similar to that used for routine national genetic evaluation of length of productive life (PL) using cross-validation in independent samples of herds. Models were compared using logistic regression of daughters' stayability to second, third, fourth, or fifth lactation on their sires' PTA values, with alternative approaches for weighting the contribution of each sire. Models were also compared using logistic regression of daughters' stayability to 36, 48, 60, 72, and 84 mo of life. The proportional hazards model generally yielded more accurate predictions according to these criteria, but differences in predictive ability between methods were smaller when using a Kullback-Leibler distance than with other approaches. Results of this study suggest that survival analysis methodology may provide more accurate predictions of genetic merit for longevity than conventional linear models.
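Under a proportional hazards model, a sire's PTA can be expressed as a risk ratio by exponentiating the sire effect relative to the average, which is how the 0.7 to 1.3 range above arises. A sketch (the sire effects below are hypothetical values on the log-hazard scale):

```python
import math

def risk_ratios(sire_effects):
    """Relative culling risk of each sire's daughters versus an average sire,
    assuming proportional hazards with sire effects on the log-hazard scale."""
    mean_effect = sum(sire_effects) / len(sire_effects)
    return [math.exp(s - mean_effect) for s in sire_effects]
```

Because the transformation is multiplicative, a sire at exp(-0.3) ≈ 0.74 roughly mirrors one at exp(+0.3) ≈ 1.35, matching the symmetric 0.7 to 1.3 spread reported.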
Third molar development: measurements versus scores as age predictor.
Thevissen, P W; Fieuws, S; Willems, G
2011-10-01
Human third molar development is widely used to predict the chronological age of subadult individuals with unknown or doubted age. For these predictions, classically, the radiologically observed third molar growth and maturation is registered using a staging and related scoring technique. Measures of the lengths and widths of the developing wisdom tooth and its adjacent second molar can be considered as an alternative registration. The aim of this study was to verify relations between mandibular third molar developmental stages or measurements of mandibular second and third molars and age. Age-related performance of stages and measurements was compared to assess whether measurements added information to age predictions from third molar formation stage. The sample comprised 340 orthopantomograms (170 females, 170 males) of individuals homogeneously distributed in age between 7 and 24 years. Mandibular right third and second molars were staged following Gleiser and Hunt, length and width measurements were registered, and various ratios of these measurements were calculated. Univariable regression models with age as response and third molar stage, measurements, and ratios of second and third molars as predictors were considered. Multivariable regression models assessed whether measurements or ratios added information to age prediction from third molar stage. Coefficients of determination (R(2)) and root mean squared errors (RMSE) obtained from all regression models were compared. The univariable regression model using stages as predictor yielded the most accurate age predictions (males: R(2) 0.85, RMSE between 0.85 and 1.22 year; females: R(2) 0.77, RMSE between 1.19 and 2.11 year) compared to all models including measurements and ratios. The multivariable regression models indicated that measurements and ratios added no clinically relevant information to the age prediction from third molar stage.
Ratios and measurements of second and third molars are less accurate age predictors than stages of developing third molars. Copyright © 2011 Elsevier Ltd. All rights reserved.
Legarra, A; Baloche, G; Barillet, F; Astruc, J M; Soulas, C; Aguerre, X; Arrese, F; Mintegi, L; Lasarte, M; Maeztu, F; Beltrán de Heredia, I; Ugarte, E
2014-05-01
Genotypes, phenotypes, and pedigrees of 6 breeds of dairy sheep (including subdivisions of Latxa, Manech, and Basco-Béarnaise) from the Western Pyrenees of Spain and France were used to estimate genetic relationships across breeds (together with genotypes from the Lacaune dairy sheep) and to verify by forward cross-validation single-breed or multiple-breed genetic evaluations. The number of rams genotyped fluctuated between 100 and 1,300 but generally represented the 10 last cohorts of progeny-tested rams within each breed. Genetic relationships were assessed by principal components analysis of the genomic relationship matrices and also by the conservation of linkage disequilibrium patterns at given physical distances in the genome. Genomic and pedigree-based evaluations used daughter yield performances of all rams, although some of them were not genotyped. A pseudo-single step method was used in this case for genomic predictions. Results showed a clear structure in blond and black breeds for Manech and Latxa, reflecting historical exchanges, and isolation of Basco-Béarnaise and Lacaune. Relatedness between any 2 breeds was, however, lower than expected. Single-breed genomic predictions had accuracies comparable with other breeds of dairy sheep or small breeds of dairy cattle. They were more accurate than pedigree predictions for 5 out of 6 breeds, with absolute increases in accuracy ranging from 0.05 to 0.30 points. They were significantly better, as assessed by bootstrapping of candidates, for 2 of the breeds. Predictions using multiple populations only marginally increased the accuracy for a couple of breeds. Pooling populations does not increase the accuracy of genomic evaluations in dairy sheep; however, single-breed genomic predictions are more accurate, even for small breeds, and make the consideration of genomic schemes in dairy sheep interesting. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
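The genomic relationship matrices analyzed above are conventionally built with VanRaden's method 1: center the 0/1/2 genotype matrix by twice the observed allele frequencies and scale by 2Σp(1−p). A sketch with a toy genotype matrix (the study's actual construction may differ in details such as frequency source or marker filtering):

```python
import numpy as np

def grm_vanraden(M):
    """VanRaden method-1 genomic relationship matrix from a genotype matrix M
    (individuals x markers, allele counts coded 0/1/2)."""
    p = M.mean(axis=0) / 2.0   # observed allele frequency per marker
    Z = M - 2.0 * p            # center each marker by twice its frequency
    denom = 2.0 * np.sum(p * (1.0 - p))
    return Z @ Z.T / denom
```

Principal components analysis of this matrix, as used in the study, then exposes between-breed structure: individuals from the same breed cluster because they share marker-allele frequencies.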
Hanson, Jack; Paliwal, Kuldip; Litfin, Thomas; Yang, Yuedong; Zhou, Yaoqi
2018-06-19
Accurate prediction of a protein contact map depends greatly on capturing as much contextual information as possible from surrounding residues for a target residue pair. Recently, ultra-deep residual convolutional networks were found to be state-of-the-art in the latest Critical Assessment of Structure Prediction techniques (CASP12, (Schaarschmidt et al., 2018)) for protein contact map prediction by attempting to provide a protein-wide context at each residue pair. Recurrent neural networks have seen great success in recent protein residue classification problems due to their ability to propagate information through long protein sequences, especially Long Short-Term Memory (LSTM) cells. Here we propose a novel protein contact map prediction method by stacking residual convolutional networks with two-dimensional residual bidirectional recurrent LSTM networks, and using both one-dimensional sequence-based and two-dimensional evolutionary coupling-based information. We show that the proposed method achieves a robust performance over validation and independent test sets, with the Area Under the receiver operating characteristic Curve (AUC) > 0.95 in all tests. When compared to several state-of-the-art methods for independent testing of 228 proteins, the method yields an AUC value of 0.958, whereas the next-best method obtains an AUC of 0.909. More importantly, the improvement is over contacts at all sequence-position separations. Specifically, increases in precision of 8.95%, 5.65%, and 2.84% were observed for the top L/10 predictions over the next-best method for short-, medium-, and long-range contacts, respectively. This confirms the usefulness of ResNets to congregate the short-range relations and 2D-BRLSTM to propagate the long-range dependencies throughout the entire protein contact map 'image'. SPOT-Contact server url: http://sparks-lab.org/jack/server/SPOT-Contact/. Supplementary data are available at Bioinformatics online.
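The top-L/10 precision metric cited above ranks all residue pairs at or beyond a minimum sequence separation by predicted score and reports the fraction of the top L/10 that are true contacts. A sketch (synthetic score and contact maps; the default separation of 24 is the commonly used long-range cutoff, not a value taken from the paper):

```python
import numpy as np

def top_l_over_k_precision(scores, contacts, L, k=10, min_sep=24):
    """Precision of the top L/k scored residue pairs whose sequence
    separation is at least min_sep (upper triangle only)."""
    pairs = [(scores[i, j], contacts[i, j])
             for i in range(L) for j in range(i + min_sep, L)]
    pairs.sort(key=lambda t: -t[0])
    top = pairs[:max(L // k, 1)]
    return sum(c for _, c in top) / len(top)
```

Evaluating the same ranking at short (6 to 11), medium (12 to 23), and long (24+) separations, as in the comparison above, only requires changing the separation window.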
Adnan, Adnan A.; Jibrin, Jibrin M.; Kamara, Alpha Y.; Abdulrahman, Bassam L.; Shaibu, Abdulwahab S.; Garba, Ismail I.
2017-01-01
Field trials were carried out in the Sudan Savanna of Nigeria to assess the usefulness of the CERES-Maize crop model as a decision support tool for optimizing maize production through manipulation of planting dates. The calibration experiments comprised 20 maize varieties planted during the dry and rainy seasons of 2014 and 2015 at Bayero University Kano and Audu Bako College of Agriculture Dambatta. The trials for model evaluation were conducted in 16 different farmer fields across the Sudan (Bunkure and Garun-Mallam) and Northern Guinea (Tudun-Wada and Lere) Savannas using two of the calibrated varieties under four different sowing dates. The model accurately predicted grain yield, harvest index, and biomass of both varieties with low RMSE values (below 5% of mean), high d-index (above 0.8), and high r-square (above 0.9) for the calibration trials. The time series data (tops weight, stem and leaf dry weights) were also predicted with high accuracy (% RMSEn above 70%, d-index above 0.88). Similar results were observed for the evaluation trials, where all variables were simulated with high accuracy. Estimation efficiency (EF) values above 0.8 were observed for all the evaluation parameters. Seasonal and sensitivity analyses on Typic Plinthiustalfs and Plinthic Kanhaplustults in the Sudan and Northern Guinea Savannas were conducted. Results showed that planting extra-early maize varieties in late July and early maize in mid-June produces the highest grain yields in the Sudan Savanna. In the Northern Guinea Savanna, planting extra-early maize in mid-July and early maize in late July produced the highest grain yields. Delaying planting in both agro-ecologies until mid-August leads to lower yields. Delaying planting to mid-August led to grain yield reductions of 39.2% for extra-early maize and 74.4% for early maize in the Sudan Savanna. In the Northern Guinea Savanna, however, delaying planting to mid-August resulted in yield reductions of 66.9% and 94.3% for extra-early and early maize, respectively. PMID:28702039
Satellite-based assessment of yield variation and its determinants in smallholder African systems
Lobell, David B.
2017-01-01
The emergence of satellite sensors that can routinely observe millions of individual smallholder farms raises possibilities for monitoring and understanding agricultural productivity in many regions of the world. Here we demonstrate the potential to track smallholder maize yield variation in western Kenya, using a combination of 1-m Terra Bella imagery and intensive field sampling on thousands of fields over 2 y. We find that agreement between satellite-based and traditional field survey-based yield estimates depends significantly on the quality of the field-based measures, with agreement highest (R2 up to 0.4) when using precise field measures of plot area and when using larger fields for which rounding errors are smaller. We further show that satellite-based measures are able to detect positive yield responses to fertilizer and hybrid seed inputs and that the inferred responses are statistically indistinguishable from estimates based on survey-based yields. These results suggest that high-resolution satellite imagery can be used to make predictions of smallholder agricultural productivity that are roughly as accurate as the survey-based measures traditionally used in research and policy applications, and they indicate a substantial near-term potential to quickly generate useful datasets on productivity in smallholder systems, even with minimal or no field training data. Such datasets could rapidly accelerate learning about which interventions in smallholder systems have the most positive impact, thus enabling more rapid transformation of rural livelihoods. PMID:28202728
Generalization of dielectric-dependent hybrid functionals to finite systems
Brawand, Nicholas P.; Voros, Marton; Govoni, Marco; ...
2016-10-04
The accurate prediction of electronic and optical properties of molecules and solids is a persistent challenge for methods based on density functional theory. We propose a generalization of dielectric-dependent hybrid functionals to finite systems where the definition of the mixing fraction of exact and semilocal exchange is physically motivated, nonempirical, and system dependent. The proposed functional yields ionization potentials, and fundamental and optical gaps of many, diverse molecular systems in excellent agreement with experiments, including organic and inorganic molecules and semiconducting nanocrystals. As a result, we further demonstrate that this hybrid functional gives the correct alignment between energy levels of the exemplary TTF-TCNQ donor-acceptor system.
Estimating the remaining useful life of bearings using a neuro-local linear estimator-based method.
Ahmad, Wasim; Ali Khan, Sheraz; Kim, Jong-Myon
2017-05-01
Estimating the remaining useful life (RUL) of a bearing is required for maintenance scheduling. While the degradation behavior of a bearing changes during its lifetime, it is usually assumed to follow a single model. In this letter, bearing degradation is modeled by a monotonically increasing function that is globally non-linear and locally linearized. The model is generated using historical data that is smoothed with a local linear estimator. A neural network learns this model and then predicts future levels of vibration acceleration to estimate the RUL of a bearing. The proposed method yields reasonably accurate estimates of the RUL of a bearing at different points during its operational life.
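The two stages described (local linear smoothing of historical degradation data, then forecasting to a failure threshold) can be sketched as follows; NumPy only, with a simple trend extrapolation standing in for the paper's neural network, and with all signal values, the bandwidth, and the threshold invented for illustration.

```python
import numpy as np

def local_linear_smooth(t, y, bandwidth):
    """Local linear estimator: at each point, fit a Gaussian-kernel-
    weighted straight line and take its intercept as the smoothed value."""
    y_hat = np.empty_like(y)
    for i, t0 in enumerate(t):
        w = np.exp(-0.5 * ((t - t0) / bandwidth) ** 2)
        X = np.column_stack([np.ones_like(t), t - t0])
        A = X.T @ (w[:, None] * X)      # weighted normal equations
        b = X.T @ (w * y)
        y_hat[i] = np.linalg.solve(A, b)[0]
    return y_hat

def remaining_useful_life(t, y_hat, threshold):
    """Extrapolate the local trend at the last observation to the
    failure threshold (stand-in for the neural-network forecaster)."""
    slope = (y_hat[-1] - y_hat[-6]) / (t[-1] - t[-6])
    return (threshold - y_hat[-1]) / slope

rng = np.random.default_rng(1)
t = np.linspace(0.0, 100.0, 200)                        # operating hours
y = 0.5 + 0.0004 * t**2 + rng.normal(0, 0.05, t.size)   # vibration level
y_s = local_linear_smooth(t, y, bandwidth=5.0)
rul = remaining_useful_life(t, y_s, threshold=6.0)      # hours remaining
```

The monotonically increasing quadratic stands in for the "globally non-linear, locally linearized" degradation model; any smooth increasing trend would do.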
Blood vessels segmentation of hatching eggs based on fully convolutional networks
NASA Astrophysics Data System (ADS)
Geng, Lei; Qiu, Ling; Wu, Jun; Xiao, Zhitao
2018-04-01
Fully convolutional networks (FCNs), trained end-to-end, pixels-to-pixels, predict a result for each pixel and have been widely used for semantic segmentation. In order to realize blood vessel segmentation of hatching eggs, a method based on FCNs is proposed in this paper. The training datasets are composed of patches extracted from very few images, which augments the data. The network combines lower-layer features with deconvolution to enable precise segmentation. The proposed method avoids the problem that training deep networks requires large-scale samples. Experimental results on hatching eggs demonstrate that this method yields more accurate segmentation outputs than previous approaches. It provides a convenient reference for subsequent fertility detection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennink, Ryan S.; Ferragut, Erik M.; Humble, Travis S.
Modeling and simulation are essential for predicting and verifying the behavior of fabricated quantum circuits, but existing simulation methods are either impractically costly or require an unrealistic simplification of error processes. In this paper, we present a method of simulating noisy Clifford circuits that is both accurate and practical in experimentally relevant regimes. In particular, the cost is weakly exponential in the size and the degree of non-Cliffordness of the circuit. Our approach is based on the construction of exact representations of quantum channels as quasiprobability distributions over stabilizer operations, which are then sampled, simulated, and weighted to yield unbiased statistical estimates of circuit outputs and other observables. As a demonstration of these techniques, we simulate a Steane [[7,1,3
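The quasiprobability construction can be illustrated on a single qubit: write a channel as a signed mixture of the Pauli operations I, X, Y, Z, sample operations with probability proportional to the magnitude of their weights, and reweight each shot by the sign times the 1-norm to recover an unbiased estimate of an observable. The weights below are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Signed ("quasiprobability") weights over the stabilizer operations
# I, X, Y, Z applied to |0><0|; weights sum to 1 (trace preserving)
# but are not all positive, so this is not a physical mixture.
q = np.array([1.2, -0.1, -0.05, -0.05])
z_after = np.array([1.0, -1.0, -1.0, 1.0])   # <Z> after each Pauli on |0>

exact = q @ z_after                          # exact channel expectation

# Monte Carlo: sample op i with probability |q_i|/norm, weight each
# shot by sign(q_i) * norm; the average is an unbiased estimate.
norm = np.abs(q).sum()                       # sampling overhead (1-norm)
rng = np.random.default_rng(7)
idx = rng.choice(4, size=200_000, p=np.abs(q) / norm)
estimate = np.mean(np.sign(q[idx]) * norm * z_after[idx])
```

The 1-norm of the weights controls the sampling overhead, which is the single-qubit analogue of the weak exponential cost in the degree of non-Cliffordness mentioned above.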
NASA Astrophysics Data System (ADS)
Draper, D. C.; Farmer, D. K.; Desyaterik, Y.; Fry, J. L.
2015-11-01
The effect of NO2 on secondary organic aerosol (SOA) formation from ozonolysis of α-pinene, β-pinene, Δ3-carene, and limonene was investigated using a dark flow-through reaction chamber. SOA mass yields were calculated for each monoterpene from ozonolysis with varying NO2 concentrations. Kinetics modeling of the first-generation gas-phase chemistry suggests that differences in observed aerosol yields for different NO2 concentrations are consistent with NO3 formation and subsequent competition between O3 and NO3 to oxidize each monoterpene. α-Pinene was the only monoterpene studied that showed a systematic decrease in both aerosol number concentration and mass concentration with increasing [NO2]. β-Pinene and Δ3-carene produced fewer particles at higher [NO2], but both retained moderate mass yields. Limonene exhibited both higher number concentrations and greater mass concentrations at higher [NO2]. SOA from each experiment was collected and analyzed by HPLC-ESI-MS, enabling comparisons between product distributions for each system. In general, the systems influenced by NO3 oxidation contained more high molecular weight products (MW > 400 amu), suggesting the importance of oligomerization mechanisms in NO3-initiated SOA formation. α-Pinene, which showed anomalously low aerosol mass yields in the presence of NO2, showed no increase in these oligomer peaks, suggesting that lack of oligomer formation is a likely cause of α-pinene's near 0 % yields with NO3. Through direct comparisons of mixed-oxidant systems, this work suggests that NO3 is likely to dominate nighttime oxidation pathways in most regions with both biogenic and anthropogenic influences. Therefore, accurately constraining SOA yields from NO3 oxidation, which vary substantially with the volatile organic compound precursor, is essential in predicting nighttime aerosol production.
Keating, Brendan; Bansal, Aruna T; Walsh, Susan; Millman, Jonathan; Newman, Jonathan; Kidd, Kenneth; Budowle, Bruce; Eisenberg, Arthur; Donfack, Joseph; Gasparini, Paolo; Budimlija, Zoran; Henders, Anjali K; Chandrupatla, Hareesh; Duffy, David L; Gordon, Scott D; Hysi, Pirro; Liu, Fan; Medland, Sarah E; Rubin, Laurence; Martin, Nicholas G; Spector, Timothy D; Kayser, Manfred
2013-05-01
When a forensic DNA sample cannot be associated directly with a previously genotyped reference sample by standard short tandem repeat profiling, the investigation required for identifying perpetrators, victims, or missing persons can be both costly and time consuming. Here, we describe the outcome of a collaborative study using the Identitas Version 1 (v1) Forensic Chip, the first commercially available all-in-one tool dedicated to the concept of developing intelligence leads based on DNA. The chip allows parallel interrogation of 201,173 genome-wide autosomal, X-chromosomal, Y-chromosomal, and mitochondrial single nucleotide polymorphisms for inference of biogeographic ancestry, appearance, relatedness, and sex. The first assessment of the chip's performance was carried out on 3,196 blinded DNA samples of varying quantities and qualities, covering a wide range of biogeographic origin and eye/hair coloration as well as variation in relatedness and sex. Overall, 95 % of the samples (N = 3,034) passed quality checks with an overall genotype call rate >90 %, with variable amounts of recorded trait information available. Predictions of sex, direct match, and first- to third-degree relatedness were highly accurate. Chip-based predictions of biparental continental ancestry were on average ~94 % correct (further support provided by separately inferred patrilineal and matrilineal ancestry). Predictions of eye color were 85 % correct for brown and 70 % correct for blue eyes, and predictions of hair color were 72 % for brown, 63 % for blond, 58 % for black, and 48 % for red hair. From the 5 % of samples (N = 162) with <90 % call rate, 56 % yielded correct continental ancestry predictions while 7 % yielded sufficient genotypes to allow hair and eye color prediction. Our results demonstrate that the Identitas v1 Forensic Chip holds great promise for a wide range of applications including criminal investigations, missing person investigations, and national security purposes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kroniger, K; Herzog, M; Landry, G
2015-06-15
Purpose: We describe and demonstrate a fast analytical tool for prompt-gamma emission prediction based on filter functions applied to the depth dose profile. We present the implementation in a treatment planning system (TPS) of the same algorithm for positron emitter distributions. Methods: The prediction of the desired observable is based on the convolution of filter functions with the depth dose profile. For both prompt-gammas and positron emitters, the results of Monte Carlo simulations (MC) are compared with those of the analytical tool. For prompt-gamma emission from inelastic proton-induced reactions, homogeneous and inhomogeneous phantoms along with patient data are used as irradiation targets of mono-energetic proton pencil beams. The accuracy of the tool is assessed in terms of the shape of the analytically calculated depth profiles and their absolute yields, compared to MC. For the positron emitters, the method is implemented in a research RayStation TPS and compared to MC predictions. Digital phantoms and patient data are used and positron emitter spatial density distributions are analyzed. Results: Calculated prompt-gamma profiles agree with MC within 3% in terms of absolute yield and reproduce the correct shape. Based on an arbitrary reference material and by means of 6 filter functions (one per chemical element), profiles in any other material composed of those elements can be predicted. The TPS-implemented algorithm is accurate enough to enable, via the analytically calculated positron emitter profiles, detection of range differences between the TPS and MC with errors of the order of 1–2 mm. Conclusion: The proposed analytical method predicts prompt-gamma and positron emitter profiles which generally agree with the distributions obtained by a full MC. The implementation of the tool in a TPS shows that reliable profiles can be obtained directly from the dose calculated by the TPS, without the need for a full MC simulation.
Mogaji, Kehinde Anthony; Lim, Hwee San
2017-07-01
This study integrates the application of Dempster-Shafer-driven evidential belief function (DS-EBF) methodology with remote sensing and geographic information system techniques to analyze surface and subsurface data sets for the spatial prediction of groundwater potential in Perak Province, Malaysia. The study used additional data obtained from the records of the groundwater yield rate of approximately 28 bore well locations. The processed surface and subsurface data produced sets of groundwater potential conditioning factors (GPCFs) from which multiple surface hydrologic and subsurface hydrogeologic parameter thematic maps were generated. The bore well location inventories were partitioned randomly into a ratio of 70% (19 wells) for model training to 30% (9 wells) for model testing. Application of the DS-EBF relationship model algorithms to the surface- and subsurface-based GPCF thematic maps and the bore well locations produced two groundwater potential prediction (GPP) maps, based on surface hydrologic and subsurface hydrogeologic characteristics, which established that more than 60% of the study area falls within the moderate-to-high groundwater potential zones and less than 35% within the low-potential zones. The estimated uncertainty values, within the range of 0 to 17% for the predicted potential zones, were quantified using the uncertainty algorithm of the model. The validation results of the GPP maps using the relative operating characteristic curve method yielded 80 and 68% success rates and 89 and 53% prediction rates for the subsurface hydrogeologic factor (SUHF)- and surface hydrologic factor (SHF)-based GPP maps, respectively. The study results revealed that the SUHF-based GPP map delineated groundwater potential zones more accurately than the SHF-based GPP map. However, the low degree of uncertainty of the predicted potential zones established the suitability of both GPP maps for future development of groundwater resources in the area. The overall results proved the efficacy of the data mining model and of geospatial technology for groundwater potential mapping.
Taxi-Out Time Prediction for Departures at Charlotte Airport Using Machine Learning Techniques
NASA Technical Reports Server (NTRS)
Lee, Hanbong; Malik, Waqar; Jung, Yoon C.
2016-01-01
Predicting the taxi-out times of departures accurately is important for improving airport efficiency and takeoff time predictability. In this paper, we apply machine learning techniques to actual traffic data at Charlotte Douglas International Airport for taxi-out time prediction. To find the key factors affecting aircraft taxi times, surface surveillance data is first analyzed. From this data analysis, several variables, including terminal concourse, spot, runway, departure fix, and weight class, are selected for taxi time prediction. Then, various machine learning methods such as linear regression, support vector machines, k-nearest neighbors, random forests, and neural networks are applied to actual flight data. Different traffic flow and weather conditions at Charlotte airport are also taken into account for more accurate prediction. The taxi-out time prediction results show that the linear regression and random forest techniques provide the most accurate predictions in terms of root-mean-square error. We also discuss the operational complexity and uncertainties that make it difficult to predict taxi times accurately.
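As a minimal stand-in for the simplest of the models compared (linear regression), the sketch below fits taxi-out times against a few of the abstract's candidate predictors using ordinary least squares; the data, coefficients, and category codings are synthetic assumptions, not Charlotte surveillance data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Synthetic departures: binary-coded runway and weight class plus a
# queue-length proxy; the true coefficients here are invented.
runway = rng.integers(0, 2, n)                 # which of two runways
heavy  = rng.integers(0, 2, n)                 # weight class indicator
queue  = rng.integers(0, 15, n).astype(float)  # departures ahead in queue
taxi = (9.0 + 1.4 * queue + 2.5 * runway + 1.0 * heavy
        + rng.normal(0, 1.5, n))               # taxi-out minutes

# Ordinary least squares fit and in-sample root-mean-square error.
X = np.column_stack([np.ones(n), queue, runway, heavy])
beta, *_ = np.linalg.lstsq(X, taxi, rcond=None)
rmse = np.sqrt(np.mean((X @ beta - taxi) ** 2))
```

A random forest or neural network would replace the `lstsq` call in a real comparison; the RMSE computed the same way on held-out flights is the metric the paper uses to rank the methods.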
Exploring the knowledge behind predictions in everyday cognition: an iterated learning study.
Stephens, Rachel G; Dunn, John C; Rao, Li-Lin; Li, Shu
2015-10-01
Making accurate predictions about events is an important but difficult task. Recent work suggests that people are adept at this task, making predictions that reflect surprisingly accurate knowledge of the distributions of real quantities. Across three experiments, we used an iterated learning procedure to explore the basis of this knowledge: to what extent is domain experience critical to accurate predictions and how accurate are people when faced with unfamiliar domains? In Experiment 1, two groups of participants, one resident in Australia, the other in China, predicted the values of quantities familiar to both (movie run-times), unfamiliar to both (the lengths of Pharaoh reigns), and familiar to one but unfamiliar to the other (cake baking durations and the lengths of Beijing bus routes). While predictions from both groups were reasonably accurate overall, predictions were inaccurate in the selectively unfamiliar domains and, surprisingly, predictions by the China-resident group were also inaccurate for a highly familiar domain: local bus route lengths. Focusing on bus routes, two follow-up experiments with Australia-resident groups clarified the knowledge and strategies that people draw upon, plus important determinants of accurate predictions. For unfamiliar domains, people appear to rely on extrapolating from (not simply directly applying) related knowledge. However, we show that people's predictions are subject to two sources of error: in the estimation of quantities in a familiar domain and extension to plausible values in an unfamiliar domain. We propose that the key to successful predictions is not simply domain experience itself, but explicit experience of relevant quantities.
Boulay, Christophe; Bollini, Gérard; Legaye, Jean; Tardieu, Christine; Prat-Pradal, Dominique; Chabrol, Brigitte; Jouve, Jean-Luc; Duval-Beaupère, Ginette; Pélissier, Jacques
2014-01-01
Acetabular cup orientation (inclination and anteversion) is a fundamental topic in orthopaedics and depends on pelvis tilt (positional parameter), emphasising the notion of a safe range of pelvis tilt. The hypothesis was that pelvic incidence (morphologic parameter) could yield a more accurate and reliable assessment than pelvis tilt. The aim was to derive a predictive equation of acetabular 3D orientation parameters which were determined by pelvic incidence to include in the model. The second aim was to consider the asymmetry between the right and left acetabula. Twelve pelvic anatomic specimens were measured with an electromagnetic Fastrak system (Polhemus Society) providing 3D position of anatomical landmarks to allow measurement of acetabular and pelvic parameters. Acetabulum and pelvis data were correlated by a Spearman matrix. A robust linear regression analysis provided prediction of acetabulum axes. The orientation of each acetabulum could be predicted by the pelvic incidence. The incidence is correlated with the morphology of the acetabula. The asymmetry of the acetabular roof was correlated with pelvic incidence. This study allowed analysis of the relationships between acetabular orientation and pelvic incidence. Pelvic incidence (morphologic parameter) could determine the safe range of pelvis tilt (positional parameter) for an individual and not a group.
Accurate Prediction of Drug-Induced Liver Injury Using Stem Cell-Derived Populations
Szkolnicka, Dagmara; Farnworth, Sarah L.; Lucendo-Villarin, Baltasar; Storck, Christopher; Zhou, Wenli; Iredale, John P.; Flint, Oliver
2014-01-01
Despite major progress in the knowledge and management of human liver injury, there are millions of people suffering from chronic liver disease. Currently, the only cure for end-stage liver disease is orthotopic liver transplantation; however, this approach is severely limited by organ donation. Alternative approaches to restoring liver function have therefore been pursued, including the use of somatic and stem cell populations. Although such approaches are essential in developing scalable treatments, there is also an imperative to develop predictive human systems that more effectively study and/or prevent the onset of liver disease and decompensated organ function. We used a renewable human stem cell resource, from defined genetic backgrounds, and drove them through developmental intermediates to yield highly active, drug-inducible, and predictive human hepatocyte populations. Most importantly, stem cell-derived hepatocytes displayed equivalence to primary adult hepatocytes, following incubation with known hepatotoxins. In summary, we have developed a serum-free, scalable, and shippable cell-based model that faithfully predicts the potential for human liver injury. Such a resource has direct application in human modeling and, in the future, could play an important role in developing renewable cell-based therapies. PMID:24375539
SU-D-218-05: Material Quantification in Spectral X-Ray Imaging: Optimization and Validation.
Nik, S J; Thing, R S; Watts, R; Meyer, J
2012-06-01
To develop and validate a multivariate statistical method to optimize scanning parameters for material quantification in spectral x-ray imaging. An optimization metric was constructed by extensively sampling the thickness space for the expected number of counts for m (two or three) materials. This resulted in an m-dimensional confidence region of material quantities, e.g. thicknesses. Minimization of the ellipsoidal confidence region leads to the optimization of energy bins. For a given spectrum, the minimum counts required for effective material separation can be determined by predicting the signal-to-noise ratio (SNR) of the quantification. A Monte Carlo (MC) simulation framework using BEAM was developed to validate the metric. Projection data of the m materials was generated and material decomposition was performed for combinations of iodine, calcium, and water by minimizing the z-score between the expected spectrum and binned measurements. The mean square error (MSE) and variance were calculated to measure the accuracy and precision of this approach, respectively. The minimum MSE corresponds to the optimal energy bins in the BEAM simulations. In the optimization metric, this is equivalent to the smallest confidence region. The SNR of the simulated images was also compared to the predictions from the metric. The MSE was dominated by the variance for the given material combinations, which demonstrates accurate material quantification. The BEAM simulations revealed that the optimization of energy bins was accurate to within 1 keV. The SNRs predicted by the optimization metric yielded satisfactory agreement but were expectedly higher for the BEAM simulations due to the inclusion of scattered radiation. The validation showed that the multivariate statistical method provides accurate material quantification, correct location of optimal energy bins, and adequate prediction of image SNR. The BEAM code system is suitable for generating spectral x-ray imaging simulations.
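A toy version of the z-score-based decomposition can be sketched for two energy bins and two materials; the attenuation coefficients, incident counts, and brute-force grid search below are illustrative assumptions rather than the paper's BEAM-validated setup.

```python
import numpy as np

# Illustrative two-bin, two-material model (water, iodine). The
# attenuation coefficients (per cm, per bin) and incident counts
# are invented for this sketch, not measured values.
mu = np.array([[0.20, 0.15],    # water: bin 1, bin 2
               [2.50, 0.80]])   # iodine: bin 1, bin 2
I0 = np.array([1.0e5, 8.0e4])   # incident counts per energy bin

def expected_counts(thickness):
    """Beer-Lambert expected counts in each bin for given thicknesses."""
    return I0 * np.exp(-thickness @ mu)

# Simulate a noisy measurement of 10 cm water + 0.2 cm iodine.
rng = np.random.default_rng(0)
measured = rng.poisson(expected_counts(np.array([10.0, 0.2])))

# Decompose by minimizing the summed squared z-score between measured
# and expected bin counts (Poisson sigma = sqrt(expected)), via a
# grid search standing in for the paper's optimizer.
best, best_z = None, np.inf
for t_w in np.linspace(8.0, 12.0, 81):
    for t_i in np.linspace(0.0, 0.4, 81):
        m = expected_counts(np.array([t_w, t_i]))
        z2 = np.sum((measured - m) ** 2 / m)
        if z2 < best_z:
            best, best_z = (t_w, t_i), z2
```

With well-separated attenuation profiles across the bins (as with iodine's K-edge), the minimum of the z-score surface recovers both thicknesses to within the Poisson noise, mirroring the confidence-region picture above.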
© 2012 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Kirshman, David
A numerical method for the solution of inviscid compressible flow using an array of embedded Cartesian meshes in conjunction with gridless surface boundary conditions is developed. The gridless boundary treatment is implemented by means of a least squares fitting of the conserved flux variables using a cloud of nodes in the vicinity of the surface geometry. The method allows for accurate treatment of the surface boundary conditions using a grid resolution an order of magnitude coarser than required of typical Cartesian approaches. Additionally, the method does not suffer from issues associated with thin body geometry or extremely fine cut cells near the body. Unlike some methods that consider a gridless (or "meshless") treatment throughout the entire domain, multi-grid acceleration can be effectively incorporated and issues associated with global conservation are alleviated. The "gridless" surface boundary condition provides for efficient and simple problem set up since definition of the body geometry is generated independently from the field mesh, and automatically incorporated into the field discretization of the domain. The applicability of the method is first demonstrated for steady flow of single and multi-element airfoil configurations. Using this method, comparisons with traditional body-fitted grid simulations reveal that steady flow solutions can be obtained accurately with minimal effort associated with grid generation. The method is then extended to unsteady flow predictions. In this application, flow field simulations for the prescribed oscillation of an airfoil indicate excellent agreement with experimental data. Furthermore, it is shown that the phase lag associated with shock oscillation is accurately predicted without the need for a deformable mesh. Lastly, the method is applied to the prediction of transonic flutter using a two-dimensional wing model, in which comparisons with moving mesh simulations yield nearly identical results. 
As a result, applicability of the method to transient and vibrating fluid-structure interaction problems is established in which the requirement for a deformable mesh is eliminated.
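As a rough illustration of the gridless boundary idea (a least-squares fit over a node cloud, evaluated at a surface point), the sketch below fits a local linear model with numpy; the 2-D scalar field and cloud coordinates are invented for the example and stand in for the thesis's conserved flux variables:

```python
import numpy as np

def cloud_fit(points, values, query):
    """Least-squares linear fit u(x, y) ~ a + b*x + c*y over a node cloud,
    evaluated at a query (surface) point."""
    A = np.column_stack([np.ones(len(points)), points[:, 0], points[:, 1]])
    coef, *_ = np.linalg.lstsq(A, values, rcond=None)
    return coef[0] + coef[1] * query[0] + coef[2] * query[1]

# Cloud of nodes near a surface point, sampling a linear field u = 2 + 3x - y
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
vals = 2.0 + 3.0 * pts[:, 0] - pts[:, 1]
print(round(cloud_fit(pts, vals, (0.25, 0.75)), 6))  # 2.0, exact for a linear field
```

A linear field is reproduced exactly by the fit, which is the property that lets the boundary condition stay accurate on a coarse field mesh.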
CUFID-query: accurate network querying through random walk based network flow estimation.
Jeong, Hyundoo; Qian, Xiaoning; Yoon, Byung-Jun
2017-12-28
Functional modules in biological networks consist of numerous biomolecules and their complicated interactions. Recent studies have shown that biomolecules in a functional module tend to have similar interaction patterns and that such modules are often conserved across biological networks of different species. As a result, such conserved functional modules can be identified through comparative analysis of biological networks. In this work, we propose a novel network querying algorithm based on the CUFID (Comparative network analysis Using the steady-state network Flow to IDentify orthologous proteins) framework combined with an efficient seed-and-extension approach. The proposed algorithm, CUFID-query, can accurately detect conserved functional modules as small subnetworks in the target network that are expected to perform similar functions to the given query functional module. The CUFID framework was recently developed for probabilistic pairwise global comparison of biological networks, and it has been applied to pairwise global network alignment, where the framework was shown to yield accurate network alignment results. In the proposed CUFID-query algorithm, we adopt the CUFID framework and extend it for local network alignment, specifically to solve network querying problems. First, in the seed selection phase, the proposed method utilizes the CUFID framework to compare the query and the target networks and to predict the probabilistic node-to-node correspondence between the networks. Next, the algorithm selects and greedily extends the seed in the target network by iteratively adding nodes that have frequent interactions with other nodes in the seed network, in a way that the conductance of the extended network is maximally reduced. Finally, CUFID-query removes irrelevant nodes from the querying results based on the personalized PageRank vector for the induced network that includes the fully extended network and its neighboring nodes. 
Through extensive performance evaluation based on biological networks with known functional modules, we show that CUFID-query outperforms the existing state-of-the-art algorithms in terms of prediction accuracy and biological significance of the predictions.
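The greedy, conductance-reducing extension step can be sketched on a toy graph (plain-dictionary adjacency). This is a simplification of CUFID-query's seed-and-extension, omitting the network-flow node correspondence and the PageRank pruning:

```python
def conductance(adj, S):
    """phi(S) = cut(S, complement) / min(vol(S), vol(complement))."""
    S = set(S)
    cut = sum(1 for u in S for v in adj[u] if v not in S)
    vol_S = sum(len(adj[u]) for u in S)
    vol_T = sum(len(adj[u]) for u in adj) - vol_S
    denom = min(vol_S, vol_T)
    return cut / denom if denom else 1.0

def extend_seed(adj, seed, max_size):
    """Greedily add the frontier node that most reduces conductance."""
    S = set(seed)
    while len(S) < max_size:
        frontier = {v for u in S for v in adj[u]} - S
        if not frontier:
            break
        best = min(frontier, key=lambda v: conductance(adj, S | {v}))
        if conductance(adj, S | {best}) >= conductance(adj, S):
            break  # no neighbor improves the cluster quality
        S.add(best)
    return S

# Two triangles joined by one edge; seeding in one triangle recovers it
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(sorted(extend_seed(adj, {0}, 4)))  # [0, 1, 2]
```

The extension stops at the bridge edge, since crossing it would raise the conductance again.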
Siaw, Fei-Lu; Chong, Kok-Keong
2013-01-01
This paper presents a new systematic approach to analyzing all possible array configurations in order to determine the optimal dense-array configuration for concentrator photovoltaic (CPV) systems. The proposed method is fast, simple, reasonably accurate, and very useful as a preliminary study before constructing a dense-array CPV panel. Using measured flux distribution data, each CPV cell's voltage and current values at three critical points (short-circuit, open-circuit, and maximum power point) are determined. From there, an algorithm groups the cells into basic modules. The next step is I-V curve prediction, to find the maximum output power of each array configuration. As a case study, twenty different I-V predictions are made for a prototype of a nonimaging planar concentrator, and the array configuration that yields the highest output power is determined. The result is then verified by assembling and testing an actual dense array on the prototype. It was found that the measured I-V curve closely resembles the simulated I-V prediction, and the measured maximum output power varies by only 1.34%.
Schultz, Peter A.
2016-03-01
For the purposes of making reliable first-principles predictions of defect energies in semiconductors, it is crucial to distinguish between effective-mass-like defects, which cannot be treated accurately with existing supercell methods, and deep defects, for which density functional theory calculations can yield reliable predictions of defect energy levels. The gallium antisite defect Ga_As is often associated with the 78/203 meV shallow double acceptor in Ga-rich gallium arsenide. Within a conceptual framework of level patterns, analyses of structure and spin stabilization can be used within a supercell approach to distinguish localized deep defect states from shallow acceptors such as B_As. This systematic approach determines that the gallium antisite supercell results have signatures inconsistent with an effective-mass state, and that the defect cannot be the 78/203 meV shallow double acceptor. Lastly, the properties of the Ga antisite in GaAs are described: total energy calculations that explicitly map onto asymptotic discrete localized bulk states predict that the Ga antisite is a deep double acceptor with at least one deep donor state.
A Systematic Method of Interconnection Optimization for Dense-Array Concentrator Photovoltaic System
Siaw, Fei-Lu
2013-01-01
This paper presents a new systematic approach to analyzing all possible array configurations in order to determine the optimal dense-array configuration for concentrator photovoltaic (CPV) systems. The proposed method is fast, simple, reasonably accurate, and very useful as a preliminary study before constructing a dense-array CPV panel. Using measured flux distribution data, each CPV cell's voltage and current values at three critical points (short-circuit, open-circuit, and maximum power point) are determined. From there, an algorithm groups the cells into basic modules. The next step is I-V curve prediction, to find the maximum output power of each array configuration. As a case study, twenty different I-V predictions are made for a prototype of a nonimaging planar concentrator, and the array configuration that yields the highest output power is determined. The result is then verified by assembling and testing an actual dense array on the prototype. It was found that the measured I-V curve closely resembles the simulated I-V prediction, and the measured maximum output power varies by only 1.34%. PMID:24453823
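The grouping intuition (wiring similar-current cells in series so that no string is dragged down by its weakest cell) can be illustrated with a toy calculation. The cell values below are hypothetical, and the real method predicts full I-V curves rather than maximum-power points alone:

```python
# Each cell: (Imp, Vmp) at its maximum power point under measured flux
cells = [(2.0, 0.5), (1.5, 0.5), (1.9, 0.5), (1.6, 0.5)]

def series_power(string):
    """A series string is current-limited by its weakest cell."""
    return min(i for i, _ in string) * sum(v for _, v in string)

def config_power(groups):
    """Groups wired in parallel: total power is the sum of string powers
    (ignoring voltage mismatch between strings, for a first-cut ranking)."""
    return sum(series_power(g) for g in groups)

# One long string vs. two strings of current-matched cells
all_series = config_power([cells])
matched = config_power([[cells[0], cells[2]], [cells[1], cells[3]]])
print(round(all_series, 2), round(matched, 2))  # 3.0 vs 3.4
```

Grouping current-matched cells recovers output power, which is why the configuration search over groupings matters.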
Bergen, Silas; Sheppard, Lianne; Kaufman, Joel D.; Szpiro, Adam A.
2016-01-01
Summary Air pollution epidemiology studies are trending towards a multi-pollutant approach. In these studies, exposures at subject locations are unobserved and must be predicted using observed exposures at misaligned monitoring locations. This induces measurement error, which can bias the estimated health effects and affect standard error estimates. We characterize this measurement error and develop an analytic bias correction when using penalized regression splines to predict exposure. Our simulations show that bias from multi-pollutant measurement error can be severe, with the biases for different pollutants acting in opposite directions or being simultaneously positive or negative. Our analytic bias correction combined with a non-parametric bootstrap yields accurate coverage of 95% confidence intervals. We apply our methodology to analyze the association of systolic blood pressure with PM2.5 and NO2 in the NIEHS Sister Study. We find that NO2 confounds the association of systolic blood pressure with PM2.5 and vice versa. Elevated systolic blood pressure was significantly associated with increased PM2.5 and decreased NO2. Correcting for measurement error bias strengthened these associations and widened 95% confidence intervals. PMID:27789915
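The non-parametric bootstrap used for interval coverage can be sketched as a percentile bootstrap on a regression slope. This is a generic illustration, not the paper's spline-plus-bias-correction procedure, and the data are synthetic:

```python
import random

def bootstrap_ci(x, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the OLS slope of y on x."""
    rng = random.Random(seed)
    n = len(x)
    def slope(xs, ys):
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        den = sum((a - mx) ** 2 for a in xs)
        return num / den
    reps = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample pairs with replacement
        reps.append(slope([x[i] for i in idx], [y[i] for i in idx]))
    reps.sort()
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]

x = list(range(20))
y = [2.0 * v + (-1) ** v * 0.5 for v in x]  # true slope 2 plus small alternating noise
lo, hi = bootstrap_ci(x, y)
print(lo <= 2.0 <= hi)  # True: the interval covers the true slope
```

In the paper, the bootstrap is paired with an analytic bias correction so that the interval is centered correctly despite measurement error.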
Towards predicting the encoding capability of MR fingerprinting sequences.
Sommer, K; Amthor, T; Doneva, M; Koken, P; Meineke, J; Börnert, P
2017-09-01
Sequence optimization and appropriate sequence selection remain an unmet need in magnetic resonance fingerprinting (MRF). The main challenge in MRF sequence design is the lack of an appropriate measure of a sequence's encoding capability. To find such a measure, three different candidates for judging the encoding capability have been investigated: local and global dot-product-based measures judging dictionary entry similarity, as well as a Monte Carlo method that evaluates the noise propagation properties of an MRF sequence. The consistency of these measures for different sequence lengths, as well as their capability to predict actual sequence performance in both phantom and in vivo measurements, was analyzed. While the dot-product-based measures yielded inconsistent results for different sequence lengths, the Monte Carlo method was in good agreement with phantom experiments. In particular, the Monte Carlo method could accurately predict the performance of different flip angle patterns in actual measurements. The proposed Monte Carlo method provides an appropriate measure of MRF sequence encoding capability and may be used for sequence optimization. Copyright © 2017 Elsevier Inc. All rights reserved.
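A global dot-product-based measure of the kind investigated here can be sketched as the worst-case normalized inner product between dictionary entries: if two fingerprints are nearly parallel, the sequence cannot distinguish their tissue parameters. The toy fingerprints below are invented, not simulated MRF signals:

```python
import numpy as np

def worst_case_similarity(dictionary):
    """Largest normalized inner product between any two distinct dictionary
    entries (1.0 = two entries are indistinguishable)."""
    D = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    gram = np.abs(D @ D.T)
    np.fill_diagonal(gram, 0.0)  # ignore self-similarity
    return gram.max()

# Toy fingerprints for three (T1, T2) pairs; rows = dictionary entries
dic = np.array([[1.0, 0.2, 0.1],
                [0.1, 1.0, 0.3],
                [0.9, 0.3, 0.2]])
print(round(worst_case_similarity(dic), 3))  # 0.986: two entries nearly collinear
```

The paper's finding is that such similarity measures behave inconsistently across sequence lengths, which is why the noise-propagating Monte Carlo measure is preferred.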
A systematic investigation of computation models for predicting Adverse Drug Reactions (ADRs).
Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong
2014-01-01
Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computational models to obtain general conclusions that can provide useful guidance to construct more effective computational models to predict ADRs. In the current study, the main work is to compare and analyze the performance of existing computational methods to predict ADRs, by implementing and evaluating additional algorithms that were earlier used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that the final formulas of these algorithms could all be converted to linear models in form; based on this finding, we propose a new algorithm, called the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms.
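A weighted-profile scorer in the spirit described (a linear model whose weights are Jaccard similarities between drug feature sets) can be sketched as follows. The drugs, targets, and ADRs below are hypothetical placeholders, and this is a plausible reading of the approach rather than the paper's exact formula:

```python
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def weighted_profile_score(query_targets, known):
    """Score each ADR for a query drug as a Jaccard-weighted vote over
    drugs with known ADR profiles (a linear model in the known profiles)."""
    scores = {}
    for drug, (targets, adrs) in known.items():
        w = jaccard(query_targets, targets)
        for adr in adrs:
            scores[adr] = scores.get(adr, 0.0) + w
    return scores

# Hypothetical drugs described by target sets, with observed ADRs
known = {
    "drugA": ({"t1", "t2", "t3"}, {"nausea", "rash"}),
    "drugB": ({"t4", "t5"}, {"dizziness"}),
}
print(weighted_profile_score({"t1", "t2"}, known))
```

ADRs of drugs whose feature sets overlap the query score highly (here, drugA's ADRs get weight 2/3), while unrelated drugs contribute nothing.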
Development of Dimensionless Surge Response Functions for Hazard Assessment at Panama City, Florida
NASA Astrophysics Data System (ADS)
Taylor, N. R.; Irish, J. L.; Hagen, S. C.; Kaihatu, J. M.; McLaughlin, P. W.
2013-12-01
Reliable and robust methods of extreme value analysis in hurricane surge forecasting are of high importance in the coastal engineering profession. The Joint Probability Method (JPM) has become the preferred statistical method over the Historical Surge Population (HSP) method, due to its ability to give more accurate surge predictions, as demonstrated by Irish et al. in 2011 (J. Geophys. Res.). One disadvantage of this method is its high computational cost; a single location can require hundreds of simulated storms, each needing one thousand computational hours or more to complete. One way of overcoming this issue is to use an interpolating function, called a surge response function, to reduce the required number of simulations to a manageable number. These sampling methods, which use physical scaling laws, have been shown to significantly reduce the number of simulated storms needed for application of the JPM. In 2008, Irish et al. (J. Phys. Oceanogr.) demonstrated that hurricane surge scales primarily as a function of storm size and intensity. Additionally, Song et al. in 2012 (Nat. Hazards) showed that surge response functions incorporating bathymetric variations yield highly accurate surge estimates along the Texas coastline. This study applies the Song et al. model to 73 stations along the open coast, and 273 stations within the bays, in Panama City, Florida. The model performs well for the open coast and bay areas; surge levels at most stations along the open coast were predicted with RMS errors below 0.40 meters, and R2 values at or above 0.80. The R2 values for surge response functions within bays were consistently at or above 0.75. Surge levels at most stations within the North Bay and East Bay were predicted with RMS errors below 0.40 meters; within the West Bay, surge was predicted with RMS errors below 0.52 meters.
Accurately interpolating surge values along the Panama City coast and bays enables efficient use of the JPM model in order to develop reliable probabilistic surge estimates for use in planning and design for hurricane mitigation.
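The goodness-of-fit statistics quoted above (RMS error and R2) reduce to a few lines of arithmetic; the station surge values below are hypothetical, not from the study:

```python
import math

def rmse(obs, pred):
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def r_squared(obs, pred):
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

# Hypothetical surge (m) at one station: JPM simulations vs. SRF interpolation
obs = [1.2, 2.5, 3.1, 4.0, 2.0]
pred = [1.0, 2.7, 3.0, 4.3, 1.8]
print(round(rmse(obs, pred), 2), round(r_squared(obs, pred), 3))  # 0.21 m, 0.951
```

Values like these (RMS error below 0.40 m, R2 above 0.80) are the acceptance thresholds the study reports for the open-coast stations.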
NASA Technical Reports Server (NTRS)
Glotter, Michael J.; Ruane, Alex C.; Moyer, Elisabeth J.; Elliott, Joshua W.
2015-01-01
Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled and observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs. Problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections, but either incompletely or without a ground truth of observed yields that would allow distinguishing errors due to climate inputs from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources (reanalysis, reanalysis that is bias corrected with observed climate, and a control dataset) and compared with observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by non-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that biased precipitation distribution is significant for yields only in arid regions. Some issues persist for all choices of climate inputs: crop yields appear to be oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves.
Evaluating the sensitivity of agricultural model performance to different climate inputs
Glotter, Michael J.; Moyer, Elisabeth J.; Ruane, Alex C.; Elliott, Joshua W.
2017-01-01
Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled to observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs. Problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections, but either incompletely or without a ground truth of observed yields that would allow distinguishing errors due to climate inputs from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely-used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources – reanalysis, reanalysis bias-corrected with observed climate, and a control dataset – and compared to observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by un-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that biased precipitation distribution is significant for yields only in arid regions. However, some issues persist for all choices of climate inputs: crop yields appear oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves. PMID:29097985
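The validation exercise described in both versions of this abstract amounts to scoring each climate input by the error of the resulting yield simulations against observed yields. A minimal sketch (all yield numbers are made up; the real comparison is spatially and temporally resolved):

```python
def rmse(obs, sim):
    return (sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs)) ** 0.5

observed = [9.5, 8.1, 10.2, 7.8, 9.9]          # county maize yields, t/ha (invented)
simulated = {
    "raw_reanalysis":  [11.0, 6.0, 12.5, 9.5, 8.0],
    "bias_corrected":  [9.8, 7.9, 10.6, 7.5, 9.4],
    "observed_precip": [9.6, 8.4, 10.0, 8.0, 9.7],
}
ranking = sorted(simulated, key=lambda k: rmse(observed, simulated[k]))
print(ranking)  # ['observed_precip', 'bias_corrected', 'raw_reanalysis']
```

The study's headline result has this shape: any observation-based precipitation product outranks raw reanalysis.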
Optical diagnosis of malaria infection in human plasma using Raman spectroscopy
NASA Astrophysics Data System (ADS)
Bilal, Muhammad; Saleem, Muhammad; Amanat, Samina Tufail; Shakoor, Huma Abdul; Rashid, Rashad; Mahmood, Arshad; Ahmed, Mushtaq
2015-01-01
We present the prediction of malaria infection in human plasma using Raman spectroscopy. Raman spectra of malaria-infected samples are compared with those of healthy and dengue virus infected ones for disease recognition. Raman spectra were acquired using a laser at 532 nm as an excitation source, and 10 distinct spectral signatures that statistically differentiated malaria from healthy and dengue-infected cases were found. A multivariate regression model has been developed that utilized Raman spectra of 20 malaria-infected, 10 non-malarial febrile, 10 healthy, and 6 dengue-infected samples to optically predict the malaria infection. The model yields a correlation coefficient r2 value of 0.981 between the predicted values and clinically known results of the training samples, and the root mean square error in cross validation was found to be 0.09; both these parameters validated the model. The model was further blindly tested on 30 unknown suspected samples and found to be 86% accurate compared with the clinical results, with the inaccuracy due to three samples that were predicted in the gray region. Standard deviation and root mean square error in prediction for unknown samples were found to be 0.150 and 0.149, which are accepted for the clinical validation of the model.
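The cross-validation step used to validate the regression can be sketched as leave-one-out CV for an ordinary least-squares model. The "spectra" below are synthetic and exactly linear, so the CV error is numerically zero; real Raman data would give a nonzero value like the 0.09 reported, and the paper's multivariate model is not necessarily plain OLS:

```python
import numpy as np

def loocv_rmse(X, y):
    """Leave-one-out cross-validation RMSE of an ordinary least-squares model."""
    errs = []
    n = len(y)
    for i in range(n):
        keep = [j for j in range(n) if j != i]           # hold out sample i
        coef, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        errs.append(float(X[i] @ coef - y[i]))
    return float(np.sqrt(np.mean(np.square(errs))))

# Toy "spectra": 6 samples x 3 intensity features, built from an exact
# linear mixing model so the held-out predictions are perfect
X = np.array([[1.0, 0.9, 0.10], [1.0, 1.1, 0.20], [1.0, 1.0, 0.15],
              [1.0, 0.2, 0.80], [1.0, 0.1, 0.90], [1.0, 0.3, 0.85]])
y = X @ np.array([0.5, 1.0, -2.0])
print(loocv_rmse(X, y) < 1e-6)  # True on noise-free synthetic data
```

With noisy clinical spectra, the same loop produces the nonzero RMSECV that, together with r2, validates the model.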
Tile prediction schemes for wide area motion imagery maps in GIS
NASA Astrophysics Data System (ADS)
Michael, Chris J.; Lin, Bruce Y.
2017-11-01
Wide-area surveillance, traffic monitoring, and emergency management are just several of many applications benefiting from the incorporation of Wide-Area Motion Imagery (WAMI) maps into geographic information systems. Though the use of motion imagery as a GIS base map via the Web Map Service (WMS) standard is not a new concept, effectively streaming imagery is particularly challenging due to its large scale and the multidimensionally interactive nature of clients that use WMS. Ineffective streaming from a server to one or more clients can unnecessarily overwhelm network bandwidth and cause frustrating visualization latency for the user. Seamlessly streaming WAMI through GIS requires good prediction to accurately guess the tiles of the video that will be traversed in the near future. In this study, we present an experimental framework for such prediction schemes by presenting a stochastic interaction model that represents a human user's interaction with a GIS video map. We then propose several algorithms by which the tiles of the stream may be predicted. Results collected both within the experimental framework and using human analyst trajectories show that, though each algorithm thrives under certain constraints, the novel Markovian algorithm yields the best results overall. Furthermore, we argue that the proposed experimental framework is sufficient for the study of these prediction schemes.
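A Markovian tile predictor of the general kind described can be sketched as a first-order transition-count model: learn which tile is usually requested after the current one, and prefetch the most likely successors. The tile IDs and sessions below are invented, and the paper's algorithm may differ in order and state definition:

```python
from collections import Counter, defaultdict

class MarkovTilePredictor:
    """First-order Markov model over tile requests: predict the k tiles
    most often requested immediately after the current tile."""
    def __init__(self):
        self.trans = defaultdict(Counter)

    def observe(self, tile_sequence):
        for a, b in zip(tile_sequence, tile_sequence[1:]):
            self.trans[a][b] += 1  # count transition a -> b

    def predict(self, current, k=2):
        return [t for t, _ in self.trans[current].most_common(k)]

p = MarkovTilePredictor()
# Tile IDs from hypothetical pan/zoom sessions over a WAMI map
p.observe(["A", "B", "C", "B", "C", "D"])
p.observe(["A", "B", "C", "D"])
print(p.predict("B"))  # ['C']: the only observed successor of B
```

The server would prefetch the predicted tiles, trading a little bandwidth for lower perceived latency.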
Effect of Turbulence Models on Two Massively-Separated Benchmark Flow Cases
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.
2003-01-01
Two massively-separated flow cases (the 2-D hill and the 3-D Ahmed body) were computed with several different turbulence models in the Reynolds-averaged Navier-Stokes code CFL3D as part of participation in a turbulence modeling workshop held in Poitiers, France, in October 2002. Overall, results were disappointing, but were consistent with results from other RANS codes and other turbulence models at the workshop. For the 2-D hill case, those turbulence models that predicted the separation location accurately ended up yielding a too-long separation extent downstream. The one model that predicted a shorter separation extent in better agreement with LES data did so only by coincidence: its prediction of earlier reattachment was due to a too-late prediction of the separation location. For the Ahmed body, two slant angles were computed, and CFD performed fairly well for one of the cases (the larger slant angle). Both turbulence models tested in this case were very similar to each other. For the smaller slant angle, CFD predicted massive separation, whereas the experiment showed reattachment about half-way down the center of the face. These test cases serve as reminders that state-of-the-art CFD is currently not a reliable predictor of massively-separated flow physics, and that further validation studies in this area would be beneficial.
Barton, David J; Kumar, Raj G; McCullough, Emily H; Galang, Gary; Arenth, Patricia M; Berga, Sarah L; Wagner, Amy K
2016-01-01
To (1) examine relationships between persistent hypogonadotropic hypogonadism (PHH) and long-term outcomes after severe traumatic brain injury (TBI); and (2) determine whether subacute testosterone levels can predict PHH. Level 1 trauma center at a university hospital. Consecutive sample of men with severe TBI between 2004 and 2009. Prospective cohort study. Post-TBI blood samples were collected during week 1, every 2 weeks until 26 weeks, and at 52 weeks. Serum hormone levels were measured, and individuals were designated as having PHH if 50% or more of samples met criteria for hypogonadotropic hypogonadism. At 6 and 12 months postinjury, we assessed global outcome, disability, functional cognition, depression, and quality of life. We recruited 78 men; median (interquartile range) age was 28.5 (22-42) years. Thirty-four patients (44%) had PHH during the first year postinjury. Multivariable regression, controlling for age, demonstrated PHH status predicted worse global outcome scores, more disability, and reduced functional cognition at 6 and 12 months post-TBI. Two-step testosterone screening for PHH at 12 to 16 weeks postinjury yielded a sensitivity of 79% and specificity of 100%. PHH status in men predicts poor outcome after severe TBI, and PHH can accurately be predicted at 12 to 16 weeks.
Barton, David J.; Kumar, Raj G.; McCullough, Emily H.; Galang, Gary; Arenth, Patricia M.; Berga, Sarah L.; Wagner, Amy K.
2015-01-01
Objective (1) Examine relationships between persistent hypogonadotropic hypogonadism (PHH) and long-term outcomes after severe traumatic brain injury (TBI); (2) determine if sub-acute testosterone levels can predict PHH. Setting Level 1 trauma center at a university hospital. Participants Consecutive sample of men with severe TBI between 2004 and 2009. Design Prospective cohort study. Main Measures Post-TBI blood samples were collected during week 1, every 2 weeks until 26 weeks, and at 52 weeks. Serum hormone levels were measured, and individuals were designated as having PHH if ≥50% of samples met criteria for hypogonadotropic hypogonadism. At 6 and 12 months post-injury, we assessed global outcome, disability, functional cognition, depression, and quality-of-life. Results We recruited 78 men; median (IQR) age was 28.5 (22–42) years. 34 patients (44%) had PHH during the first year post-injury. Multivariable regression, controlling for age, demonstrated PHH status predicted worse global outcome scores, more disability, and reduced functional cognition at 6 and 12 months post-TBI. Two-step testosterone screening for PHH at 12–16 weeks post-injury yielded a sensitivity of 79% and specificity of 100%. Conclusion PHH status in men predicts poor outcome after severe TBI, and PHH can accurately be predicted at 12–16 weeks. PMID:26360007
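The screening performance reported in both versions of this abstract reduces to confusion-matrix arithmetic. The counts below are hypothetical and merely chosen to echo the reported 79% sensitivity and 100% specificity:

```python
def screen_metrics(results):
    """results: list of (screen_positive, truly_phh) pairs."""
    tp = sum(1 for s, d in results if s and d)
    fn = sum(1 for s, d in results if not s and d)
    tn = sum(1 for s, d in results if not s and not d)
    fp = sum(1 for s, d in results if s and not d)
    return tp / (tp + fn), tn / (tn + fp)  # sensitivity, specificity

# Hypothetical two-step testosterone screen vs. serial-sampling diagnosis
data = [(True, True)] * 11 + [(False, True)] * 3 + [(False, False)] * 14
sens, spec = screen_metrics(data)
print(round(sens, 2), spec)  # 0.79 1.0
```

A specificity of 1.0 means no false positives: every man flagged by the screen truly had PHH, at the cost of missing some cases (the false negatives behind the 79%).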
Identification of hydraulic conductivity structure in sand and gravel aquifers: Cape Cod data set
Eggleston, J.R.; Rojstaczer, S.A.; Peirce, J.J.
1996-01-01
This study evaluates commonly used geostatistical methods to assess reproduction of hydraulic conductivity (K) structure and sensitivity under limiting amounts of data. Extensive conductivity measurements from the Cape Cod sand and gravel aquifer are used to evaluate two geostatistical estimation methods, conditional mean as an estimate and ordinary kriging, and two stochastic simulation methods, simulated annealing and sequential Gaussian simulation. Our results indicate that for relatively homogeneous sand and gravel aquifers such as the Cape Cod aquifer, neither estimation methods nor stochastic simulation methods give highly accurate point predictions of hydraulic conductivity despite the high density of collected data. Although the stochastic simulation methods yielded higher errors than the estimation methods, the stochastic simulation methods yielded better reproduction of the measured ln(K) distribution and better reproduction of local contrasts in ln(K). The inability of kriging to reproduce high ln(K) values, as reaffirmed by this study, provides strong motivation for choosing stochastic simulation methods to generate conductivity fields when performing fine-scale contaminant transport modeling. Results also indicate that estimation error is relatively insensitive to the number of hydraulic conductivity measurements so long as more than a threshold number of data are used to condition the realizations. This threshold occurs for the Cape Cod site when there are approximately three conductivity measurements per integral volume. The lack of improvement with additional data suggests that although fine-scale hydraulic conductivity structure is evident in the variogram, it is not accurately reproduced by geostatistical estimation methods.
If the Cape Cod aquifer spatial conductivity characteristics are indicative of other sand and gravel deposits, then the results on predictive error versus data collection obtained here have significant practical consequences for site characterization. Heavily sampled sand and gravel aquifers, such as Cape Cod and Borden, may have large amounts of redundant data, while in more common real world settings, our results suggest that denser data collection will likely improve understanding of permeability structure.
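Ordinary kriging, one of the estimation methods evaluated, can be sketched in 1-D with an assumed exponential covariance model; the sill, range, and ln(K) values below are illustrative, not Cape Cod parameters:

```python
import numpy as np

def ordinary_krige(xs, zs, x0, sill=1.0, rng=10.0):
    """1-D ordinary kriging with exponential covariance C(h) = sill*exp(-|h|/range).
    Solves the kriging system with the unbiasedness (sum-of-weights = 1) constraint."""
    n = len(xs)
    cov = lambda h: sill * np.exp(-np.abs(h) / rng)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(np.subtract.outer(xs, xs))  # data-data covariances
    A[n, n] = 0.0
    b = np.append(cov(xs - x0), 1.0)            # data-target covariances + constraint
    w = np.linalg.solve(A, b)[:n]
    return float(w @ zs)

xs = np.array([0.0, 5.0, 12.0, 20.0])
zs = np.array([-4.2, -3.1, -5.0, -3.8])   # e.g. ln(K) measurements along a transect
print(round(ordinary_krige(xs, zs, 5.0), 6))  # -3.1: kriging is an exact interpolator
```

Exact interpolation at the data points is also the root of the smoothing problem the study notes: between data, the kriged field regresses toward the mean and suppresses the high ln(K) extremes, which is why stochastic simulation reproduces local contrasts better.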
Growth and yield in Eucalyptus globulus
James A. Rinehart; Richard B. Standiford
1983-01-01
A study of the major Eucalyptus globulus stands throughout California conducted by Woodbridge Metcalf in 1924 provides a complete and accurate data set for generating variable site-density yield models. Two models were developed using linear regression techniques. Model I depicts a linear relationship between age and yield best used for stands between five and fifteen...
Application of a rising plate meter to estimate forage yield on dairy farms in Pennsylvania
USDA-ARS?s Scientific Manuscript database
Accurately assessing pasture forage yield is necessary for producers who want to budget feed expenses and make informed pasture management decisions. Clipping and weighing forage from a known area is a direct method to measure pasture forage yield, however it is time consuming. The rising plate mete...
Adjusting slash pine growth and yield for silvicultural treatments
Stephen R. Logan; Barry D. Shiver
2006-01-01
With intensive silvicultural treatments such as fertilization and competition control now commonplace in today's slash pine (Pinus elliottii Engelm.) plantations, a method to adjust current growth and yield models is required to accurately account for yield increases due to these practices. Some commonly used ad-hoc methods, such as raising site...
From the Lab to the Model: Using Historical Data to Forecast and Understand Harmful Algal Blooms
NASA Astrophysics Data System (ADS)
Doherty, O. M.; Gobler, C.; Hattenrath-Lehmann, T. K.; Griffith, A. W.; Davis, T. W.; Kang, Y.
2017-12-01
Ocean warming has expanded and shifted the niche of harmful algal blooms (HABs) in oceans and lakes globally. There is significant interest in using global climate models (GCMs) to predict future and ongoing shifts in HABs; however, it is unclear whether our current understanding of HAB response to changing environmental conditions allows for sufficiently accurate predictions of HAB growth. Here we present an approach which uses resampling in conjunction with a meta-analysis of lab experiments to create robust and resilient models of HAB growth. Laboratory experiments yield a wide range of temperature growth rate responses and, as such, care must be taken to accurately convey the uncertainty of the relationship into any statistical model. Using high resolution sea surface temperature data, we produce probabilistic hindcasts of HAB growth rates and seasonal durations and compare them to historical observations of HABs. Results from three studies will be presented: (1) showing expansion of the niche of and growth potential of Alexandrium fundyense and Dinophysis acuminata in the North Atlantic and North Pacific, (2) identifying shifts in the seasonality of and increases in growth potential of Cochlodinium polykrikoides in Long Island Sound and Chesapeake Bay, and (3) reconstructing historical growth rates of multiple HAB species in Lake Erie. We conclude that warming ocean and lake temperatures are an important factor facilitating the intensification of HABs and thus contribute to an expanding human health threat. Further, the success of this approach suggests that these ground-truthed and experimentally constrained statistical models can be used as a basis for HAB predictions in GCMs.
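The resampling idea (bootstrap the lab measurements, refit the temperature-growth curve, and carry the spread into probabilistic predictions) can be sketched as follows. The quadratic response shape and the lab values are assumptions for illustration, not the presenters' fitted model:

```python
import random
import numpy as np

def fit_quadratic(data):
    """Least-squares quadratic growth-rate curve r(T)."""
    T = np.array([t for t, _ in data], dtype=float)
    r = np.array([g for _, g in data], dtype=float)
    coef = np.polyfit(T, r, 2)
    return lambda x: float(np.polyval(coef, x))

def resampled_predictions(data, temp, n=500, seed=1):
    """Bootstrap the lab data, refit each replicate, and return the
    (2.5%, 50%, 97.5%) quantiles of predicted growth rate at temp."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n):
        sample = [rng.choice(data) for _ in data]   # resample with replacement
        preds.append(fit_quadratic(sample)(temp))
    preds.sort()
    return preds[int(0.025 * n)], preds[int(0.5 * n)], preds[int(0.975 * n) - 1]

# Hypothetical lab measurements: (temperature C, growth rate per day)
lab = [(10, 0.05), (15, 0.20), (20, 0.35), (25, 0.40), (30, 0.30), (34, 0.10)]
lo, med, hi = resampled_predictions(lab, 22.0)
print(lo <= med <= hi)  # True: an ordered probabilistic band, not a point estimate
```

Driving such a band with high-resolution temperature records is what turns scattered lab responses into probabilistic hindcasts of bloom growth and season length.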
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yun (genliyun@126.com); Cui, Wan-Zhao (cuiwanzhao@126.com); Wang, Hong-Guang
2015-05-15
Effects of the secondary electron emission (SEE) phenomenon of metal surfaces on the multipactor analysis of microwave components are investigated numerically and experimentally in this paper. Both the secondary electron yield (SEY) and the emitted energy spectrum measurements are performed on silver plated samples for accurate description of the SEE phenomenon. A phenomenological probabilistic model based on SEE physics is utilized and fitted accurately to the measured SEY and emitted energy spectrum of the conditioned surface material of microwave components. Specifically, the phenomenological probabilistic model is extended mathematically to the low primary-energy end (below 20 eV), since no accurate measurement data can be obtained there. Embedding the phenomenological probabilistic model into the Electromagnetic Particle-In-Cell (EM-PIC) method, the electronic resonant multipacting in microwave components can be tracked and hence the multipactor threshold can be predicted. The threshold prediction error of the transformer and the coaxial filter is 0.12 dB and 1.5 dB, respectively. Simulation results demonstrate that the discharge threshold is strongly dependent on the SEYs and the energy spectrum in the low energy end (lower than 50 eV). Multipacting simulation results agree quite well with experiments in practical components, and the phenomenological probabilistic model fits both the SEY and the emission energy spectrum better than the traditionally used model and distribution. The EM-PIC simulation method with the phenomenological probabilistic model for the surface collision simulation has been demonstrated for predicting the multipactor threshold in metal components for space application.
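As a stand-in for the fitted SEY curve, a Vaughan-type empirical model is sketched below; the paper uses its own phenomenological probabilistic model, and the parameter values here (peak yield, peak energy, threshold) are illustrative rather than the measured silver-plating values:

```python
import math

def vaughan_sey(E, delta_max=2.2, E_max=165.0, E0=12.5):
    """Vaughan-type empirical secondary electron yield curve delta(E).
    Parameters: peak yield delta_max at impact energy E_max; no emission below E0."""
    if E <= E0:
        return 0.0
    v = (E - E0) / (E_max - E0)
    k = 0.62 if v < 1.0 else 0.25  # different shape below/above the peak
    return delta_max * (v * math.exp(1.0 - v)) ** k

print(round(vaughan_sey(165.0), 2))  # 2.2: peak yield at E_max
# The delta = 1 crossover at the low-energy end matters most for multipactor onset:
print(vaughan_sey(25.0) < 1.0 < vaughan_sey(60.0))  # True
```

This is why the paper emphasizes the low-energy end (below 50 eV): where the yield crosses unity controls whether the electron avalanche grows or dies, and hence the predicted discharge threshold.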
NASA Astrophysics Data System (ADS)
Greenway, D. P.; Hackett, E.
2017-12-01
Under certain atmospheric refractivity conditions, propagated electromagnetic (EM) waves can become trapped between the surface and the bottom of the atmosphere's mixed layer, which is referred to as surface duct propagation. The ability to predict the presence of these surface ducts offers substantial benefits to users and developers of sensing technologies and communication systems, because ducts significantly influence the performance of these systems. However, directly measuring or modeling a surface ducting layer is challenging due to the high spatial resolution and large spatial coverage needed to make accurate refractivity estimates for EM propagation; thus, inverse methods have become an increasingly popular way of determining atmospheric refractivity. This study uses data from the Coupled Ocean/Atmosphere Mesoscale Prediction System developed by the Naval Research Laboratory and instrumented helicopter (helo) measurements taken during the Wallops Island Field Experiment to evaluate the use of ensemble forecasts in refractivity inversions. Helo measurements and ensemble forecasts are optimized to a parametric refractivity model, and three experiments are performed to evaluate whether incorporation of ensemble forecast data aids in more timely and accurate inverse solutions using genetic algorithms. The results suggest that using optimized ensemble members as an initial population for the genetic algorithms generally enhances the accuracy and speed of the inverse solution; however, use of the ensemble data to restrict the parameter search space yields mixed results. Inaccurate results are related to the parameterization of the ensemble members' refractivity profiles and the subsequent extraction of the parameter ranges to limit the search space.
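Seeding a genetic algorithm's initial population with optimized ensemble members can be sketched with a deliberately simple real-coded GA; the two-parameter "refractivity" cost function and all values are toy assumptions, not the study's parametric duct model:

```python
import random

def ga_minimize(cost, bounds, init_pop=(), pop_size=20, gens=60, seed=3):
    """Simple elitist real-coded GA; init_pop lets ensemble-derived parameter
    vectors replace some of the random initial individuals."""
    rng = random.Random(seed)
    pop = [list(p) for p in init_pop]
    pop += [[rng.uniform(lo, hi) for lo, hi in bounds]
            for _ in range(pop_size - len(pop))]
    for _ in range(gens):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]          # elitism: best half survives
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            # blend crossover plus bounds-scaled Gaussian mutation
            children.append([min(max((x + y) / 2 + rng.gauss(0, 0.02 * (hi - lo)),
                                     lo), hi)
                             for (x, y), (lo, hi) in zip(zip(a, b), bounds)])
        pop = parents + children
    return min(pop, key=cost)

# Toy "refractivity" cost: distance to true duct parameters (height, gradient)
truth = [120.0, -0.3]
cost = lambda p: (p[0] - truth[0]) ** 2 + 100 * (p[1] - truth[1]) ** 2
bounds = [(0.0, 300.0), (-1.0, 0.0)]
seeds = [[118.0, -0.28], [125.0, -0.35]]       # hypothetical optimized ensemble members
best = ga_minimize(cost, bounds, init_pop=seeds)
print(cost(best) <= cost(seeds[0]))  # True: elitism never discards the best candidate
```

Because the seeds already sit near plausible duct parameters, the search starts from informed guesses rather than uniform random draws, which is the mechanism behind the reported speed and accuracy gains.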
Efficient Third Harmonic Generation for Wind Lidar Applications
NASA Technical Reports Server (NTRS)
Mordaunt, David W.; Cheung, Eric C.; Ho, James G.; Palese, Stephen P.
1998-01-01
The characterization of atmospheric winds on a global basis is a key parameter required for accurate weather prediction. The use of a space based lidar system for remote measurement of wind speed would provide detailed and highly accurate data for future weather prediction models. This paper reports the demonstration of efficient third harmonic conversion of a 1 micrometer laser to provide an ultraviolet (UV) source suitable for a wind lidar system based on atmospheric molecular scattering. Although infrared based lidars using aerosol scattering have been demonstrated to provide accurate wind measurement, a UV based system using molecular or Rayleigh scattering will provide accurate global wind measurements, even in those areas of the atmosphere where the aerosol density is too low to yield good infrared backscatter signals. The overall objective of this work is to demonstrate the maturity of the laser technology and its suitability for a near term flight aboard the space shuttle. The laser source is based on diode-pumped solid-state laser technology which has been extensively demonstrated at TRW in a variety of programs and internal development efforts. The pump laser used for the third harmonic demonstration is a breadboard system, designated the Laser for Risk Reduction Experiments (LARRE), which has been operating regularly for over 5 years. The laser technology has been further refined in an engineering model designated as the Compact Advanced Pulsed Solid-State Laser (CAPSSL), in which the laser head was packaged into an 8 x 8 x 18 inch volume with a weight of approximately 61 pounds. The CAPSSL system is a ruggedized configuration suitable for typical military applications. The LARRE and CAPSSL systems are based on Nd:YAG with an output wavelength of 1064 nm. The current work proves the viability of converting the Nd:YAG fundamental to the third harmonic wavelength at 355 nm for use in a direct detection wind lidar based on atmospheric Rayleigh scattering.
Liu, X. Sherry; Wang, Ji; Zhou, Bin; Stein, Emily; Shi, Xiutao; Adams, Mark; Shane, Elizabeth; Guo, X. Edward
2013-01-01
While high-resolution peripheral quantitative computed tomography (HR-pQCT) has advanced clinical assessment of trabecular bone microstructure, nonlinear microstructural finite element (μFE) prediction of yield strength by HR-pQCT voxel model is impractical for clinical use due to its prohibitively high computational costs. The goal of this study was to develop an efficient HR-pQCT-based plate and rod (PR) modeling technique to fill the unmet clinical need for fast bone strength estimation. By using individual trabecula segmentation (ITS) technique to segment the trabecular structure into individual plates and rods, a patient-specific PR model was implemented by modeling each trabecular plate with multiple shell elements and each rod with a beam element. To validate this modeling technique, predictions by HR-pQCT PR model were compared with those of the registered high resolution μCT voxel model of 19 trabecular sub-volumes from human cadaveric tibiae samples. Both Young’s modulus and yield strength of HR-pQCT PR models strongly correlated with those of μCT voxel models (r² = 0.91 and 0.86). Notably, the HR-pQCT PR models achieved major reductions in element number (>40-fold) and CPU time (>1,200-fold). Then, we applied PR model μFE analysis to HR-pQCT images of 60 postmenopausal women with (n=30) and without (n=30) a history of vertebral fracture. HR-pQCT PR model revealed significantly lower Young’s modulus and yield strength at the radius and tibia in fracture subjects compared to controls. Moreover, these mechanical measurements remained significantly lower in fracture subjects at both sites after adjustment for aBMD T-score at the ultradistal radius or total hip. In conclusion, we validated a novel HR-pQCT PR model of human trabecular bone against μCT voxel models and demonstrated its ability to discriminate vertebral fracture status in postmenopausal women. 
This accurate nonlinear μFE prediction of HR-pQCT PR model, which requires only seconds of desktop computer time, has tremendous promise for clinical assessment of bone strength. PMID:23456922
An Investigation into the Relationship Between Distillate Yield and Stable Isotope Fractionation
NASA Astrophysics Data System (ADS)
Sowers, T.; Wagner, A. J.
2016-12-01
Recent breakthroughs in laser spectrometry have allowed for faster, more efficient analyses of stable isotopic ratios in water samples. Commercially available instruments from Los Gatos Research and Picarro allow users to quickly analyze a wide range of samples, from seawater to groundwater, with accurate isotope ratios of D/H to within ± 0.2 ‰ and 18O/16O to within ± 0.03 ‰. While these instruments have increased the efficiency of stable isotope laboratories, they come with some major limitations, such as not being able to analyze hypersaline waters. The Los Gatos Research Liquid Water Isotope Analyzer (LWIA) can accurately and consistently measure the stable isotope ratios in waters with salinities ranging from 0 to 4 grams per liter (0 to 40 parts per thousand). In order to analyze water samples with salinities greater than 4 grams per liter, however, it was necessary to develop a consistent method through which to reduce salinity while causing as little fractionation as possible. Using a consistent distillation method, predictable fractionation of δ18O and δ2H values was found to occur. This fractionation occurs according to a linear relationship with respect to the percent yield of the water in the sample. Using this method, samples with high salinity can be analyzed using laser spectrometry instruments, thereby enabling laboratories with Los Gatos or Picarro instruments to analyze those samples in house without having to dilute them using labor-intensive in-house standards or expensive premade standards.
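The linear yield-fractionation relationship described above implies a simple correction scheme: fit the measured offsets of distilled standards against percent yield, then subtract the predicted offset from samples. A minimal sketch; the helper functions, the slope, and the delta values are illustrative assumptions, not the authors' calibration:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept of y against x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def correct_delta(delta_measured, pct_yield, slope, intercept):
    """Remove the yield-dependent distillation offset, assuming
    offset(yield) = slope * pct_yield + intercept (per mil)."""
    return delta_measured - (slope * pct_yield + intercept)
```

For example, if standards distilled to 50-90% yield showed offsets following the hypothetical line 0.05 * yield - 4 per mil, a sample measured at -7.5 per mil at 70% yield would be corrected back to -7.0 per mil.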
Hrabok, Marianne; Brooks, Brian L; Fay-McClymont, Taryn B; Sherman, Elisabeth M S
2014-01-01
The purpose of this article was to investigate the accuracy of the WISC-IV short forms in estimating Full Scale Intelligence Quotient (FSIQ) and General Ability Index (GAI) in pediatric epilepsy. One hundred and four children with epilepsy completed the WISC-IV as part of a neuropsychological assessment at a tertiary-level children's hospital. The clinical accuracy of eight short forms was assessed in two ways: (a) accuracy within ±5 index points of FSIQ and (b) the clinical classification rate according to Wechsler conventions. The sample was further subdivided into low FSIQ (≤ 80) and high FSIQ (> 80). All short forms were significantly correlated with FSIQ. Seven-subtest (Crawford et al. [2010] FSIQ) and 5-subtest (BdSiCdVcLn) short forms yielded the highest clinical accuracy rates (77%-89%). Overall, a 2-subtest (VcMr) short form yielded the lowest clinical classification rates for FSIQ (35%-63%). The short form yielding the most accurate estimate of GAI was VcSiMrBd (73%-84%). Short forms show promise as useful estimates. The 7-subtest (Crawford et al., 2010) and 5-subtest (BdSiVcLnCd) short forms yielded the most accurate estimates of FSIQ. VcSiMrBd yielded the most accurate estimate of GAI. Clinical recommendations are provided for use of short forms in pediatric epilepsy.
Remaining dischargeable time prediction for lithium-ion batteries using unscented Kalman filter
NASA Astrophysics Data System (ADS)
Dong, Guangzhong; Wei, Jingwen; Chen, Zonghai; Sun, Han; Yu, Xiaowei
2017-10-01
To overcome range anxiety, one important strategy is to accurately predict the range or dischargeable time of the battery system. To accurately predict the remaining dischargeable time (RDT) of a battery, an RDT prediction framework based on accurate battery modeling and state estimation is presented in this paper. Firstly, a simplified linearized equivalent-circuit model is developed to simulate the dynamic characteristics of a battery. Then, an online recursive least-squares method and an unscented Kalman filter are employed to estimate the system matrices and state of charge (SOC) at every prediction point. In addition, a discrete wavelet transform technique is employed to capture statistical information about the past dynamics of the input current, which is utilized to predict future battery currents. Finally, the RDT can be predicted based on the battery model, the SOC estimation results, and the predicted future battery currents. The performance of the proposed methodology has been verified on a lithium-ion battery cell. Experimental results indicate that the proposed method provides accurate SOC and parameter estimates, and the predicted RDT can help alleviate range anxiety.
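The online parameter-estimation step can be illustrated with a generic recursive least-squares (RLS) update with a forgetting factor. This is a textbook sketch assuming a linear-in-parameters model y = θᵀφ; the regressor φ is left abstract rather than tied to the paper's specific equivalent-circuit formulation:

```python
def rls_step(theta, P, phi, y, lam=0.98):
    """One RLS update with forgetting factor lam.
    theta: current parameter estimates, P: covariance matrix
    (list of lists), phi: regressor vector, y: new measurement."""
    n = len(phi)
    Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(phi[i] * Pphi[i] for i in range(n))
    K = [p / denom for p in Pphi]                     # gain vector
    err = y - sum(t * f for t, f in zip(theta, phi))  # prediction error
    theta = [t + k * err for t, k in zip(theta, K)]
    P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(n)]
         for i in range(n)]                           # covariance update
    return theta, P
```

Older data are discounted by lam at every step, so slowly drifting battery parameters can be tracked online rather than re-fit from scratch.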
Individual differences in transcranial electrical stimulation current density
Russell, Michael J; Goodman, Theodore; Pierson, Ronald; Shepherd, Shane; Wang, Qiang; Groshong, Bennett; Wiley, David F
2013-01-01
Transcranial electrical stimulation (TCES) is effective in treating many conditions, but it has not been possible to accurately forecast current density within the complex anatomy of a given subject's head. We sought to predict and verify TCES current densities and determine the variability of these current distributions in patient-specific models based on magnetic resonance imaging (MRI) data. Two experiments were performed. The first experiment estimated conductivity from MRIs and compared the current density results against actual measurements from the scalp surface of 3 subjects. In the second experiment, virtual electrodes were placed on the scalps of 18 subjects to model simulated current densities with 2 mA of virtually applied stimulation. This procedure was repeated for 4 electrode locations. Current densities were then calculated for 75 brain regions. Comparison of modeled and measured external current in experiment 1 yielded a correlation of r = .93. In experiment 2, modeled individual differences were greatest near the electrodes (ten-fold differences were common), but simulated current was found in all regions of the brain. Sites that were distant from the electrodes (e.g. hypothalamus) typically showed two-fold individual differences. MRI-based modeling can effectively predict current densities in individual brains. Significant variation occurs between subjects with the same applied electrode configuration. Individualized MRI-based modeling should be considered in place of the 10-20 system when accurate TCES is needed. PMID:24285948
Predictive validity of curriculum-based measurement and teacher ratings of academic achievement.
Kettler, Ryan J; Albers, Craig A
2013-08-01
Two alternative universal screening approaches to identify students with early learning difficulties were examined, along with a combination of these approaches. These approaches, consisting of (a) curriculum-based measurement (CBM) and (b) teacher ratings using Performance Screening Guides (PSGs), served as predictors of achievement tests in reading and mathematics. Participants included 413 students in grades 1, 2, and 3 in Tennessee (n=118) and Wisconsin (n=295) who were divided into six subsamples defined by grade and state. Reading and mathematics achievement tests with established psychometric properties were used as criteria within a concurrent and predictive validity framework. Across both achievement areas, CBM probes shared more variance with criterion measures than did teacher ratings, although teacher ratings added incremental validity among most subsamples. PSGs tended to be more accurate for identifying students in need of assistance at a 1-month interval, whereas CBM probes were more accurate at a 6-month interval. Teachers indicated that (a) false negatives are more problematic than are false positives, (b) both screening methods are useful for identifying early learning difficulties, and (c) both screening methods are useful for identifying students in need of interventions. Collectively, these findings suggest that the two types of measures, when used together, yield valuable information about students who need assistance in reading and mathematics. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
(abstract) Line Mixing Behavior of Hydrogen-Broadened Ammonia Under Jovian Atmospheric Conditions
NASA Technical Reports Server (NTRS)
Spilker, Thomas R.
1994-01-01
Laboratory spectral data reported last year have been used to investigate the line mixing behavior of hydrogen-broadened ammonia inversion lines. The data show that broadening parameters appearing in the modified Ben-Reuven opacity formalism of Berge and Gulkis (1976) cannot maintain constant values over pressure ranges that include low to moderate pressures and high pressures. Nor can they change drastically in value, as in the Spilker (1990) revision of the Berge and Gulkis formalism. It has long been recognized that at low pressures, less than about 1 bar of a Jovian atmospheric mixture, a VVW formalism yields more accurate predictions of ammonia opacity than Ben-Reuven formalisms, while at higher pressures the Ben-Reuven formalisms are more accurate. Since the Ben-Reuven lineshape collapses to a VVW lineshape in the low-pressure limit, this low-pressure inaccuracy of the Ben-Reuven formalisms is surprising. By incorporating this pressure-dependent behavior, a new formalism is produced that is more accurate than previous formalisms, particularly in the critical 'transition region' from 0.5 to 2 bars, and that can be used without discontinuity from pressures of zero to hundreds of bars. The new formalism will be useful in such applications as interpretation of radio astronomical and radio occultation data on giant planet atmospheres, and radiative transfer modeling of those atmospheres.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, Corey; Holmes, Joshua; Nibler, Joseph W.
2013-05-16
Combined high-resolution spectroscopic, electron-diffraction, and quantum theoretical methods are particularly advantageous for small molecules of high symmetry and can yield accurate structures that reveal subtle effects of electron delocalization on molecular bonds. The smallest of the radialene compounds, trimethylenecyclopropane, [3]-radialene, has been synthesized and examined in the gas phase by these methods. The first high-resolution infrared spectra have been obtained for this molecule of D3h symmetry, leading to an accurate B0 rotational constant value of 0.1378629(8) cm-1, within 0.5% of the value obtained from electronic structure calculations (density functional theory (DFT) B3LYP/cc-pVTZ). This result is employed in an analysis of electron-diffraction data to obtain the rz bond lengths (in Å): C-H = 1.072 (17), C-C = 1.437 (4), and C=C = 1.330 (4). The analysis does not lead to an accurate value of the HCH angle; however, from comparisons of theoretical and experimental angles for similar compounds, the theoretical prediction of 117.5° is believed to be reliable to within 2°. The effect of electron delocalization in radialene is to reduce the single C-C bond length by 0.07 Å compared to that in cyclopropane.
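As a quick consistency check, a rotational constant like the B0 above converts to a moment of inertia through the standard rigid-rotor relation B = h / (8π²cI). A small sketch; the asserted magnitude is only an approximate sanity range, not a value from the paper:

```python
import math

H = 6.62607015e-34      # Planck constant, J s
C_CM = 2.99792458e10    # speed of light, cm/s

def moment_of_inertia(B_cm1):
    """Moment of inertia in kg m^2 from a rotational constant in cm^-1,
    using B = h / (8 * pi**2 * c * I) solved for I."""
    return H / (8 * math.pi ** 2 * C_CM * B_cm1)
```

For B0 = 0.1378629 cm^-1 this gives roughly 2.0e-45 kg m^2, a plausible magnitude for a molecule of this size.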
Improvements to robotics-inspired conformational sampling in rosetta.
Stein, Amelie; Kortemme, Tanja
2013-01-01
To accurately predict protein conformations in atomic detail, a computational method must be capable of sampling models sufficiently close to the native structure. All-atom sampling is difficult because of the vast number of possible conformations and extremely rugged energy landscapes. Here, we test three sampling strategies to address these difficulties: conformational diversification, intensification of torsion and omega-angle sampling and parameter annealing. We evaluate these strategies in the context of the robotics-based kinematic closure (KIC) method for local conformational sampling in Rosetta on an established benchmark set of 45 12-residue protein segments without regular secondary structure. We quantify performance as the fraction of sub-Angstrom models generated. While improvements with individual strategies are only modest, the combination of intensification and annealing strategies into a new "next-generation KIC" method yields a four-fold increase over standard KIC in the median percentage of sub-Angstrom models across the dataset. Such improvements enable progress on more difficult problems, as demonstrated on longer segments, several of which could not be accurately remodeled with previous methods. Given its improved sampling capability, next-generation KIC should allow advances in other applications such as local conformational remodeling of multiple segments simultaneously, flexible backbone sequence design, and development of more accurate energy functions.
Self-rated health as a predictor of survival among patients with advanced cancer.
Shadbolt, Bruce; Barresi, Jane; Craft, Paul
2002-05-15
Evidence is emerging about the strong predictive relationship between self-rated health (SRH) and survival, although there is little evidence on palliative populations, where an accurate prediction of survival is valuable. Thus, the relative importance of SRH in predicting the survival of ambulatory patients with advanced cancer was examined. SRH was compared to clinical assessments of performance status, as well as to quality-of-life measures. By use of a prospective cohort design, 181 patients (76% response rate) with advanced cancer were recruited into the study, resurveyed at 18 weeks, and observed to record deaths. The average age of patients was 62 years (SD = 12). The median survival time was 10 months. SRH was the strongest predictor of survival from baseline. A Cox regression comparing changes in SRH over time yielded hazard ratios suggesting that the relative risk (RR) of dying was approximately 3 times greater for fair ratings at 18 weeks than for consistently good or better ratings; the RR was greater still for poor ratings (4.2 and 6.2 times), and highest when ratings were poor at both baseline and 18 weeks (31 times). Improvement in SRH over time yielded the lowest RR. SRH is valid, reliable, and responsive to change as a predictor of survival in advanced cancer. These qualities suggest that SRH should be considered as an additional tool by oncologists to assess patients. Similarly, health managers could use SRH as an indicator of disease severity in palliative care case mix. Finally, SRH could provide a key to help us understand the human side of disease and its relationship with medicine.
Yield performance and stability of CMS-based triticale hybrids.
Mühleisen, Jonathan; Piepho, Hans-Peter; Maurer, Hans Peter; Reif, Jochen Christoph
2015-02-01
CMS-based triticale hybrids showed only marginal midparent heterosis for grain yield and lower dynamic yield stability compared to inbred lines. Hybrids of triticale (×Triticosecale Wittmack) are expected to possess outstanding yield performance and increased dynamic yield stability. The objectives of the present study were to (1) examine the optimum choice of the biometrical model to compare yield stability of hybrids versus lines, (2) investigate whether hybrids exhibit more pronounced grain yield performance and yield stability, and (3) study optimal strategies to predict yield stability of hybrids. Thirteen female and seven male parental lines and their 91 factorial hybrids, as well as 30 commercial lines, were evaluated for grain yield in up to 20 environments. Hybrids were produced using a cytoplasmic male sterility (CMS)-inducing cytoplasm that originated from Triticum timopheevii Zhuk. We found that the choice of the biometrical model can cause contrasting results and concluded that a group-by-environment interaction term should be added to the model when estimating stability variance of hybrids and lines. Midparent heterosis for grain yield averaged 3 %, with a range from -15.0 to 11.5 %. No hybrid outperformed the best inbred line. Hybrids had, on average, lower dynamic yield stability compared to the inbred lines. Grain yield performance of hybrids could be predicted based on midparent values and general combining ability (GCA)-predicted values. In contrast, stability variance of hybrids could be predicted only based on GCA-predicted values. We speculate that negative effects of the CMS cytoplasm used might be the reason for the low performance and yield stability of the hybrids. A detailed study of the drawbacks of the currently existing CMS system in triticale is therefore urgently required, including the search for potentially alternative hybridization systems.
Negative impacts of climate change on cereal yields: statistical evidence from France
NASA Astrophysics Data System (ADS)
Gammans, Matthew; Mérel, Pierre; Ortiz-Bobea, Ariel
2017-05-01
In several world regions, climate change is predicted to negatively affect crop productivity. The recent statistical yield literature emphasizes the importance of flexibly accounting for the distribution of growing-season temperature to better represent the effects of warming on crop yields. We estimate a flexible statistical yield model using a long panel from France to investigate the impacts of temperature and precipitation changes on wheat and barley yields. Winter varieties appear sensitive to extreme cold after planting. All yields respond negatively to an increase in spring-summer temperatures and are a decreasing function of precipitation around historical precipitation levels. Crop yields are predicted to be negatively affected by climate change under a wide range of climate models and emissions scenarios. Under warming scenario RCP8.5, and holding growing areas and technology constant, our model ensemble predicts a 21.0% decline in winter wheat yield, a 17.3% decline in winter barley yield, and a 33.6% decline in spring barley yield by the end of the century. Uncertainty from climate projections dominates uncertainty from the statistical model. Finally, our model predicts that continuing technology trends would counterbalance most of the effects of climate change.
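"Flexibly accounting for the distribution of growing-season temperature" typically means regressing (log) yield on the time spent in each temperature interval, so each bin gets its own coefficient instead of a single linear temperature term. A minimal sketch with hypothetical bin edges and coefficients, not the paper's estimates:

```python
def bin_exposure(temps, edges):
    """Count observations (e.g. hours) falling in each temperature bin
    [edges[k], edges[k+1])."""
    counts = [0] * (len(edges) - 1)
    for t in temps:
        for k in range(len(edges) - 1):
            if edges[k] <= t < edges[k + 1]:
                counts[k] += 1
                break
    return counts

def predict_log_yield(temps, edges, betas, trend):
    """Piecewise temperature response: log yield = trend plus
    bin-specific coefficients times bin exposures."""
    return trend + sum(b * e for b, e in zip(betas, bin_exposure(temps, edges)))
```

In a fitted model, bins covering extreme heat would typically carry negative coefficients, which is how warming scenarios translate into the yield declines reported above.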
A translatable predictor of human radiation exposure.
Lucas, Joseph; Dressman, Holly K; Suchindran, Sunil; Nakamura, Mai; Chao, Nelson J; Himburg, Heather; Minor, Kerry; Phillips, Gary; Ross, Joel; Abedi, Majid; Terbrueggen, Robert; Chute, John P
2014-01-01
Terrorism using radiological dirty bombs or improvised nuclear devices is recognized as a major threat to both public health and national security. In the event of a radiological or nuclear disaster, rapid and accurate biodosimetry of thousands of potentially affected individuals will be essential for effective medical management to occur. Currently, health care providers lack an accurate, high-throughput biodosimetric assay which is suitable for the triage of large numbers of radiation injury victims. Here, we describe the development of a biodosimetric assay based on the analysis of irradiated mice, ex vivo-irradiated human peripheral blood (PB) and humans treated with total body irradiation (TBI). Interestingly, a gene expression profile developed via analysis of murine PB radiation response alone was inaccurate in predicting human radiation injury. In contrast, generation of a gene expression profile which incorporated data from ex vivo irradiated human PB and human TBI patients yielded an 18-gene radiation classifier which was highly accurate at predicting human radiation status and discriminating medically relevant radiation dose levels in human samples. Although the patient population was relatively small, the accuracy of this classifier in discriminating radiation dose levels in human TBI patients was not substantially confounded by gender, diagnosis or prior exposure to chemotherapy. We have further incorporated genes from this human radiation signature into a rapid and high-throughput chemical ligation-dependent probe amplification assay (CLPA) which was able to discriminate radiation dose levels in a pilot study of ex vivo irradiated human blood and samples from human TBI patients. Our results illustrate the potential for translation of a human genetic signature for the diagnosis of human radiation exposure and suggest the basis for further testing of CLPA as a candidate biodosimetric assay.
NASA Astrophysics Data System (ADS)
Rassoulinejad-Mousavi, Seyed Moein; Mao, Yijin; Zhang, Yuwen
2016-06-01
Choice of an appropriate force field is one of the main concerns of any atomistic simulation and needs to be considered seriously in order to yield reliable results. Since investigations of the mechanical behavior of materials at micro/nanoscale are becoming much more widespread, it is necessary to determine an adequate potential that accurately models the interaction of the atoms for the desired application. In this framework, the reliability of multiple embedded-atom-method-based interatomic potentials for predicting elastic properties was investigated. Assessments were carried out for different copper, aluminum, and nickel interatomic potentials at room temperature, which is considered the most applicable case. The examined force fields for the three species were taken from the online repositories of the National Institute of Standards and Technology, as well as Sandia National Laboratories' LAMMPS database. Using molecular dynamics simulations, the three independent elastic constants, C11, C12, and C44, were found for Cu, Al, and Ni cubic single crystals. The Voigt-Reuss-Hill approximation was then implemented to convert the elastic constants of the single crystals into isotropic polycrystalline elastic moduli, including the bulk modulus, shear modulus, and Young's modulus, as well as Poisson's ratio. Results from the molecular dynamics simulations were compared with available experimental data in the literature to judge the robustness of each potential for each species. Eventually, accurate interatomic potentials are recommended for finding each of the elastic properties of the pure species. The accuracy of the elastic properties was found to be sensitive to the choice of force field: potentials fitted for a specific compound may not necessarily work accurately for all the existing pure species. The tabulated results in this paper can be used as a benchmark to increase assurance in the interatomic potential designated for a problem.
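The Voigt-Reuss-Hill conversion mentioned above has a closed form for cubic crystals. A sketch using approximate room-temperature literature constants for copper in the usage example; those constants are assumed values for illustration, not this paper's simulation results:

```python
def vrh_cubic(C11, C12, C44):
    """Isotropic polycrystalline moduli from the three cubic
    single-crystal elastic constants (GPa in, GPa out)."""
    K = (C11 + 2 * C12) / 3                  # bulk modulus (Voigt = Reuss for cubic)
    Gv = (C11 - C12 + 3 * C44) / 5           # Voigt (upper) shear bound
    Gr = 5 * (C11 - C12) * C44 / (4 * C44 + 3 * (C11 - C12))  # Reuss (lower) bound
    G = (Gv + Gr) / 2                        # Hill average shear modulus
    E = 9 * K * G / (3 * K + G)              # Young's modulus
    nu = (3 * K - 2 * G) / (2 * (3 * K + G)) # Poisson's ratio
    return K, G, E, nu
```

With C11 ≈ 168.4, C12 ≈ 121.4, C44 ≈ 75.4 GPa for Cu, this yields K ≈ 137 GPa, G ≈ 47 GPa, E ≈ 127 GPa, and ν ≈ 0.35, close to handbook polycrystalline values.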
Zanderigo, Francesca; Sparacino, Giovanni; Kovatchev, Boris; Cobelli, Claudio
2007-09-01
The aim of this article was to use continuous glucose error-grid analysis (CG-EGA) to assess the accuracy of two time-series modeling methodologies recently developed to predict glucose levels ahead of time using continuous glucose monitoring (CGM) data. We considered subcutaneous time series of glucose concentration monitored every 3 minutes for 48 hours by the minimally invasive CGM sensor Glucoday® (Menarini Diagnostics, Florence, Italy) in 28 type 1 diabetic volunteers. Two prediction algorithms, based on first-order polynomial and autoregressive (AR) models, respectively, were considered with prediction horizons of 30 and 45 minutes and forgetting factors (ff) of 0.2, 0.5, and 0.8. CG-EGA was used on the predicted profiles to assess their point and dynamic accuracies using the original CGM profiles as reference. Continuous glucose error-grid analysis showed that the accuracy of both prediction algorithms is overall very good and that their performance is similar from a clinical point of view. However, the AR model seems preferable for hypoglycemia prevention. CG-EGA also suggests that, irrespective of the time-series model, the use of ff = 0.8 yields the most accurate readings in all glucose ranges. For the first time, CG-EGA is proposed as a tool to assess the clinically relevant performance of a prediction method separately at hypoglycemia, euglycemia, and hyperglycemia. In particular, we have shown that CG-EGA can be helpful in comparing different prediction algorithms, as well as in optimizing their parameters.
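The first-order polynomial predictor can be sketched as an exponentially weighted straight-line fit extrapolated ahead; with 3-minute sampling, a 30-minute horizon is 10 steps. This is an illustrative reconstruction under those assumptions, not the authors' exact algorithm:

```python
def predict_glucose(samples, horizon_steps, ff=0.8):
    """Weighted least-squares fit of a first-degree polynomial to past
    CGM samples, with weights ff**age so the most recent sample counts
    most, extrapolated horizon_steps sampling intervals ahead."""
    n = len(samples)
    w = [ff ** (n - 1 - k) for k in range(n)]   # newest sample weight = 1
    sw = sum(w)
    sx = sum(wk * k for wk, k in zip(w, range(n)))
    sy = sum(wk * y for wk, y in zip(w, samples))
    sxx = sum(wk * k * k for wk, k in zip(w, range(n)))
    sxy = sum(wk * k * y for wk, k, y in zip(w, range(n), samples))
    # Weighted normal equations for slope and intercept.
    slope = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
    intercept = (sy - slope * sx) / sw
    return intercept + slope * (n - 1 + horizon_steps)
```

A smaller ff discounts history faster, reacting more quickly to trend changes but amplifying sensor noise; the paper's finding that ff = 0.8 performed best reflects that trade-off.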
Chen, Fu; Sun, Huiyong; Wang, Junmei; Zhu, Feng; Liu, Hui; Wang, Zhe; Lei, Tailong; Li, Youyong; Hou, Tingjun
2018-06-21
Molecular docking provides a computationally efficient way to predict the atomic structural details of protein-RNA interactions (PRI), but accurate prediction of the three-dimensional structures and binding affinities for PRI is still notoriously difficult, partly due to the unreliability of the existing scoring functions for PRI. MM/PBSA and MM/GBSA are more theoretically rigorous than most scoring functions for protein-RNA docking, but their prediction performance for protein-RNA systems remains unclear. Here, we systemically evaluated the capability of MM/PBSA and MM/GBSA to predict the binding affinities and recognize the near-native binding structures for protein-RNA systems with different solvent models and interior dielectric constants (ε_in). For predicting the binding affinities, the predictions given by MM/GBSA based on the minimized structures in explicit solvent and the GBGBn1 model with ε_in = 2 yielded the highest correlation with the experimental data. Moreover, the MM/GBSA calculations based on the minimized structures in implicit solvent and the GBGBn1 model distinguished the near-native binding structures within the top 10 decoys for 118 out of the 149 protein-RNA systems (79.2%). This performance is better than all docking scoring functions studied here. Therefore, the MM/GBSA rescoring is an efficient way to improve the prediction capability of scoring functions for protein-RNA systems. Published by Cold Spring Harbor Laboratory Press for the RNA Society.
Yield of undamaged slash pine stands in South Florida
O. Gordon Langdon
1961-01-01
Predictions of future timber yields are necessary for formulating management plans and for comparing timber growing with alternative land uses. One useful tool for making these predictions is a set of yield tables.
The Robustness of Acoustic Analogies
NASA Technical Reports Server (NTRS)
Freund, J. B.; Lele, S. K.; Wei, M.
2004-01-01
Acoustic analogies for the prediction of flow noise are exact rearrangements of the flow equations N(q⃗) = 0 into a nominal sound source S(q⃗) and a sound propagation operator L such that L(q⃗) = S(q⃗). In practice, the sound source is typically modeled and the propagation operator inverted to make predictions. Since the rearrangement is exact, any sufficiently accurate model of the source will yield the correct sound, so other factors must determine the merits of any particular formulation. Using data from a two-dimensional mixing layer direct numerical simulation (DNS), we evaluate the robustness of two analogy formulations to different errors intentionally introduced into the source. The motivation is that since S can not be perfectly modeled, analogies that are less sensitive to errors in S are preferable. Our assessment is made within the framework of Goldstein's generalized acoustic analogy, in which different choices of a base flow used in constructing L give different sources S and thus different analogies. A uniform base flow yields a Lighthill-like analogy, which we evaluate against a formulation in which the base flow is the actual mean flow of the DNS. The more complex mean flow formulation is found to be significantly more robust to errors in the energetic turbulent fluctuations, but its advantage is less pronounced when errors are made in the smaller scales.
Hanigan, David; Ferrer, Imma; Thurman, E Michael; Herckes, Pierre; Westerhoff, Paul
2017-02-05
N-Nitrosodimethylamine (NDMA) is carcinogenic in rodents and occurs in chloraminated drinking water and wastewater effluents. NDMA forms via reactions between chloramines and mostly unidentified, N-containing organic matter. We developed a mass spectrometry technique to identify NDMA precursors by analyzing 25 model compounds with LC/QTOF-MS. We searched isolates of 11 drinking water sources and 1 wastewater using a custom MATLAB® program and extracted ion chromatograms for two fragmentation patterns that were specific to the model compounds. Once a diagnostic fragment was discovered, we conducted MS/MS during a subsequent injection to confirm the precursor ion. Using non-target searches and two diagnostic fragmentation patterns, we discovered 158 potential NDMA precursors. Of these, 16 were identified using accurate mass combined with fragment and retention time matches of analytical standards when available. Five of these sixteen NDMA precursors were previously unidentified in the literature, three of which were metabolites of pharmaceuticals. Except for methadone, the newly identified precursors all had NDMA molar yields of less than 5%, indicating that NDMA formation could be additive from multiple compounds, each with low yield. We demonstrate that the method is applicable to other disinfection by-product precursors by predicting and verifying the fragmentation patterns for one nitrosodiethylamine precursor. Copyright © 2016. Published by Elsevier B.V.
Crop status evaluations and yield predictions
NASA Technical Reports Server (NTRS)
Haun, J. R.
1976-01-01
One phase of the large area crop inventory project is presented. Wheat yield models based on the input of environmental variables potentially obtainable through the use of space remote sensing were developed and demonstrated. By the use of a unique method for visually quantifying daily plant development and subsequent multifactor computer analyses, it was possible to develop practical models for predicting crop development and yield. Development of the wheat yield prediction models was based on the discovery that morphological changes in plants can be detected and quantified on a daily basis, and that this change during a portion of the season was proportional to yield.
Baudracco, J; Lopez-Villalobos, N; Holmes, C W; Comeron, E A; Macdonald, K A; Barry, T N; Friggens, N C
2012-06-01
This animal simulation model, named e-Cow, represents a single dairy cow at grazing. The model integrates algorithms from three previously published models: a model that predicts herbage dry matter (DM) intake by grazing dairy cows, a mammary gland model that predicts potential milk yield and a body lipid model that predicts genetically driven live weight (LW) and body condition score (BCS). Both nutritional and genetic drives are accounted for in the prediction of energy intake and its partitioning. The main inputs are herbage allowance (HA; kg DM offered/cow per day), metabolisable energy and NDF concentrations in herbage and supplements, supplements offered (kg DM/cow per day), type of pasture (ryegrass or lucerne), days in milk, days pregnant, lactation number, BCS and LW at calving, breed or strain of cow and genetic merit, that is, potential yields of milk, fat and protein. Separate equations are used to predict herbage intake, depending on the cutting heights at which HA is expressed. The e-Cow model is written in Visual Basic programming language within Microsoft Excel®. The model predicts whole-lactation performance of dairy cows on a daily basis, and the main outputs are the daily and annual DM intake, milk yield and changes in BCS and LW. In the e-Cow model, neither herbage DM intake nor milk yield or LW change are needed as inputs; instead, they are predicted by the e-Cow model. The e-Cow model was validated against experimental data for Holstein-Friesian cows with both North American (NA) and New Zealand (NZ) genetics grazing ryegrass-based pastures, with or without supplementary feeding and for three complete lactations, divided into weekly periods. The model was able to predict animal performance with satisfactory accuracy, with concordance correlation coefficients of 0.81, 0.76 and 0.62 for herbage DM intake, milk yield and LW change, respectively. 
Simulations performed with the model showed that it is sensitive to genotype by feeding environment interactions. The e-Cow model tended to overestimate the milk yield of NA genotype cows at low milk yields, while it underestimated the milk yield of NZ genotype cows at high milk yields. The approach used to define the potential milk yield of the cow and equations used to predict herbage DM intake make the model applicable for predictions in countries with temperate pastures.
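The concordance correlation coefficients quoted for the e-Cow validation (0.81, 0.76 and 0.62) measure agreement between predicted and observed series. A minimal sketch of Lin's CCC follows; the weekly intake values are invented for illustration, not the study's measurements.

```python
# Sketch: Lin's concordance correlation coefficient (CCC), the agreement
# statistic reported in the e-Cow validation. Data are made-up weekly values.
import statistics

def concordance_cc(x, y):
    """Lin's CCC: 2*cov(x,y) / (var(x) + var(y) + (mean(x)-mean(y))**2)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    n = len(x)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

observed  = [18.2, 19.5, 20.1, 17.8, 21.0, 19.9]   # e.g. kg DM intake/day
predicted = [17.9, 19.0, 20.6, 18.3, 20.5, 19.4]
print(round(concordance_cc(observed, predicted), 3))
```

Unlike a plain Pearson correlation, the CCC penalizes both location and scale shifts, so a model that is well correlated but biased scores lower.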
Spindel, Jennifer; Begum, Hasina; Akdemir, Deniz; Virk, Parminder; Collard, Bertrand; Redoña, Edilberto; Atlin, Gary; Jannink, Jean-Luc; McCouch, Susan R
2015-02-01
Genomic Selection (GS) is a new breeding method in which genome-wide markers are used to predict the breeding value of individuals in a breeding population. GS has been shown to improve breeding efficiency in dairy cattle and several crop plant species, and here we evaluate for the first time its efficacy for breeding inbred lines of rice. We performed a genome-wide association study (GWAS) in conjunction with five-fold GS cross-validation on a population of 363 elite breeding lines from the International Rice Research Institute's (IRRI) irrigated rice breeding program and herein report the GS results. The population was genotyped with 73,147 markers using genotyping-by-sequencing. The training population, statistical method used to build the GS model, number of markers, and trait were varied to determine their effect on prediction accuracy. For all three traits, genomic prediction models outperformed prediction based on pedigree records alone. Prediction accuracies ranged from 0.31 and 0.34 for grain yield and plant height to 0.63 for flowering time. Analyses using subsets of the full marker set suggest that using one marker every 0.2 cM is sufficient for genomic selection in this collection of rice breeding materials. RR-BLUP was the best performing statistical method for grain yield where no large effect QTL were detected by GWAS, while for flowering time, where a single very large effect QTL was detected, the non-GS multiple linear regression method outperformed GS models. For plant height, in which four mid-sized QTL were identified by GWAS, random forest produced the most consistently accurate GS models. Our results suggest that GS, informed by GWAS interpretations of genetic architecture and population structure, could become an effective tool for increasing the efficiency of rice breeding as the costs of genotyping continue to decline.
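RR-BLUP, the best-performing method for grain yield above, is equivalent to ridge regression of phenotype on genome-wide marker scores. The following sketch uses random toy genotypes and an assumed shrinkage parameter, not the study's data or fitted values.

```python
# Sketch: genomic prediction in the spirit of RR-BLUP, i.e. ridge regression
# of phenotype on marker scores. Marker matrix, effects and lambda_ are all
# illustrative assumptions, not values from the rice breeding study.
import numpy as np

rng = np.random.default_rng(0)
n_lines, n_markers = 60, 200
X = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)  # 0/1/2 genotypes
true_effects = rng.normal(0, 0.1, n_markers)
y = X @ true_effects + rng.normal(0, 1.0, n_lines)               # phenotype

# ridge (RR-BLUP-like) solution: beta = (X'X + lambda*I)^-1 X'y
lambda_ = 50.0
beta = np.linalg.solve(X.T @ X + lambda_ * np.eye(n_markers), X.T @ y)

# "prediction accuracy" in the GS sense: correlation of predicted and observed
accuracy = float(np.corrcoef(X @ beta, y)[0, 1])
print(round(accuracy, 2))
```

In practice the accuracy would be estimated on held-out lines (as in the five-fold cross-validation described above) rather than in-sample as here.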
Kim, Sungwon; Han, Kyunghwa; Seo, Nieun; Kim, Hye Jin; Kim, Myeong-Jin; Koom, Woong Sub; Ahn, Joong Bae; Lim, Joon Seok
2018-06-01
To evaluate the diagnostic value of signal intensity (SI)-selected volumetry findings in T2-weighted magnetic resonance imaging (MRI) as a potential biomarker for predicting pathological complete response (pCR) to preoperative chemoradiotherapy (CRT) in patients with rectal cancer. Forty consecutive patients with pCR after preoperative CRT were compared with 80 age- and sex-matched non-pCR patients in a case-control study. SI-selected tumor volume was measured on post-CRT T2-weighted MRI, which included voxels of the treated tumor exceeding the SI (obturator internus muscle SI + [ischiorectal fossa fat SI - obturator internus muscle SI] × 0.2). Three blinded readers independently rated five-point pCR confidence scores and compared the diagnostic outcome with SI-selected volumetry findings. The SI-selected volumetry protocol was validated in 30 additional rectal cancer patients. The area under the receiver-operating characteristic curve (AUC) of SI-selected volumetry for pCR prediction was 0.831, with an optimal cutoff value of 649.6 mm³ (sensitivity 0.850, specificity 0.725). The AUC of the SI-selected tumor volume was significantly greater than the pooled AUC of readers (0.707, p < 0.001). At this cutoff, the validation trial yielded an accuracy of 0.87. SI-selected volumetry in post-CRT T2-weighted MRI can help predict pCR after preoperative CRT in patients with rectal cancer. • Fibrosis and viable tumor MRI signal intensities (SIs) are difficult to distinguish. • T2 SI-selected volumetry yields high diagnostic performance for assessing pathological complete response. • T2 SI-selected volumetry is significantly more accurate than readers and non-SI-selected volumetry. • Post-chemoradiation therapy T2-weighted MRI SI-selected volumetry facilitates prediction of pathological complete response.
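The SI cutoff described above is a fixed linear interpolation, 20% of the way from muscle SI to fat SI. A minimal sketch, with hypothetical voxel values and an assumed 1 mm³ voxel size:

```python
# Sketch: the signal-intensity cutoff and SI-selected volume from the
# abstract. Voxel intensities and the 1-mm^3 voxel size are hypothetical.

def si_threshold(muscle_si, fat_si):
    """Cutoff = muscle SI + (fat SI - muscle SI) * 0.2, per the abstract."""
    return muscle_si + (fat_si - muscle_si) * 0.2

def si_selected_volume(tumor_voxels, threshold, voxel_mm3=1.0):
    """Volume (mm^3) of treated-tumor voxels exceeding the cutoff."""
    return sum(voxel_mm3 for v in tumor_voxels if v > threshold)

thr = si_threshold(muscle_si=100.0, fat_si=400.0)   # -> 160.0
voxels = [120.0, 150.0, 180.0, 200.0, 90.0, 165.0]
print(thr, si_selected_volume(voxels, thr))         # 160.0 3.0
```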
Simulated yields for managed northern hardwood stands
Dale S. Solomon; William B. Leak
1986-01-01
Board-foot and cubic-foot yields developed with the forest growth model SIMTIM are presented for northern hardwood stands grown with and without management. SIMTIM has been modified to include more accurate growth rates by species, a new stocking chart, and yields that reflect species values and quality classes. Treatments range from no thinning to intensive quality...
Efficiency and Accuracy in Thermal Simulation of Powder Bed Fusion of Bulk Metallic Glass
NASA Astrophysics Data System (ADS)
Lindwall, J.; Malmelöv, A.; Lundbäck, A.; Lindgren, L.-E.
2018-05-01
Additive manufacturing by powder bed fusion processes can be utilized to create bulk metallic glass, as the process yields considerably high cooling rates. However, there is a risk that reheated material in previously deposited layers may become devitrified, i.e., crystallize. Therefore, it is advantageous to simulate the process to fully comprehend it and design it to avoid the aforementioned risk. However, a detailed simulation is computationally demanding. It is necessary to increase the computational speed while maintaining accuracy of the computed temperature field in critical regions. The current study evaluates a few approaches based on temporal reduction to achieve this. It is found that the evaluated approaches substantially reduce computation time while accurately predicting the temperature history.
Unbiased simulation of near-Clifford quantum circuits
Bennink, Ryan S.; Ferragut, Erik M.; Humble, Travis S.; ...
2017-06-28
Modeling and simulation are essential for predicting and verifying the behavior of fabricated quantum circuits, but existing simulation methods are either impractically costly or require an unrealistic simplification of error processes. In this paper, we present a method of simulating noisy Clifford circuits that is both accurate and practical in experimentally relevant regimes. In particular, the cost is weakly exponential in the size and the degree of non-Cliffordness of the circuit. Our approach is based on the construction of exact representations of quantum channels as quasiprobability distributions over stabilizer operations, which are then sampled, simulated, and weighted to yield unbiased statistical estimates of circuit outputs and other observables. As a demonstration of these techniques, we simulate a Steane [[7,1,3]] code.
Simulations of a molecular plasma in collisional-radiative nonequilibrium
NASA Technical Reports Server (NTRS)
Cambier, Jean-Luc; Moreau, Stephane
1993-01-01
A code for the simulation of nonequilibrium plasmas is being developed, with the capability to couple the plasma fluid-dynamics for a single fluid with a collisional-radiative model, where electronic states are treated as separate species. The model allows for non-Boltzmann distribution of the electronic states. Deviations from the Boltzmann distributions are expected to occur in the rapidly ionizing regime behind a strong shock or in the recombining regime during a fast expansion. This additional step in modeling complexity is expected to yield more accurate predictions of the nonequilibrium state and the radiation spectrum and intensity. An attempt at extending the code to molecular plasma flows is presented. The numerical techniques used, the thermochemical model, and the results of some numerical tests are described.
NASA Technical Reports Server (NTRS)
Hodge, W. F.
1972-01-01
A numerical evaluation and an analysis of the effects of environmental disturbance torques on the attitude of a hexagonal cylinder rolling wheel spacecraft were performed. The resulting perturbations caused by five such torques were found to be very small and exhibited linearity such that linearized equations of motion yielded accurate results over short periods and the separate perturbations contributed by each torque were additive in the sense of superposition. Linearity of the torque perturbations was not affected by moderate system design changes and persisted for torque-to-angular momentum ratios up to 100 times the nominal expected value. As these conditions include many possible applications, similar linear behavior might be anticipated for other rolling-wheel spacecraft.
High-precision QCD at hadron colliders: electroweak gauge boson rapidity distributions at NNLO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anastasiou, C.
2004-01-05
We compute the rapidity distributions of W and Z bosons produced at the Tevatron and the LHC through next-to-next-to-leading order in QCD. Our results demonstrate remarkable stability with respect to variations of the factorization and renormalization scales for all values of rapidity accessible in current and future experiments. These processes are therefore "gold-plated": current theoretical knowledge yields QCD predictions accurate to better than one percent. These results strengthen the proposal to use W and Z production to determine parton-parton luminosities and constrain parton distribution functions at the LHC. For example, LHC data should easily be able to distinguish the central parton distribution fit obtained by MRST from that obtained by Alekhin.
LES-ODT Simulations of Turbulent Reacting Shear Layers
NASA Astrophysics Data System (ADS)
Hoffie, Andreas; Echekki, Tarek
2012-11-01
Large-eddy simulations (LES) combined with one-dimensional turbulence (ODT) simulations of a spatially developing turbulent reacting shear layer with heat release and high Reynolds numbers were conducted and compared to results from direct numerical simulations (DNS) of the same configuration. The LES-ODT approach is based on LES solutions for momentum on a coarse grid and solutions for momentum and reactive scalars on a fine ODT grid, which is embedded in the LES computational domain. The shear layer is simulated with a single-step, second-order reaction with an Arrhenius reaction rate. The transport equations are solved using a low Mach number approximation. The LES-ODT simulations yield reasonably accurate predictions of turbulence and passive/reactive scalars' statistics compared to DNS results.
NASA Technical Reports Server (NTRS)
Blad, B. L.; Norman, J. M.; Gardner, B. R.
1983-01-01
The experimental design, data acquisition and analysis procedures for agronomic and reflectance data acquired over corn and soybeans at the Sandhills Agricultural Laboratory of the University of Nebraska are described. The following conclusions were reached: (1) predictive leaf area estimation models can be defined which appear valid over a wide range of soils; (2) relative grain yield estimates over moisture stressed corn were improved by combining reflectance and thermal data; (3) corn phenology estimates using the model of Badhwar and Henderson (1981) exhibited systematic bias but were reasonably accurate; (4) canopy reflectance can be modelled to within approximately 10% of measured values; and (5) soybean pubescence significantly affects canopy reflectance, energy balance and water use relationships.
ECG Signal Analysis and Arrhythmia Detection using Wavelet Transform
NASA Astrophysics Data System (ADS)
Kaur, Inderbir; Rajni, Rajni; Marwaha, Anupma
2016-12-01
Electrocardiogram (ECG) is used to record the electrical activity of the heart. Because the ECG signal is non-stationary in nature, its analysis and interpretation are difficult. Hence accurate analysis of the ECG signal with a powerful tool like the discrete wavelet transform (DWT) becomes imperative. In this paper, the ECG signal is denoised to remove the artifacts and analyzed using the wavelet transform to detect the QRS complex and arrhythmia. This work is implemented in MATLAB software for the MIT/BIH arrhythmia database and yields a sensitivity of 99.85 %, positive predictivity of 99.92 % and detection error rate of 0.221 % with the wavelet transform. It is also inferred that DWT outperforms the principal component analysis technique in detection of the ECG signal.
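The three statistics quoted above follow directly from beat-level detection counts. A sketch with hypothetical counts (chosen to land near the reported figures, not taken from the MIT/BIH evaluation):

```python
# Sketch: beat-detection statistics from true-positive / false-positive /
# false-negative counts. The counts below are hypothetical.

def detector_stats(tp, fp, fn):
    sensitivity = tp / (tp + fn)                  # Se  = TP / (TP + FN)
    positive_predictivity = tp / (tp + fp)        # +P  = TP / (TP + FP)
    detection_error_rate = (fp + fn) / (tp + fn)  # DER relative to true beats
    return sensitivity, positive_predictivity, detection_error_rate

se, pp, der = detector_stats(tp=99850, fp=80, fn=150)
print(f"Se={se:.4f} +P={pp:.4f} DER={der:.5f}")
```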
Toward Agent Programs with Circuit Semantics
NASA Technical Reports Server (NTRS)
Nilsson, Nils J.
1992-01-01
New ideas are presented for computing and organizing actions for autonomous agents in dynamic environments: environments in which the agent's current situation cannot always be accurately discerned and in which the effects of actions cannot always be reliably predicted. The notion of 'circuit semantics' for programs based on 'teleo-reactive trees' is introduced. Program execution builds a combinational circuit which receives sensory inputs and controls actions. These formalisms embody a high degree of inherent conditionality and thus yield programs that are suitably reactive to their environments. At the same time, the actions computed by the programs are guided by the overall goals of the agent. The paper also speculates about how programs using these ideas could be automatically generated by artificial intelligence planning systems and adapted by learning methods.
Accurate De Novo Prediction of Protein Contact Map by Ultra-Deep Learning Model.
Wang, Sheng; Sun, Siqi; Li, Zhen; Zhang, Renyu; Xu, Jinbo
2017-01-01
Protein contacts contain key information for the understanding of protein structure and function and thus, contact prediction from sequence is an important problem. Recently, exciting progress has been made on this problem, but the predicted contacts for proteins without many sequence homologs are still of low quality and not very useful for de novo structure prediction. This paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual neural networks. The first residual network conducts a series of 1-dimensional convolutional transformation of sequential features; the second residual network conducts a series of 2-dimensional convolutional transformation of pairwise information including output of the first residual network, EC information and pairwise potential. By using very deep residual networks, we can accurately model contact occurrence patterns and complex sequence-structure relationship and thus, obtain higher-quality contact prediction regardless of how many sequence homologs are available for proteins in question. Our method greatly outperforms existing methods and leads to much more accurate contact-assisted folding. Tested on 105 CASP11 targets, 76 past CAMEO hard targets, and 398 membrane proteins, the average top L long-range prediction accuracy obtained by our method, one representative EC method CCMpred and the CASP11 winner MetaPSICOV is 0.47, 0.21 and 0.30, respectively; the average top L/10 long-range accuracy of our method, CCMpred and MetaPSICOV is 0.77, 0.47 and 0.59, respectively. Ab initio folding using our predicted contacts as restraints but without any force fields can yield correct folds (i.e., TMscore>0.6) for 203 of the 579 test proteins, while that using MetaPSICOV- and CCMpred-predicted contacts can do so for only 79 and 62 of them, respectively.
Our contact-assisted models also have much better quality than template-based models especially for membrane proteins. The 3D models built from our contact prediction have TMscore>0.5 for 208 of the 398 membrane proteins, while those from homology modeling have TMscore>0.5 for only 10 of them. Further, even if trained mostly by soluble proteins, our deep learning method works very well on membrane proteins. In the recent blind CAMEO benchmark, our fully-automated web server implementing this method successfully folded 6 targets with a new fold and only 0.3L-2.3L effective sequence homologs, including one β protein of 182 residues, one α+β protein of 125 residues, one α protein of 140 residues, one α protein of 217 residues, one α/β of 260 residues and one α protein of 462 residues. Our method also achieved the highest F1 score on free-modeling targets in the latest CASP (Critical Assessment of Structure Prediction), although it was not fully implemented back then. http://raptorx.uchicago.edu/ContactMap/.
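The top-L long-range accuracy used in the comparisons above can be sketched as follows. The |i - j| ≥ 24 sequence-separation cutoff is a common convention assumed here, and the scored pairs and true contact set are toy data.

```python
# Sketch: "top k long-range accuracy" for contact prediction. A pair (i, j)
# is treated as long-range when |i - j| >= 24 (assumed convention). The
# predictions and truth set below are toy data.

def top_k_accuracy(scored_pairs, true_contacts, k, min_sep=24):
    """Fraction of the k highest-scoring long-range pairs that are true."""
    long_range = [(s, i, j) for s, i, j in scored_pairs if abs(i - j) >= min_sep]
    top = sorted(long_range, reverse=True)[:k]
    hits = sum((i, j) in true_contacts for _, i, j in top)
    return hits / k

# hypothetical predictions: (score, residue_i, residue_j)
preds = [(0.9, 5, 40), (0.8, 10, 90), (0.7, 3, 20), (0.6, 15, 60), (0.5, 2, 30)]
truth = {(5, 40), (15, 60)}
print(top_k_accuracy(preds, truth, k=3))  # (3, 20) is short-range; 2 of top 3 hit
```

For a protein of length L, the reported metrics correspond to k = L and k = L/10.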
Anisotropic nature of radially strained metal tubes
NASA Astrophysics Data System (ADS)
Strickland, Julie N.
Metal pipes are sometimes swaged by a metal cone to enlarge them, which increases the strain in the material. The amount of strain is important because it affects the burst and collapse strength. Burst strength is the amount of internal pressure that a pipe can withstand before failure, while collapse strength is the amount of external pressure that a pipe can withstand before failure. If the burst or collapse strengths are exceeded, the pipe may fracture, causing critical failure. Such an event could cost the owners and their customers millions of dollars in clean up, repair, and lost time, in addition to the potential environmental damage. Therefore, a reliable way of estimating the burst and collapse strength of strained pipe is desired and valuable. The sponsor currently rates strained pipes using the properties of raw steel, because those properties are easily measured (for example, yield strength). In the past, the engineers assumed that the metal would be work-hardened when swaged, so that yield strength would increase. However, swaging introduces anisotropic strain, which may decrease the yield strength. This study measured the yield strength of strained material in the transverse and axial directions and compared them to raw material, to determine the amount of anisotropy. This information will be used to more accurately determine burst and collapse ratings for strained pipes. More accurate ratings mean safer products, which will minimize risk for the sponsor's customers. Since the strained metal has a higher yield strength than the raw material, using the raw yield strength to calculate burst and collapse ratings is a conservative method. The metal has even higher yield strength after strain aging, which indicates that the stresses are relieved. Even with the 12% anisotropy in the strained and 9% anisotropy in the strain-aged specimens, the raw yield strengths are lower and therefore more conservative.
I recommend that the sponsor continue using the raw yield strength to calculate these ratings. I set out to characterize the anisotropic nature of swaged metal. As expected, the tensile tests showed a difference between the axial and transverse tensile strength. The measured difference in yield strength between the axial and transverse directions was 12% for strained material and 9% for strained and aged material. This means that the metal in the hoop (transverse) direction is approximately 10% stronger than in the axial direction, because the metal was work hardened during the swaging process. Therefore, the metal is more likely to fail in axial tension than in burst or collapse. I presented the findings from the microstructure examination, standard tensile tests, and SEM data. All of this data supported the findings of the mini-tensile tests. This information will help engineers set burst and collapse ratings and allow material scientists to predict the anisotropic characteristics of swaged steel tubes.
On the Yield Strength of Oceanic Lithosphere
NASA Astrophysics Data System (ADS)
Jain, C.; Korenaga, J.; Karato, S. I.
2017-12-01
The origin of plate tectonic convection on Earth is intrinsically linked to the reduction in the strength of oceanic lithosphere at plate boundaries. A few mechanisms, such as deep thermal cracking [Korenaga, 2007] and strain localization due to grain-size reduction [e.g., Ricard and Bercovici, 2009], have been proposed to explain this reduction in lithospheric strength, but the significance of these mechanisms can be assessed only if we have accurate estimates of the strength of the undamaged oceanic lithosphere. The Peierls mechanism is likely to govern the rheology of old oceanic lithosphere [Kohlstedt et al., 1995], but the flow-law parameters for the Peierls mechanism suggested by previous studies do not agree with each other. We thus reanalyze the relevant experimental deformation data of olivine aggregates using Markov chain Monte Carlo inversion, which can handle the highly nonlinear constitutive equation of the Peierls mechanism [Korenaga and Karato, 2008; Mullet et al., 2015]. Our inversion results indicate nontrivial nonuniqueness in every flow-law parameter for the Peierls mechanism. Moreover, the resultant flow laws, all of which are consistent with the same experimental data, predict substantially different yield stresses under lithospheric conditions and could therefore have different implications for the origin of plate tectonics. We discuss some future directions to improve our constraints on lithospheric yield strength.
Application of activated barrier hopping theory to viscoplastic modeling of glassy polymers
NASA Astrophysics Data System (ADS)
Sweeney, J.; Spencer, P. E.; Vgenopoulos, D.; Babenko, M.; Boutenel, F.; Caton-Rose, P.; Coates, P. D.
2018-05-01
An established statistical mechanical theory of amorphous polymer deformation has been incorporated as a plastic mechanism into a constitutive model and applied to a range of polymer mechanical deformations. The temperature and rate dependence of the tensile yield of PVC, as reported in early studies, has been modeled to high levels of accuracy. Tensile experiments on PET reported here are analyzed similarly and good accuracy is also achieved. The frequently observed increase in the gradient of the plot of yield stress against logarithm of strain rate is an inherent feature of the constitutive model. The form of temperature dependence of the yield that is predicted by the model is found to give an accurate representation. The constitutive model is developed in two-dimensional form and implemented as a user-defined subroutine in the finite element package ABAQUS. This analysis is applied to the tensile experiments on PET, in some of which strain is localized in the form of shear bands and necks. These deformations are modeled with partial success, though adiabatic heating of the instability causes inaccuracies for this isothermal implementation of the model. The plastic mechanism has advantages over the Eyring process, is equally tractable, and presents no particular difficulties in implementation with finite elements.
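For contrast with the barrier-hopping model above, the classical Eyring relation (the baseline the abstract compares against) predicts a yield stress that grows essentially linearly with the logarithm of strain rate, so it cannot by itself reproduce the gradient increase the constitutive model captures. All parameter values in this sketch are illustrative assumptions, not fitted values for PVC or PET.

```python
# Sketch: the classical Eyring relation for rate-dependent yield,
# sigma_y = (2kT/V) * asinh(rate/rate0), with an activation volume V.
# rate0 lumps the pre-exponential and barrier terms; all values are
# illustrative assumptions.
import math

def eyring_yield_stress(rate, rate0=1e-20, v_act=2.5e-27, temp=300.0):
    """Yield stress (Pa) for strain rate `rate` (1/s)."""
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    return (2 * k_b * temp / v_act) * math.asinh(rate / rate0)

for rate in (1e-4, 1e-3, 1e-2, 1e-1):
    print(f"{rate:.0e} 1/s -> {eyring_yield_stress(rate) / 1e6:.1f} MPa")
```

Because rate/rate0 is huge here, asinh is effectively logarithmic and each decade of strain rate adds a nearly constant stress increment, which is the straight-line behavior the constitutive model improves upon.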