NASA Astrophysics Data System (ADS)
Kurukuri, Srihari; Worswick, Michael J.
2013-12-01
An alternative approach is proposed that utilizes symmetric yield functions for modeling the tension-compression asymmetry commonly observed in hcp materials. In this work, the strength differential (SD) effect is modeled by choosing separate symmetric plane stress yield functions (for example, Barlat Yld2000-2d) for tension (i.e., the first quadrant of principal stress space) and compression (i.e., the third quadrant of principal stress space). In the second and fourth quadrants, the yield locus is constructed by adopting interpolating functions between the uniaxial tensile and compressive stress states. Different interpolating functions are chosen and the predictive capability of each is discussed. The main advantage of the proposed approach is that the yield locus parameters are deterministic and relatively easy to identify compared to those of the Cazacu family of yield functions commonly used for modeling the SD effect observed in hcp materials.
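A minimal numerical sketch of the quadrant-wise construction described above, assuming a simple symmetric locus as a stand-in for Yld2000-2d and an illustrative tensile-fraction weight for the mixed quadrants (the function names and the blending rule are ours, not the authors' formulation):

```python
import numpy as np

def locus_tension(s1, s2, sigma_t=1.0):
    # symmetric plane-stress locus calibrated to uniaxial tension
    # (a von Mises stand-in for e.g. Barlat Yld2000-2d)
    return np.sqrt(s1**2 - s1 * s2 + s2**2) / sigma_t

def locus_compression(s1, s2, sigma_c=0.8):
    # same symmetric form, calibrated to uniaxial compression
    return np.sqrt(s1**2 - s1 * s2 + s2**2) / sigma_c

def yield_sd(s1, s2):
    """Quadrant-dependent yield function with a strength differential."""
    if s1 >= 0 and s2 >= 0:            # first quadrant: tension locus
        return locus_tension(s1, s2)
    if s1 <= 0 and s2 <= 0:            # third quadrant: compression locus
        return locus_compression(s1, s2)
    # second/fourth quadrants: interpolate between the two loci using
    # the tensile fraction of the stress state (illustrative choice)
    den = s1**2 + s2**2 or 1.0
    w = (max(s1, 0.0)**2 + max(s2, 0.0)**2) / den
    return w * locus_tension(s1, s2) + (1 - w) * locus_compression(s1, s2)
```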
NASA Astrophysics Data System (ADS)
Sela, S.; Woodbury, P. B.; van Es, H. M.
2018-05-01
The US Midwest is the largest and most intensive corn (Zea mays, L.) production region in the world. However, N losses from corn systems cause serious environmental impacts including dead zones in coastal waters, groundwater pollution, particulate air pollution, and global warming. New approaches to reducing N losses are urgently needed. N surplus is gaining attention as such an approach for multiple cropping systems. We combined experimental data from 127 on-farm field trials conducted in seven US states during the 2011–2016 growing seasons with biochemical simulations using the PNM model to quantify the benefits of a dynamic location-adapted management approach to reduce N surplus. We found that this approach allowed large reductions in N rate (32%) and N surplus (36%) compared to existing static approaches, without reducing yield, while substantially reducing yield-scaled N losses (11%). Across all sites, yield-scaled N losses increased linearly with N surplus values above ~48 kg ha^-1. Using the dynamic model-based N management approach enabled growers to get much closer to this target than using existing static methods, while maintaining yield. Therefore, this approach can substantially reduce N surplus and N pollution potential compared to static N management.
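A toy illustration of the N-surplus bookkeeping and the threshold relationship reported above; defining surplus as inputs minus grain removal is a common convention, and the baseline and slope below are hypothetical placeholders, not fitted values:

```python
def n_surplus(n_inputs_kg_ha, n_removed_in_grain_kg_ha):
    # N surplus: applied N not removed with the harvested crop (kg/ha)
    return n_inputs_kg_ha - n_removed_in_grain_kg_ha

def yield_scaled_n_loss(surplus_kg_ha, baseline=2.0, threshold=48.0,
                        slope=0.05):
    # losses roughly flat below the ~48 kg/ha surplus threshold, then
    # rising linearly; baseline and slope are illustrative only
    return baseline + slope * max(0.0, surplus_kg_ha - threshold)
```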
NASA Astrophysics Data System (ADS)
Moshtaghi, Mehrdad; Adla, Soham; Pande, Saket; Disse, Markus; Savenije, Hubert
2017-04-01
The concept of sustainability is central to smallholder agriculture, as subsistence farming is constantly impacted by livelihood insecurity and is constrained by access to capital, water technology and alternative employment opportunities. This study compares two approaches which aim at quantifying smallholder sustainability but differ in their underlying principles, methodologies for assessment and reporting, and applications. Yield-index-based insurance can protect smallholder agriculture and improve its economic sustainability, because smallholder income depends on selling crops and the insurance scheme is based on crop yields. In this research, the insurance trigger is set on the basis of yields in previous years: crop yields are calculated every year through socio-hydrological modeling, and the smallholder receives an indemnity when the crop yield falls below the average of the previous five years (a crop failure). The FAO Sustainability Assessment of Food and Agriculture (SAFA) is an inclusive and comprehensive framework for sustainability assessment in the food and agricultural sector. It follows the UN definition of the 4 dimensions of sustainability (good governance, environmental integrity, economic resilience and social well-being) and includes 21 themes and 58 sub-themes with a multi-indicator approach. The direct sustainability corresponding to the FAO SAFA economic resilience dimension is compared with the indirect notion of sustainability derived from the yield-index-based insurance. A semi-synthetic comparison is conducted to understand the differences in the underlying principles, methodologies and application of the two approaches. Both approaches are applied to data from smallholder regions of Marathwada in Maharashtra (India), which experienced a severe rise in farmer suicides in the 2000s, a rise that has been attributed to a combination of socio-hydrological factors.
Shirsath, S R; Sable, S S; Gaikwad, S G; Sonawane, S H; Saini, D R; Gogate, P R
2017-09-01
Curcumin, a dietary phytochemical, has been extracted from rhizomes of Curcuma amada using ultrasound assisted extraction (UAE) and the results compared with the conventional extraction approach to establish the process intensification benefits. The effects of operating parameters such as the type of solvent, extraction time, extraction temperature, solid to solvent ratio, particle size and ultrasonic power on the extraction yield have been investigated in detail for the UAE approach. A maximum extraction yield of 72% was obtained in 1 h under optimized conditions of 35°C, a solid to solvent ratio of 1:25, a particle size of 0.09 mm, an ultrasonic power of 250 W and an ultrasound frequency of 22 kHz, with ethanol as the solvent. The obtained yield was significantly higher than that of batch extraction, where only about 62% yield was achieved in 8 h of treatment. Peleg's model was used to describe the kinetics of UAE and showed good agreement with the experimental results. Overall, ultrasound has been established as a green process for the extraction of curcumin, with the benefits of reduced time compared to batch extraction and reduced operating temperature compared to Soxhlet extraction.
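Peleg's model, used above to describe the UAE kinetics, is commonly written for extraction as (with c(t) the yield at time t and the two constants fitted to concentration-time data):

```latex
c(t) = c_0 + \frac{t}{K_1 + K_2\, t}
```

With c_0 = 0 at the start of extraction, 1/K_1 gives the initial extraction rate and 1/K_2 the equilibrium (maximum) yield, which is how the reported agreement with experiment is typically assessed.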
NASA Technical Reports Server (NTRS)
Fronzek, Stefan; Pirttioja, Nina; Carter, Timothy R.; Bindi, Marco; Hoffmann, Holger; Palosuo, Taru; Ruiz-Ramos, Margarita; Tao, Fulu; Trnka, Miroslav; Acutis, Marco;
2017-01-01
Crop growth simulation models can differ greatly in their treatment of key processes and hence in their response to environmental conditions. Here, we used an ensemble of 26 process-based wheat models applied at sites across a European transect to compare their sensitivity to changes in temperature (−2 to +9 °C) and precipitation (−50 to +50%). Model results were analysed by plotting them as impact response surfaces (IRSs), classifying the IRS patterns of individual model simulations, describing these classes and analysing factors that may explain the major differences in model responses. The model ensemble was used to simulate yields of winter and spring wheat at four sites in Finland, Germany and Spain. Results were plotted as IRSs that show changes in yields relative to the baseline with respect to temperature and precipitation. IRSs of 30-year means and selected extreme years were classified using two approaches describing their pattern. The expert diagnostic approach (EDA) combines two aspects of IRS patterns: the location of the maximum yield (nine classes) and the strength of the yield response with respect to climate (four classes), resulting in a total of 36 combined classes defined using criteria pre-specified by experts. The statistical diagnostic approach (SDA) groups IRSs by comparing their pattern and magnitude, without attempting to interpret these features. It applies a hierarchical clustering method, grouping response patterns using a distance metric that combines the spatial correlation and Euclidean distance between IRS pairs. The two approaches were used to investigate whether different patterns of yield response could be related to different properties of the crop models, specifically their genealogy, calibration and process description. Although no single model property across a large model ensemble was found to explain the integrated yield response to temperature and precipitation perturbations, the application of the EDA and SDA approaches revealed their capability to distinguish: (i) stronger yield responses to precipitation for winter wheat than spring wheat; (ii) differing strengths of response to climate changes for years with anomalous weather conditions compared to period-average conditions; (iii) the influence of site conditions on yield patterns; (iv) similarities in IRS patterns among models with related genealogy; (v) similarities in IRS patterns for models with simpler process descriptions of root growth and water uptake compared to those with more complex descriptions; and (vi) a closer correspondence of IRS patterns in models using partitioning schemes to represent yield formation than in those using a harvest index. Such results can inform future crop modelling studies that seek to exploit the diversity of multi-model ensembles, by distinguishing ensemble members that span a wide range of responses as well as those that display implausible behaviour or strong mutual similarities.
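A compact sketch of the SDA grouping described above, combining spatial correlation and Euclidean distance between IRS pairs into one metric and feeding it to hierarchical clustering; the equal weighting and cluster count are assumptions, not the paper's calibrated choices:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def irs_distance(a, b, w=0.5):
    """Distance between two IRSs given as 2-D arrays over the same
    temperature x precipitation grid."""
    corr = np.corrcoef(a.ravel(), b.ravel())[0, 1]   # pattern similarity
    eucl = np.linalg.norm(a - b) / a.size            # magnitude difference
    return w * (1.0 - corr) + (1.0 - w) * eucl

def cluster_irs(surfaces, n_clusters=6):
    # pairwise distance matrix over all model response surfaces
    n = len(surfaces)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = irs_distance(surfaces[i], surfaces[j])
    z = linkage(squareform(d), method="average")
    return fcluster(z, t=n_clusters, criterion="maxclust")
```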
Iyer, Sneha R; Gogate, Parag R
2017-01-01
The current work investigates the application of low intensity ultrasonic irradiation for improving the cooling crystallization of mefenamic acid for the first time. The crystal shape and size have been analyzed using an optical microscope and image analysis software, respectively. The effect of ultrasonic irradiation on crystal size, particle size distribution (PSD) and yield has been investigated, also establishing a comparison with the conventional approach. It has been observed that the application of ultrasound not only enhances the yield but also reduces the induction time for crystallization as compared to the conventional cooling crystallization technique. In the presence of ultrasound, the maximum yield was obtained at optimum conditions of 30 W power dissipation and 10 min of ultrasonic irradiation. The yield was further improved by applying ultrasound in cycles, where the formed crystals are allowed to grow in the absence of ultrasonic irradiation. The desired crystal morphology was also obtained for the ultrasound assisted crystallization: the conventionally obtained needle shaped crystals transformed into plate shaped crystals. The particle size distribution was analyzed statistically on the basis of skewness and kurtosis values, and the skewness and excess kurtosis values for ultrasound assisted crystallization were significantly lower than for the conventional approach. XRD analysis also revealed better crystal properties for mefenamic acid processed using the ultrasound assisted approach. The overall process intensification benefits of mefenamic acid crystallization using the ultrasound assisted approach were reduced particle size, increased yield and uniform PSD coupled with the desired morphology.
Comparative survey of dynamic analyses of free-piston Stirling engines
NASA Technical Reports Server (NTRS)
Kankam, M. D.; Rauch, J. S.
1991-01-01
Reported dynamics analyses for evaluating the steady-state response and stability of free-piston Stirling engine (FPSE) systems are compared. Various analytical approaches are discussed to provide guidance on their salient features. Recommendations are made in the concluding remarks for an approach which captures most of the inherent properties of the engine. Such an approach has the potential for yielding results which will closely match practical FPSE-load systems.
Identification of QRS complex in non-stationary electrocardiogram of sick infants.
Kota, S; Swisher, C B; Al-Shargabi, T; Andescavage, N; du Plessis, A; Govindan, R B
2017-08-01
Due to the high frequency of routine interventions in an intensive care setting, electrocardiogram (ECG) recordings from sick infants are highly non-stationary, with recurrent changes in the baseline, alterations in the morphology of the waveform, and attenuations of the signal strength. Current methods lack reliability in identifying QRS complexes (a marker of individual cardiac cycles) in non-stationary ECG. In the current study, we address this problem by proposing a novel approach to QRS complex identification. Our approach employs lowpass filtering, half-wave rectification, and the use of instantaneous Hilbert phase to identify QRS complexes in the ECG. We demonstrate the application of this method using ECG recordings from eight preterm infants undergoing intensive care, as well as from 18 normal adult volunteers available via a public database. We compared our approach to commonly used approaches, including Pan and Tompkins (PT), gqrs, wavedet, and wqrs, for identifying QRS complexes, and then compared each with manually identified QRS complexes. For preterm infants, a comparison between the QRS complexes identified by our approach and those identified through manual annotation yielded sensitivity and positive predictive values of 99% and 99.91%, respectively. The comparison metrics for each method are as follows: PT (sensitivity: 84.49%, positive predictive value: 99.88%), gqrs (85.25%, 99.49%), wavedet (95.24%, 99.86%), and wqrs (96.99%, 96.55%). Thus, the sensitivity values of the four previously described methods are lower than the sensitivity of the method we propose; however, their positive predictive values are comparable to that of our method, with the exception of the wqrs approach, which yielded a slightly lower value. For adult ECG, our approach yielded a sensitivity of 99.78%, whereas PT yielded 99.79%. The positive predictive value was 99.42% for both our approach and PT. We propose a novel method for identifying QRS complexes that outperforms commonly available tools for non-stationary ECG data in infants. For stationary ECG, our proposed approach and the PT approach perform equally well. ECG acquired in a clinical environment may be prone to issues related to non-stationarity, especially in critically ill patients; the approach proposed in this report offers superior reliability in these scenarios.
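A minimal sketch of the pipeline named above (lowpass filtering, half-wave rectification, instantaneous Hilbert phase); the filter order, cutoff, and peak-marking rule are illustrative settings, not the authors' tuned values:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def detect_qrs(ecg, fs, cutoff=20.0):
    # 1) lowpass filter to suppress high-frequency noise
    b, a = butter(4, cutoff / (fs / 2.0), btype="low")
    x = filtfilt(b, a, ecg)
    # 2) half-wave rectification keeps the dominant R-wave polarity
    x = np.maximum(x - np.median(x), 0.0)
    # 3) instantaneous Hilbert phase; the phase advances ~2*pi per beat
    phase = np.unwrap(np.angle(hilbert(x)))
    # mark one QRS complex at each upward crossing of a 2*pi level
    crossings = np.where(np.diff(np.floor(phase / (2 * np.pi))) > 0)[0]
    return crossings  # sample indices of detected complexes
```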
Resonant vibrations of a submerged beam
NASA Astrophysics Data System (ADS)
Achenbach, J. D.; Qu, J.
1986-03-01
Forced vibration of a simply supported submerged beam of circular cross section is investigated by the use of two mathematical methods. In the first approach the problem formulation is reduced to a singular integro-differential equation for the transverse deflection. In the second approach the method of matched asymptotic expansions is employed. The integro-differential equation is solved numerically, to yield an exact solution for the frequency response. Subsequent use of a representation integral yields the radiated far field acoustic pressure. The exact results for the beam deflection are compared with approximate results that are available in the literature. Next, a matched asymptotic expansion is worked out by constructing "inner" and "outer" expansions for frequencies near and not near resonance frequencies, respectively. The two expansions are matched in an appropriate manner to yield a uniformly valid solution. The leading term of the matched asymptotic solution is compared with exact numerical results.
Stacey, Paul E.; Greening, Holly; Kremer, James N.; Peterson, David; Tomasko, David A.; Valigura, Richard A.; Alexander, Richard B.; Castro, Mark S.; Meyers, Tilden P.; Paerl, Hans W.; Stacey, Paul E.; Turner, R. Eugene
2001-01-01
A NOAA project was initiated in 1998, with support from the U.S. EPA, to develop state-of-the-art estimates of atmospheric N deposition to estuarine watersheds and water surfaces and its delivery to the estuaries. Work groups were formed to address N deposition rates, indirect (from the watershed) yields from atmospheric and other anthropogenic sources, and direct deposition on the estuarine waterbodies, and to evaluate the levels of uncertainty within the estimates. Watershed N yields were estimated using both a land-use based process approach and a national (SPARROW) model, compared to each other, and compared to estimates of N yield from the literature. The total N yields predicted by the national model were similar to values found in the literature, while the land-use derived estimates were consistently higher. Atmospheric N yield estimates were within a similar range for the two approaches, but tended to be higher in the land-use based estimates and were not well correlated. Median atmospheric N yields were around 15% of the total N yield for both groups, but ranged as high as 60% when both direct and indirect deposition were considered. Although not the dominant source of anthropogenic N, atmospheric N is, and will undoubtedly continue to be, an important factor in culturally eutrophied estuarine systems, warranting additional research and management attention.
Arend, Carlos Frederico; Arend, Ana Amalia; da Silva, Tiago Rodrigues
2014-06-01
The aim of our study was to systematically compare different methodologies to establish an evidence-based approach, based on tendon thickness and structure, for the sonographic diagnosis of supraspinatus tendinopathy compared to MRI. US was obtained from 164 symptomatic patients with supraspinatus tendinopathy detected at MRI and 42 asymptomatic controls with normal MRI. Diagnostic yield was calculated for maximal supraspinatus tendon thickness (MSTT) and tendon structure as isolated criteria and using different combinations of parallel and sequential testing at US. Chi-squared tests were performed to assess the sensitivity, specificity, and accuracy of the different diagnostic approaches. Mean MSTT was 6.68 mm in symptomatic patients and 5.61 mm in asymptomatic controls (P < .05). When used as an isolated criterion, MSTT > 6.0 mm provided the best accuracy (93.7%) compared to other measurements of tendon thickness. Also as an isolated criterion, abnormal tendon structure (ATS) yielded 93.2% accuracy for diagnosis. The best overall yield was obtained by both parallel and sequential testing using either MSTT > 6.0 mm or ATS as diagnostic criteria, in no particular order, which provided 99.0% accuracy, 100% sensitivity, and 95.2% specificity. Among these parallel and sequential tests that provided the best overall yield, additional analysis revealed that sequential testing first evaluating tendon structure required assessment of 258 criteria (vs. 261 for sequential testing first evaluating tendon thickness and 412 for parallel testing) and demanded a mean of 16.1 s to assess diagnostic criteria and reach the diagnosis (vs. 43.3 s for sequential testing first evaluating tendon thickness and 47.4 s for parallel testing). We found that using either MSTT > 6.0 mm or ATS as diagnostic criteria in both parallel and sequential testing provides the best overall yield for sonographic diagnosis of supraspinatus tendinopathy compared to MRI. Among these strategies, a two-step sequential approach first assessing tendon structure was advantageous because it required fewer criteria to be assessed and demanded less time to reach the diagnosis.
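The decision rules compared above can be written compactly; the thresholds follow the abstract (MSTT > 6.0 mm, abnormal tendon structure), while the function layout and callable arguments are illustrative:

```python
def parallel_test(mstt_mm, ats):
    # positive if EITHER criterion is met; both are always assessed
    return mstt_mm > 6.0 or ats

def sequential_structure_first(assess_structure, measure_mstt):
    # assess structure first; thickness is measured only when the
    # structure is normal, which is why this ordering needs fewer
    # criterion assessments on average
    if assess_structure():
        return True
    return measure_mstt() > 6.0
```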
NASA Astrophysics Data System (ADS)
Bialas, James; Oommen, Thomas; Rebbapragada, Umaa; Levin, Eugene
2016-07-01
Object-based approaches to the segmentation and classification of remotely sensed images yield more promising results than pixel-based approaches. However, the development of an object-based approach presents challenges in terms of algorithm selection and parameter tuning. Subjective methods are often used, but yield less than optimal results. Objective methods are warranted, especially for rapid deployment in time-sensitive applications, such as earthquake damage assessment. Herein, we used a systematic approach to evaluate object-based image segmentation and machine learning algorithms for the classification of earthquake damage in remotely sensed imagery. We tested a variety of algorithms and parameters on post-event aerial imagery for the 2011 earthquake in Christchurch, New Zealand. Results were compared against manually selected test cases representing different classes, allowing us to evaluate the effectiveness of the segmentation and classification of different classes and to compare different levels of multistep image segmentation. Our classifier is compared against recent pixel-based and object-based classification studies of post-event imagery of earthquake damage. Our results show an improvement over both pixel-based and object-based methods for classifying earthquake damage in high-resolution, post-event imagery.
Numerical Approach for Goaf-Side Entry Layout and Yield Pillar Design in Fractured Ground Conditions
NASA Astrophysics Data System (ADS)
Jiang, Lishuai; Zhang, Peipeng; Chen, Lianjun; Hao, Zhen; Sainoki, Atsushi; Mitri, Hani S.; Wang, Qingbiao
2017-11-01
Entry driven along goaf-side (EDG), in which the entry of the next longwall panel is developed along the goaf side and isolated from the goaf by a small-width yield pillar, has been widely employed in China over the past several decades. The width of such a yield pillar has a crucial effect on the EDG layout in terms of ground control, isolation effect and resource recovery rate. Based on a case study, this paper presents an approach for evaluating, designing and optimizing EDG and the yield pillar by considering results from numerical simulations together with field practice. To rigorously analyze ground stability, the numerical study begins with the simulation of goaf-side stress and ground conditions. Four global models with identical conditions, except for the width of the yield pillar, are built, and the effect of pillar width on ground stability is investigated by comparing the stress distribution, failure propagation, and displacement evolution during the entire service life of the entry. Based on the simulation results, the isolation effect of the pillar observed in field practice is also considered. The suggested optimal yield pillar design is validated using a field test in the same mine. The presented numerical approach thus provides a reference for the evaluation, design and optimization of EDG and yield pillars under similar geological and geotechnical circumstances.
Coupling constant for N*(1535)Nρ
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie Jujun; Graduate University of Chinese Academy of Sciences, Beijing 100049; Wilkin, Colin
2008-05-15
The value of the N*(1535)Nρ coupling constant g_{N*Nρ} derived from the N*(1535) → Nρ → Nππ decay is compared with that deduced from the radiative decay N*(1535) → Nγ using the vector-meson-dominance model. On the basis of an effective Lagrangian approach, we show that the values of g_{N*Nρ} extracted from the available experimental data on the two decays are consistent, though the error bars are rather large.
Teaching Real Business Cycles to Undergraduates
ERIC Educational Resources Information Center
Brevik, Frode; Gartner, Manfred
2007-01-01
The authors review the graphical approach to teaching the real business cycle model introduced in Barro. They then look at where this approach cuts corners and suggest refinements. Finally, they compare graphical and exact models by means of impulse-response functions. The graphical models yield reliable qualitative results. Sizable quantitative…
Explosion yield estimation from pressure wave template matching
Arrowsmith, Stephen; Bowman, Daniel
2017-01-01
A method for estimating the yield of explosions from shock-wave and acoustic-wave measurements is presented. The method exploits full waveforms by comparing pressure measurements against an empirical stack of prior observations using scaling laws. The approach can be applied to measurements across a wide range of source-to-receiver distances. The method is applied to data from two explosion experiments in different regions, leading to mean relative errors in yield estimates of 0.13 using prior data from the same region, and 0.2 when applied to a new region.
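A sketch of yield estimation by scaled template matching in the spirit of the abstract; the cube-root (Hopkinson) scaling used here is a standard blast-scaling assumption, not necessarily the authors' exact empirical law, and the least-squares match is illustrative:

```python
import numpy as np

def scaled_template(template_p, template_t, w_ref, w):
    # Hopkinson cube-root scaling (assumed): the waveform of a
    # reference yield w_ref stretches in time and rescales in
    # amplitude by (w / w_ref)**(1/3)
    s = (w / w_ref) ** (1.0 / 3.0)
    return template_p * s, template_t * s

def estimate_yield(obs_p, obs_t, template_p, template_t, w_ref, w_grid):
    # pick the candidate yield whose scaled template best matches the
    # observed pressure waveform (template_t must be increasing for
    # np.interp to be valid)
    errs = []
    for w in w_grid:
        p, t = scaled_template(template_p, template_t, w_ref, w)
        errs.append(np.mean((np.interp(obs_t, t, p) - obs_p) ** 2))
    return w_grid[int(np.argmin(errs))]
```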
Davis, Brett; Birch, Gavin
2010-08-01
Trace metal export by stormwater runoff from a major road and local street in urban Sydney, Australia, is compared using pollutant yield rating curves derived from intensive sampling data. The event loads of copper, lead and zinc are well approximated by logarithmic relationships with respect to total event discharge owing to the reliable appearance of a first flush in pollutant mass loading from urban roads. Comparisons of the yield rating curves for these three metals show that copper and zinc export rates from the local street are comparable with that of the major road, while lead export from the local street is much higher, despite a 45-fold difference in traffic volume. The yield rating curve approach allows problematic environmental data to be presented in a simple yet meaningful manner with less information loss.
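The logarithmic rating curve described above can be fitted with ordinary least squares; variable names are ours:

```python
import numpy as np

def fit_yield_rating_curve(event_discharge, event_load):
    """Fit the curve L = a + b*ln(Q) relating event load L to total
    event discharge Q (least squares on ln-transformed discharge)."""
    A = np.column_stack([np.ones_like(event_discharge),
                         np.log(event_discharge)])
    (a, b), *_ = np.linalg.lstsq(A, event_load, rcond=None)
    return a, b
```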
Estimation of dew yield from radiative condensers by means of an energy balance model
NASA Astrophysics Data System (ADS)
Maestre-Valero, J. F.; Ragab, R.; Martínez-Alvarez, V.; Baille, A.
2012-08-01
This paper presents an energy balance modelling approach to predict the nightly water yield and the surface temperature (Tf) of two passive radiative dew condensers (RDCs) tilted 30° from horizontal. One was fitted with a white hydrophilic polyethylene foil recommended for dew harvest and the other with a black polyethylene foil widely used in horticulture. The model was validated in south-eastern Spain by comparing the simulation outputs with field measurements of Tf and dew yield. The results indicate that the model is robust and accurate in reproducing the behaviour of the two RDCs, especially with regard to Tf, whose estimates were very close to the observations. The results were somewhat less precise for dew yield, with a larger scatter around the 1:1 relationship. A sensitivity analysis showed that the simulated dew yield was highly sensitive to changes in relative humidity and downward longwave radiation. The proposed approach provides a useful tool for water managers to quantify the amount of dew that could be harvested as a valuable water resource in arid, semiarid and water-stressed regions.
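A schematic of the kind of nightly energy balance such a model solves, written in generic form (the notation and sign conventions here are assumed, not taken from the paper): the foil temperature Tf evolves according to

```latex
C_f \frac{dT_f}{dt} = R_{net} + H + \lambda E
```

where R_net is the net radiation (dominated at night by longwave losses), H the sensible heat exchange, and λE the latent heat released by condensation; dew forms (E > 0) only while Tf is below the ambient dew-point temperature, and the nightly yield is the integral of E over the night.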
Gorman, Jessica R; Roberts, Samantha C; Dominick, Sally A; Malcarne, Vanessa L; Dietz, Andrew C; Su, H Irene
2014-06-01
Purpose: Cancer survivors in their adolescent and young adult (AYA) years are an understudied population, possibly in part because of the high effort required to recruit them into research studies. The aim of this paper is to describe the specific recruitment strategies used in four studies recruiting AYA-aged female cancer survivors and to identify the highest yielding approaches. We also discuss challenges and recommendations. Methods: We recruited AYA-aged female cancer survivors for two studies conducted locally and two conducted nationally. Recruitment strategies included outreach and referral via: healthcare providers and clinics; social media and the internet; community and word of mouth; and a national fertility information hotline. We calculated the yield of each recruitment approach for the local and national studies by comparing the number that participated to the number of potential participants. Results: We recruited a total of 534 participants into four research studies. Seventy-one percent were diagnosed as young adults and 61% were within 3 years of their cancer diagnosis. The highest-yielding local recruitment strategy was healthcare provider and clinic referral. Nationally, social media and internet outreach yielded the highest rate of participation. Overall, internet-based recruitment resulted in the highest number and yield of participants. Conclusion: Our results suggest that outreach through social media and the internet are effective approaches to recruiting AYA-aged female cancer survivors. Forging collaborative relationships with survivor advocacy groups' members and healthcare providers also proved beneficial.
Global Synthesis of Drought Effects on Maize and Wheat Production
Daryanto, Stefani; Wang, Lixin; Jacinthe, Pierre-André
2016-01-01
Drought has been a major cause of agricultural disaster, yet how it affects the vulnerability of maize and wheat production in combination with several co-varying factors (i.e., phenological phase, agro-climatic region, soil texture) remains unclear. Using a data synthesis approach, this study aims to better characterize the effects of those co-varying factors with drought and to provide critical information on minimizing yield loss. We collected data from peer-reviewed publications between 1980 and 2015 that examined maize and wheat yield responses to drought using field experiments. We performed unweighted analysis using the log response ratio to calculate the bootstrapped confidence limits of yield responses and calculated drought sensitivities with regard to those co-varying factors. Our results showed that yield reduction varied with species, with wheat having a lower yield reduction (20.6%) than maize (39.3%) at approximately 40% water reduction. Maize was also more sensitive to drought than wheat, particularly during the reproductive phase, and was equally sensitive in the dryland and non-dryland regions. While no yield difference was observed among regions or soil textures, wheat cultivation in the dryland was more prone to yield loss than in the non-dryland region. Informed by these results, we discuss potential causes and possible approaches that may minimize drought impacts.
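A minimal sketch of the effect-size calculation named above (log response ratio with unweighted bootstrapped confidence limits); the resampling count and seed are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_response_ratio(yield_drought, yield_control):
    # effect size used in the synthesis: ln(treatment / control)
    return np.log(yield_drought / yield_control)

def bootstrap_ci(effects, n_boot=10_000, alpha=0.05):
    # unweighted bootstrapped confidence limits of the mean effect
    effects = np.asarray(effects)
    means = [np.mean(rng.choice(effects, size=len(effects), replace=True))
             for _ in range(n_boot)]
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return np.mean(effects), (lo, hi)

# percent yield change implied by a mean log response ratio:
# (np.exp(mean_lrr) - 1) * 100
```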
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stark, Christopher C.; Roberge, Aki; Mandell, Avi
ExoEarth yield is a critical science metric for future exoplanet imaging missions. Here we estimate exoEarth candidate yield using single visit completeness for a variety of mission design and astrophysical parameters. We review the methods used in previous yield calculations and show that the method choice can significantly impact yield estimates as well as how the yield responds to mission parameters. We introduce a method, called Altruistic Yield Optimization, that optimizes the target list and exposure times to maximize mission yield, adapts maximally to changes in mission parameters, and increases exoEarth candidate yield by up to 100% compared to previous methods. We use Altruistic Yield Optimization to estimate exoEarth candidate yield for a large suite of mission and astrophysical parameters using single visit completeness. We find that exoEarth candidate yield is most sensitive to telescope diameter, followed by coronagraph inner working angle, followed by coronagraph contrast, and finally coronagraph contrast noise floor. We find a surprisingly weak dependence of exoEarth candidate yield on exozodi level. Additionally, we provide a quantitative approach to defining a yield goal for future exoEarth-imaging missions.
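A greedy caricature of yield maximization under a total observation-time budget; the actual Altruistic Yield Optimization jointly optimizes the target list and per-target exposure times, so this sketch only conveys the benefit-per-cost intuition:

```python
def greedy_target_selection(targets, time_budget_days):
    """targets: list of (name, completeness, exposure_time_days).
    Select targets maximizing summed single-visit completeness
    (expected exoEarth candidate yield is proportional to it) within
    the time budget; a greedy stand-in for the joint optimization."""
    ranked = sorted(targets, key=lambda t: t[1] / t[2], reverse=True)
    chosen, used = [], 0.0
    for name, completeness, t_exp in ranked:
        if used + t_exp <= time_budget_days:
            chosen.append(name)
            used += t_exp
    return chosen
```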
Immobilized anaerobic fermentation for bio-fuel production by Clostridium co-culture.
Xu, Lei; Tschirner, Ulrike
2014-08-01
Clostridium thermocellum/Clostridium thermolacticum co-culture fermentation has been shown to be a promising way of producing ethanol from several carbohydrates. In this research, immobilization techniques using sodium alginate and alkali pretreatment were successfully applied to this co-culture to improve bio-ethanol fermentation performance during consolidated bio-processing (CBP). The ethanol yield obtained increased by over 60% (as a percentage of the theoretical maximum) compared to free-cell fermentation. For cellobiose under optimized conditions, the ethanol yields approached about 85% of the theoretical efficiency. To examine the feasibility of this immobilized co-culture for lignocellulosic biomass conversion, untreated and pretreated aspen biomass was also used for fermentation experiments. The immobilized co-culture shows clear benefits for bio-ethanol production in the CBP process using pretreated aspen. With a 3-h, 9% NaOH pretreatment, the aspen powder fermentation yields approached 78% of the maximum theoretical efficiency, almost twice the yield of the untreated aspen fermentation.
NASA Astrophysics Data System (ADS)
Biswas, A.; Sharma, S. P.
2012-12-01
Self-potential (SP) anomaly is an important geophysical technique that measures the electrical potential due to natural sources of current in the Earth's subsurface. An inclined sheet type model is a very familiar structure associated with mineralization, fault planes, groundwater flow and many other geological features which exhibit a self-potential anomaly. A number of linearized and global inversion approaches have been developed for the interpretation of SP anomalies over different structures for various purposes. The mathematical expression to compute the forward response over a two-dimensional dipping sheet type structure can be written in three different ways, using five variables in each case, and the complexity of the inversion differs among the three forward approaches. An interpretation of self-potential anomalies using very fast simulated annealing (VFSA) global optimization has been developed in the present study, which yielded new insight into the uncertainty and equivalence in model parameters. Interpretation of the measured data yields the location of the causative body, the depth to its top, its extension, its dip and the quality of the causative body. In the present study, the performance of the three forward approaches in the interpretation of self-potential anomalies is compared to assess the efficacy of each approach in resolving possible ambiguity. Even though each forward formulation yields the same forward response, optimizing the different sets of variables in the different forward problems poses different kinds of ambiguity in the interpretation. The performance of the three approaches in optimization has been compared, and it is observed that one of the three methods is the most suitable for this kind of study. Our VFSA approach has been tested on synthetic, noisy and field data for the three different methods to show the efficacy and suitability of the best method. It is important to use the forward problem in the optimization that yields the best result without ambiguity and with smaller uncertainty. Keywords: SP anomaly, inclined sheet, 2D structure, forward problems, VFSA optimization
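A generic very fast simulated annealing loop of the kind used here, with the misfit function wrapping any one of the three forward SP formulations; the Ingber-style move and cooling schedule are standard, and the constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def vfsa(misfit, lo, hi, n_iter=2000, t0=1.0, c=0.5):
    """Box-bounded VFSA: misfit(m) returns the data misfit for model
    vector m (here, the five SP sheet parameters of whichever forward
    formulation is chosen); lo and hi are parameter bound arrays."""
    m = rng.uniform(lo, hi)
    cur_e = misfit(m)
    best, best_e = m.copy(), cur_e
    ndim = len(m)
    for k in range(1, n_iter + 1):
        temp = t0 * np.exp(-c * k ** (1.0 / ndim))   # cooling schedule
        u = rng.uniform(size=ndim)
        # Ingber's Cauchy-like move, scaled to the parameter bounds
        step = np.sign(u - 0.5) * temp * ((1 + 1 / temp) ** np.abs(2 * u - 1) - 1)
        cand = np.clip(m + step * (hi - lo), lo, hi)
        e = misfit(cand)
        # Metropolis acceptance against the current model
        if e < cur_e or rng.uniform() < np.exp((cur_e - e) / temp):
            m, cur_e = cand, e
            if e < best_e:
                best, best_e = cand.copy(), e
    return best, best_e
```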
First-principles calculations of Ti and O NMR chemical shift tensors in ferroelectric perovskites
NASA Astrophysics Data System (ADS)
Pechkis, Daniel; Walter, Eric; Krakauer, Henry
2011-03-01
Complementary chemical shift calculations were carried out with embedded clusters, using quantum chemistry methods, and with periodic boundary conditions, using the GIPAW approach within the Quantum Espresso package. Compared to oxygen chemical shifts, δ̂(O), cluster calculations for δ̂(Ti) were found to be more sensitive to size effects, termination, and the choice of Gaussian-type atomic basis set, while GIPAW results were found to be more sensitive to the pseudopotential construction. The two approaches complemented each other in optimizing these factors. We show that the two approaches yield comparable chemical shifts for suitably converged simulations, and results are compared with available experimental measurements. Supported by ONR.
Analysis of atomic force microscopy data for surface characterization using fuzzy logic
DOE Office of Scientific and Technical Information (OSTI.GOV)
Al-Mousa, Amjed, E-mail: aalmousa@vt.edu; Niemann, Darrell L.; Niemann, Devin J.
2011-07-15
In this paper we present a methodology to characterize surface nanostructures of thin films. The methodology identifies and isolates nanostructures using Atomic Force Microscopy (AFM) data and extracts quantitative information, such as their size and shape. The fuzzy logic based methodology relies on a Fuzzy Inference Engine (FIE) to classify the data points as being top, bottom, uphill, or downhill. The resulting data sets are then further processed to extract quantitative information about the nanostructures. In the present work we introduce a mechanism which can consistently distinguish crowded surfaces from those with sparsely distributed structures and present an omni-directional search technique to improve the structural recognition accuracy. In order to demonstrate the effectiveness of our approach we present a case study which uses our approach to quantitatively identify particle sizes of two specimens each with a unique gold nanoparticle size distribution.
Research Highlights:
- A fuzzy logic analysis technique capable of characterizing AFM images of thin films.
- The technique is applicable to different surfaces regardless of their densities.
- The fuzzy logic technique does not require manual adjustment of the algorithm parameters.
- The technique can quantitatively capture differences between surfaces.
- This technique yields more realistic structure boundaries compared to other methods.
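A toy stand-in for the point-classification step, using crisp thresholds where the paper's Fuzzy Inference Engine uses membership functions and rules; the threshold values are arbitrary:

```python
def classify_point(height_norm, slope):
    """Label one AFM data point as top/bottom/uphill/downhill from its
    normalized height (0..1) and local slope; a crisp caricature of
    the fuzzy inference described in the abstract."""
    if abs(slope) < 0.1:                      # locally flat
        return "top" if height_norm > 0.5 else "bottom"
    return "uphill" if slope > 0 else "downhill"
```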
Kirm, Benjamin; Magdevska, Vasilka; Tome, Miha; Horvat, Marinka; Karničar, Katarina; Petek, Marko; Vidmar, Robert; Baebler, Spela; Jamnik, Polona; Fujs, Štefan; Horvat, Jaka; Fonovič, Marko; Turk, Boris; Gruden, Kristina; Petković, Hrvoje; Kosec, Gregor
2013-12-17
Erythromycin is a medically important antibiotic, biosynthesized by the actinomycete Saccharopolyspora erythraea. Genes encoding erythromycin biosynthesis are organized in a gene cluster, spanning over 60 kbp of DNA. Most often, gene clusters encoding biosynthesis of secondary metabolites contain regulatory genes. In contrast, the erythromycin gene cluster does not contain regulatory genes, and regulation of its biosynthesis has therefore remained poorly understood, which has for a long time limited genetic engineering approaches for erythromycin yield improvement. We used a comparative proteomic approach to screen for potential regulatory proteins involved in erythromycin biosynthesis. We have identified a putative regulatory protein, SACE_5599, which shows significantly higher levels of expression in an erythromycin high-producing strain compared to the wild type S. erythraea strain. SACE_5599 is a member of an uncharacterized family of putative regulatory genes located in several actinomycete biosynthetic gene clusters. Importantly, increased expression of SACE_5599 was observed in the complex fermentation medium and under controlled bioprocess conditions, simulating a high-yield industrial fermentation process in the bioreactor. Inactivation of SACE_5599 in the high-producing strain significantly reduced erythromycin yield, in addition to drastically decreasing the sporulation intensity of the SACE_5599-inactivated strains when cultivated on ABSM4 agar medium. In contrast, constitutive overexpression of SACE_5599 in the wild type NRRL23338 strain resulted in an increase of erythromycin yield by 32%. A similar yield increase was also observed when we overexpressed the bldD gene, a previously identified regulator of erythromycin biosynthesis, thereby for the first time revealing its potential for improving erythromycin biosynthesis. SACE_5599 is the second putative regulatory gene to be identified in S. erythraea which has a positive influence on erythromycin yield. Like bldD, SACE_5599 is involved in morphological development of S. erythraea, suggesting a very close relationship between secondary metabolite biosynthesis and morphological differentiation in this organism. While the mode of action of SACE_5599 remains to be elucidated, the manipulation of this gene clearly shows potential for improvement of erythromycin production in S. erythraea in an industrial setting. We have also demonstrated the applicability of the comparative proteomics approach for identifying new regulatory elements involved in biosynthesis of secondary metabolites in industrial conditions.
Case, Brett A; Hackel, Benjamin J
2016-08-01
Protein ligand charge can impact physiological delivery, with charge reduction often benefiting performance. Yet neutralizing mutations can be detrimental to protein function. Herein, three approaches are evaluated to introduce charged-to-neutral mutations of three cations and three anions within an affibody engineered to bind epidermal growth factor receptor. These approaches (combinatorial library sorting, or consensus design based on natural homologs or on library-sorted mutants) are used to identify mutations with favorable affinity, stability, and recombinant yield. Consensus design, based on 942 affibody homologs, yielded a mutant of modest function (Kd = 11 ± 4 nM, Tm = 62°C, and yield = 4.0 ± 0.8 mg/L, compared to 5.3 ± 1.7 nM, 71°C, and 3.5 ± 0.3 mg/L for the parental affibody). Extension of consensus design to 10 additional mutants exhibited varied performance, including a substantially improved mutant (Kd = 6.9 ± 1.4 nM, Tm = 71°C, and 12.7 ± 0.9 mg/L yield). Sorting a homolog-based combinatorial library of 7 × 10^5 mutants generated a distribution of mutants with lower stability and yield, but did identify one strongly binding variant (Kd = 1.2 ± 0.3 nM, Tm = 69°C, and 6.0 ± 0.4 mg/L yield). Synthetic consensus design, based on the amino acid distribution in functional library mutants, yielded higher affinities (P = 0.05) with comparable stabilities and yields. The best of four analyzed clones had Kd = 1.7 ± 0.5 nM, Tm = 68°C, and 7.0 ± 0.5 mg/L yield. While all three approaches were effective in creating targeted affibodies with six charged-to-neutral mutations, synthetic consensus design proved to be the most robust and provides a valuable tool for ligand engineering, particularly in the context of charge manipulation.
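Consensus design in its simplest form takes the most frequent residue at each alignment position; a sketch (real consensus design typically also weights sequences and filters positions, and the example sequences below are made up):

```python
from collections import Counter

def consensus_sequence(alignment):
    """For each column of a gap-free multiple sequence alignment of
    homologs, take the most frequent residue."""
    return "".join(
        Counter(col).most_common(1)[0][0] for col in zip(*alignment)
    )

# e.g. consensus_sequence(["VDNKF", "VDNRF", "VENKF"]) -> "VDNKF"
```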
Yield stress in amorphous solids: A mode-coupling-theory analysis
NASA Astrophysics Data System (ADS)
Ikeda, Atsushi; Berthier, Ludovic
2013-11-01
The yield stress is a defining feature of amorphous materials which is difficult to analyze theoretically, because it stems from the strongly nonlinear response of an arrested solid to an applied deformation. Mode-coupling theory predicts the flow curves of materials undergoing a glass transition and thus offers predictions for the yield stress of amorphous solids. We use this approach to analyze several classes of disordered solids, using simple models of hard-sphere glasses, soft glasses, and metallic glasses for which the mode-coupling predictions can be directly compared to the outcome of numerical measurements. The theory correctly describes the emergence of a yield stress of entropic nature in hard-sphere glasses and, at a qualitative level, its rapid growth as density approaches random close packing. By contrast, the emergence of solid behavior in soft and metallic glasses, which originates from direct particle interactions, is not well described by the theory. We show that similar shortcomings arise in the description of the caging dynamics of the glass phase at rest. We discuss the range of applicability of mode-coupling theory to understand the yield stress and nonlinear rheology of amorphous materials.
A comparison of approaches for estimating bottom-sediment mass in large reservoirs
Juracek, Kyle E.
2006-01-01
Estimates of sediment and sediment-associated constituent loads and yields from drainage basins are necessary for the management of reservoir-basin systems to address important issues such as reservoir sedimentation and eutrophication. One method for the estimation of loads and yields requires a determination of the total mass of sediment deposited in a reservoir. This method involves a sediment volume-to-mass conversion using bulk-density information. A comparison of four computational approaches (partition, mean, midpoint, strategic) for using bulk-density information to estimate total bottom-sediment mass in four large reservoirs indicated that the differences among the approaches were not statistically significant. However, the lack of statistical significance may be a result of the small sample size. Compared to the partition approach, which was presumed to provide the most accurate estimates of bottom-sediment mass, the results achieved using the strategic, mean, and midpoint approaches differed by as much as ±4, ±20, and ±44 percent, respectively. It was concluded that the strategic approach may merit further investigation as a less time-consuming and less costly alternative to the partition approach.
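The volume-to-mass conversion underlying all four approaches is straightforward; a sketch contrasting our reading of the partition approach (zone-by-zone bulk densities) with the mean approach (one average density for the whole deposit):

```python
def mass_partition(zones):
    # zones: list of (volume_m3, bulk_density_kg_m3) pairs, one per
    # partitioned region of the reservoir bottom (assumed reading of
    # the "partition" approach)
    return sum(v * rho for v, rho in zones)

def mass_mean(total_volume_m3, sampled_densities_kg_m3):
    # single mean bulk density applied to the whole sediment volume
    mean_rho = sum(sampled_densities_kg_m3) / len(sampled_densities_kg_m3)
    return total_volume_m3 * mean_rho
```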
A Fast Goal Recognition Technique Based on Interaction Estimates
NASA Technical Reports Server (NTRS)
E-Martin, Yolanda; R-Moreno, Maria D.; Smith, David E.
2015-01-01
Goal recognition is the task of inferring an actor's goals given some or all of the actor's observed actions. There is considerable interest in goal recognition for use in intelligent personal assistants, smart environments, intelligent tutoring systems, and monitoring users' needs. In much of this work, the actor's observed actions are compared against a generated library of plans. Recent work by Ramirez and Geffner makes use of AI planning to determine how closely a sequence of observed actions matches plans for each possible goal. For each goal, this is done by comparing the cost of a plan for that goal with the cost of a plan for that goal that includes the observed actions. This approach yields useful rankings, but is impractical for real-time goal recognition in large domains because of the computational expense of constructing plans for each possible goal. In this paper, we introduce an approach that propagates cost and interaction information in a plan graph, and uses this information to estimate goal probabilities. We show that this approach is much faster, but still yields high quality results.
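The Ramirez and Geffner ranking referenced above turns the two plan costs per goal into a posterior; a minimal sketch with Boltzmann weighting (the rate beta is assumed), where the plan-graph cost estimates of this paper would stand in for exact planner costs:

```python
import math

def goal_posteriors(cost_with_obs, cost_without_obs, priors, beta=1.0):
    """cost_with_obs[g]: cheapest plan for goal g consistent with the
    observations; cost_without_obs[g]: cheapest plan for g ignoring
    them. A smaller difference means the observations are better
    explained by g."""
    scores = {}
    for g in priors:
        delta = cost_with_obs[g] - cost_without_obs[g]
        scores[g] = priors[g] * math.exp(-beta * delta)
    z = sum(scores.values())
    return {g: s / z for g, s in scores.items()}
```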
Improving the monitoring of crop productivity using spaceborne solar-induced fluorescence.
Guan, Kaiyu; Berry, Joseph A; Zhang, Yongguang; Joiner, Joanna; Guanter, Luis; Badgley, Grayson; Lobell, David B
2016-02-01
Large-scale monitoring of crop growth and yield has important value for forecasting food production and prices and ensuring regional food security. A newly emerging satellite retrieval, solar-induced fluorescence (SIF) of chlorophyll, provides for the first time a direct measurement related to plant photosynthetic activity (i.e. electron transport rate). Here, we provide a framework to link SIF retrievals and crop yield, accounting for stoichiometry, photosynthetic pathways, and respiration losses. We apply this framework to estimate United States crop productivity for 2007-2012, where we use the spaceborne SIF retrievals from the Global Ozone Monitoring Experiment-2 satellite, benchmarked with county-level crop yield statistics, and compare it with various traditional crop monitoring approaches. We find that a SIF-based approach accounting for photosynthetic pathways (i.e. C3 and C4 crops) provides the best measure of crop productivity among these approaches, despite the fact that SIF sensors are not yet optimized for terrestrial applications. We further show that SIF provides the ability to infer the impacts of environmental stresses on autotrophic respiration and carbon-use-efficiency, with a substantial sensitivity of both to high temperatures. These results indicate new opportunities for improved mechanistic understanding of crop yield responses to climate variability and change.
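A toy version of the SIF-to-yield chain the framework describes (pathway-specific GPP scaling, respiration folded into a carbon-use efficiency, harvest index to grain); every constant below is an illustrative placeholder, not a value from the paper:

```python
def yield_from_sif(sif_seasonal_sum, is_c4, k_gpp=20.0, cue=0.5, hi=0.5):
    """Sketch: GPP taken proportional to SIF with a pathway-specific
    factor, respiration losses folded into a carbon-use efficiency
    (CUE), and a harvest index converting biomass carbon to grain."""
    gpp = (1.3 if is_c4 else 1.0) * k_gpp * sif_seasonal_sum  # pathway scaling
    npp = cue * gpp                                           # after respiration
    return hi * npp                                           # grain fraction
```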
Chattoraj, Sayantan; Bhugra, Chandan; Li, Zheng Jane; Sun, Changquan Calvin
2014-12-01
The nonisothermal crystallization kinetics of amorphous materials is routinely analyzed by statistically fitting crystallization data to kinetic models. In this work, we systematically evaluate how model-dependent crystallization kinetics is impacted by variations in the heating rate and the selection of the kinetic model, two key factors that can lead to significant differences in the crystallization activation energy (Ea) of an amorphous material. Using amorphous felodipine, we show that Ea decreases with increasing heating rate, irrespective of the kinetic model evaluated in this work. The model that best describes the crystallization phenomenon cannot be identified readily through the statistical fitting approach because several kinetic models yield comparable R^2. Here, we propose an alternate paired model-fitting model-free (PMFMF) approach for identifying the most suitable kinetic model, in which Ea obtained from model-dependent kinetics is compared with that obtained from model-free kinetics. The most suitable kinetic model is identified as the one that yields Ea values comparable with the model-free kinetics. Through this PMFMF approach, nucleation and growth is identified as the main mechanism controlling the crystallization kinetics of felodipine. Using this PMFMF approach, we further demonstrate that the crystallization mechanism from the amorphous phase varies with heating rate.
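One widely used model-free estimator that could serve as the model-free half of such a pairing is the Kissinger relation (named here as an assumption; the abstract does not say which model-free method was used). It extracts Ea from the shift of the crystallization peak temperature Tp with heating rate β:

```latex
\ln\left(\frac{\beta}{T_p^{2}}\right) = -\frac{E_a}{R\,T_p} + \text{const.}
```

so Ea follows from the slope of ln(β/Tp²) against 1/Tp across several heating rates, with no kinetic model assumed.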
Improved Satellite-based Crop Yield Mapping by Spatially Explicit Parameterization of Crop Phenology
NASA Astrophysics Data System (ADS)
Jin, Z.; Azzari, G.; Lobell, D. B.
2016-12-01
Field-scale mapping of crop yields with satellite data often relies on the use of crop simulation models. However, these approaches can be hampered by inaccuracies in the simulation of crop phenology. Here we present and test an approach that uses dense time series of Landsat 7 and 8 acquisitions to calibrate parameters related to crop phenology simulation, such as leaf number and leaf appearance rates. These parameters are then mapped across the Midwestern United States for maize and soybean, and for two different simulation models. We then implement our recently developed Scalable satellite-based Crop Yield Mapper (SCYM) with simulations reflecting the improved phenology parameterizations, and compare the results to prior estimates based on default phenology routines. Our preliminary results show that the proposed method can effectively alleviate the underestimation of early-season LAI by the default Agricultural Production Systems sIMulator (APSIM), and that spatially explicit parameterization of the phenology model substantially improves SCYM performance in capturing the spatiotemporal variation in maize and soybean yield. The scheme presented in our study thus preserves the scalability of SCYM while significantly reducing its uncertainty.
Lee, Ilgyu; Han, Jong-In
2015-06-01
Simultaneous treatment (combining cell disruption and lipid extraction) using hydrodynamic cavitation (HC) was applied to Nannochloropsis salina to demonstrate a simple and integrated way to produce oil from wet microalgae. A high lipid yield from HC (25.9-99.0%) was observed compared with autoclaving (16.2-66.5%) and ultrasonication (5.4-26.9%) in terms of the specific energy input (500-10,000 kJ/kg). The optimal conditions for the simultaneous treatment were established using a statistical approach. The efficiency of the simultaneous method was also demonstrated by comparison with each separate treatment. The maximum lipid yield (predicted: 45.9%; experimental: 45.5%) was obtained using 0.89% sulfuric acid with a cavitation number of 1.17 and a reaction time of 25.05 min via response surface methodology. Considering its comparable extractability, energy efficiency, and potential for scale-up, HC may be a promising method for achieving industrial-scale microalgae operation. Copyright © 2015 Elsevier Ltd. All rights reserved.
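Response surface methodology of this kind can be sketched as a quadratic fit followed by locating the stationary point; the "experimental" data below are synthetic, centered near the reported optimum purely for illustration:

```python
import numpy as np

# Quadratic response-surface fit for lipid yield as a function of two factors:
# acid concentration x1 (%) and cavitation number x2. Data are hypothetical.
rng = np.random.default_rng(0)
x1 = rng.uniform(0.2, 1.5, 20)
x2 = rng.uniform(0.8, 2.0, 20)
y = 45 - 30 * (x1 - 0.89)**2 - 20 * (x2 - 1.17)**2 + rng.normal(0, 0.5, 20)

# Fit y = b0 + b1*x1 + b2*x2 + b3*x1*x2 + b4*x1^2 + b5*x2^2.
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point of the fitted surface: solve grad(y) = 0 for (x1, x2).
A = np.array([[2 * b[4], b[3]], [b[3], 2 * b[5]]])
opt = np.linalg.solve(A, -np.array([b[1], b[2]]))
print("predicted optimum (acid %, cavitation number):", opt)
```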
Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M.; El Fakhri, Georges
2013-01-01
Purpose: Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Methods: Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for that time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of each parameter to the sensitivity of the radioactivity with respect to that parameter. The authors compared the parametric images reconstructed using the direct approach with those reconstructed using the conventional indirect approach. Results: At the same bias, the direct approach yielded significant relative reductions in standard deviation of 12%–29% and 32%–70% for 50 × 10⁶ and 10 × 10⁶ detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40–50 iterations), while more than 500 iterations were needed for CG. Conclusions: The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimates of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method. PMID:24089922
Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M; El Fakhri, Georges
2013-10-01
Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for that time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of each parameter to the sensitivity of the radioactivity with respect to that parameter. The authors compared the parametric images reconstructed using the direct approach with those reconstructed using the conventional indirect approach. At the same bias, the direct approach yielded significant relative reductions in standard deviation of 12%-29% and 32%-70% for 50 × 10⁶ and 10 × 10⁶ detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40-50 iterations), while more than 500 iterations were needed for CG. The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimates of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method.
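The core numerical idea, conjugate gradient with a diagonal preconditioner, can be sketched generically. The paper's actual objective is a penalized Poisson log-likelihood and its preconditioner uses parameter-to-sensitivity ratios; the demo below simply solves a small symmetric positive-definite system with a Jacobi-style diagonal preconditioner:

```python
import numpy as np

def pcg(A, b, M_inv_diag, x0=None, iters=50, tol=1e-8):
    """Preconditioned conjugate gradient for A x = b, with the inverse
    preconditioner given as a diagonal (applied elementwise)."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    z = M_inv_diag * r          # preconditioning step
    p = z.copy()
    rz = r @ z
    for _ in range(iters):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Tiny demo on a symmetric positive-definite system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(pcg(A, b, M_inv_diag=1.0 / np.diag(A)))
```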
A spin column-free approach to sodium hydroxide-based glycan permethylation.
Hu, Yueming; Borges, Chad R
2017-07-24
Glycan permethylation was introduced as a tool to facilitate the study of glycans in 1903. Since that time, permethylation procedures have been continually modified to improve permethylation efficiency and qualitative applicability. Typically, however, either laborious preparation steps or cumbersome and uneconomical spin columns have been needed to obtain decent permethylation yields on small glycan samples. Here we describe a spin column-free (SCF) glycan permethylation procedure that is applicable to both O- and N-linked glycans and can be employed upstream to intact glycan analysis by MALDI-MS, ESI-MS, or glycan linkage analysis by GC-MS. The SCF procedure involves neutralization of NaOH beads by acidified phosphate buffer, which eliminates the risk of glycan oxidative degradation and avoids the use of spin columns. Optimization of the new permethylation procedure provided high permethylation efficiency for both hexose (>98%) and HexNAc (>99%) residues, with yields comparable to (or better than) those of some widely-used spin column-based procedures. A light vs. heavy labelling approach was employed to compare intact glycan yields from a popular spin-column based approach to the SCF approach. Recovery of intact N-glycans was significantly better with the SCF procedure (p < 0.05), but overall yield of O-glycans was similar or slightly diminished (p < 0.05 for tetrasaccharides or smaller). When the SCF procedure was employed upstream to hydrolysis, reduction and acetylation for glycan linkage analysis of pooled glycans from unfractionated blood plasma, analytical reproducibility was on par with that from previous spin column-based "glycan node" analysis results. When applied to blood plasma samples from stage III-IV breast cancer patients (n = 20) and age-matched controls (n = 20), the SCF procedure facilitated identification of three glycan nodes with significantly different distributions between the cases and controls (ROC c-statistics > 0.75; p < 0.01). In summary, the SCF permethylation procedure expedites and economizes both intact glycan analysis and linkage analysis of glycans from whole biospecimens.
A spin column-free approach to sodium hydroxide-based glycan permethylation†
Hu, Yueming; Borges, Chad R.
2018-01-01
Glycan permethylation was introduced as a tool to facilitate the study of glycans in 1903. Since that time, permethylation procedures have been continually modified to improve permethylation efficiency and qualitative applicability. Typically, however, either laborious preparation steps or cumbersome and uneconomical spin columns have been needed to obtain decent permethylation yields on small glycan samples. Here we describe a spin column-free (SCF) glycan permethylation procedure that is applicable to both O- and N-linked glycans and can be employed upstream to intact glycan analysis by MALDI-MS, ESI-MS, or glycan linkage analysis by GC-MS. The SCF procedure involves neutralization of NaOH beads by acidified phosphate buffer, which eliminates the risk of glycan oxidative degradation and avoids the use of spin columns. Optimization of the new permethylation procedure provided high permethylation efficiency for both hexose (>98%) and HexNAc (>99%) residues—yields which were comparable to (or better than) those of some widely-used spin column-based procedures. A light vs. heavy labelling approach was employed to compare intact glycan yields from a popular spin-column based approach to the SCF approach. Recovery of intact N-glycans was significantly better with the SCF procedure (p < 0.05), but overall yield of O-glycans was similar or slightly diminished (p < 0.05 for tetrasaccharides or smaller). When the SCF procedure was employed upstream to hydrolysis, reduction and acetylation for glycan linkage analysis of pooled glycans from unfractionated blood plasma, analytical reproducibility was on par with that from previous spin column-based “glycan node” analysis results. When applied to blood plasma samples from stage III–IV breast cancer patients (n = 20) and age-matched controls (n = 20), the SCF procedure facilitated identification of three glycan nodes with significantly different distributions between the cases and controls (ROC c-statistics > 0.75; p < 0.01). In summary, the SCF permethylation procedure expedites and economizes both intact glycan analysis and linkage analysis of glycans from whole biospecimens. PMID:28635997
Mingo, Janire; Erramuzpe, Asier; Luna, Sandra; Aurtenetxe, Olaia; Amo, Laura; Diez, Ibai; Schepens, Jan T. G.; Hendriks, Wiljan J. A. J.; Cortés, Jesús M.; Pulido, Rafael
2016-01-01
Site-directed mutagenesis (SDM) is a powerful tool to create defined collections of protein variants for experimental and clinical purposes, but its effectiveness is compromised when a large number of mutations is required. We present here a one-tube-only standardized SDM approach that generates comprehensive collections of amino acid substitution variants, including scanning and single site-multiple mutations. The approach combines unified mutagenic primer design with the mixing of multiple distinct primer pairs and/or plasmid templates to increase the yield of a single inverse-PCR mutagenesis reaction. A user-friendly program for automatic design of standardized primers for Ala-scanning mutagenesis is also made available. Experimental results were compared with a modeling approach together with stochastic simulation data. For single site-multiple mutagenesis purposes and for simultaneous mutagenesis in different plasmid backgrounds, combination of primer sets and/or plasmid templates in a single reaction tube yielded the distinct mutations in a stochastic fashion. For scanning mutagenesis, we found that a combination of overlapping primer sets in a single PCR reaction yielded the different individual mutations, although this yield did not necessarily follow a stochastic trend. Double mutants were generated when the overlap of primer pairs was below 60%. Our results illustrate that one-tube-only SDM effectively reduces the number of reactions required in large-scale mutagenesis strategies, facilitating the generation of comprehensive collections of protein variants suitable for functional analysis. PMID:27548698
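The stochastic behavior described for mixed primer pairs can be mimicked with a multinomial draw; the numbers of primer pairs and screened clones below are hypothetical:

```python
import numpy as np

# Stochastic sketch of one-tube mutagenesis: when k mutagenic primer pairs
# are mixed in a single reaction, the mutation carried by each recovered
# clone is (ideally) a uniform multinomial draw across the k designs.
rng = np.random.default_rng(1)
k = 8        # distinct primer pairs mixed in one tube (hypothetical)
clones = 48  # colonies picked after transformation (hypothetical)

counts = rng.multinomial(clones, np.full(k, 1.0 / k))
print("clones per mutation:", counts)
print("mutations recovered at least once:", np.count_nonzero(counts), "of", k)
```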
Echtermeyer, Alexander; Amar, Yehia; Zakrzewski, Jacek; Lapkin, Alexei
2017-01-01
A recently described C(sp³)–H activation reaction to synthesise aziridines was used as a model reaction to demonstrate the methodology of developing a process model using model-based design of experiments (MBDoE) and self-optimisation approaches in flow. The two approaches are compared in terms of experimental efficiency. The self-optimisation approach required the fewest experiments to reach the specified objectives of cost and product yield, whereas the MBDoE approach enabled rapid generation of a process model.
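A self-optimisation loop of this general shape can be sketched with a derivative-free optimizer driving a black-box "reactor"; the response surface and cost weights below are synthetic stand-ins, and the study's actual objective and optimizer may differ:

```python
import numpy as np
from scipy.optimize import minimize

# Self-optimisation sketch: treat the flow reactor as a black box returning
# an objective that combines product yield and reagent cost. In practice
# each function call would be a real experiment.
def reactor_objective(x):
    temp, equiv = x
    # Synthetic yield surface peaking near 110 C, saturating with equivalents.
    yield_pct = 80 * np.exp(-((temp - 110) / 25) ** 2) * (1 - np.exp(-2 * equiv))
    cost = 5 * equiv
    return -(yield_pct - cost)  # minimize negative (yield minus cost)

res = minimize(reactor_objective, x0=[90.0, 1.0], method="Nelder-Mead")
print("suggested conditions (T, equiv):", res.x, "objective:", -res.fun)
```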
Postlewait, Lauren M; Ethun, Cecilia G; McInnis, Mia R; Merchant, Nipun; Parikh, Alexander; Idrees, Kamran; Isom, Chelsea A; Hawkins, William; Fields, Ryan C; Strand, Matthew; Weber, Sharon M; Cho, Clifford S; Salem, Ahmed; Martin, Robert C G; Scoggins, Charles; Bentrem, David; Kim, Hong J; Carr, Jacquelyn; Ahmad, Syed; Abbott, Daniel; Wilson, Gregory C; Kooby, David A; Maithel, Shishir K
2018-01-01
Pancreatic mucinous cystic neoplasms (MCNs) are rare tumors, typically of the distal pancreas, that harbor malignant potential. Although resection is recommended, data are limited on optimal operative approaches to distal pancreatectomy for MCN. MCN resections (2000-2014; eight institutions) were included. Outcomes of minimally invasive and open MCN resections were compared. A total of 289 patients underwent distal pancreatectomy for MCN: 136 (47%) minimally invasive and 153 (53%) open. Minimally invasive procedures were associated with smaller MCN size (3.9 vs 6.8 cm; P = 0.001), lower operative blood loss (192 vs 392 mL; P = 0.001), and shorter hospital stay (5 vs 7 days; P = 0.001) compared with open. Despite higher American Society of Anesthesiologists class, hand-assisted procedures (n = 46) had advantages similar to laparoscopic/robotic (n = 76). When comparing hand-assisted to open, although MCN size was slightly smaller (4.1 vs 6.8 cm; P = 0.001), specimen length, operative time, and nodal yield were identical. Similar to laparoscopic/robotic, hand-assisted procedures had lower operative blood loss (161 vs 392 mL; P = 0.001) and shorter hospital stay (5 vs 7 days; P = 0.03) compared with open, without increased complications. The hand-assisted laparoscopic technique is a useful approach for MCN resection because specimen length, lymph node yield, operative time, and complication profiles are similar to open procedures, but it still offers the advantages of a minimally invasive approach. Hand-assisted laparoscopy should be considered as an alternative to the open technique or as an intermediate step before converting from total laparoscopic to open distal pancreatectomy for MCN.
Plausible rice yield losses under future climate warming.
Zhao, Chuang; Piao, Shilong; Wang, Xuhui; Huang, Yao; Ciais, Philippe; Elliott, Joshua; Huang, Mengtian; Janssens, Ivan A; Li, Tao; Lian, Xu; Liu, Yongwen; Müller, Christoph; Peng, Shushi; Wang, Tao; Zeng, Zhenzhong; Peñuelas, Josep
2016-12-19
Rice is the staple food for more than 50% of the world's population¹⁻³. Reliable prediction of changes in rice yield is thus central for maintaining global food security. This is an extraordinary challenge. Here, we compare the sensitivity of rice yield to temperature increase derived from field warming experiments and three modelling approaches: statistical models, local crop models and global gridded crop models. Field warming experiments produce a substantial rice yield loss under warming, with an average temperature sensitivity of −5.2 ± 1.4% K⁻¹. Local crop models give a similar sensitivity (−6.3 ± 0.4% K⁻¹), but statistical and global gridded crop models both suggest less negative impacts of warming on yields (−0.8 ± 0.3% and −2.4 ± 3.7% K⁻¹, respectively). Using data from field warming experiments, we further propose a conditional probability approach to constrain the large range of global gridded crop model results for future yield changes in response to warming by the end of the century (from −1.3% to −9.3% K⁻¹). The constraint implies a more negative response to warming (−8.3 ± 1.4% K⁻¹) and reduces the spread of the model ensemble by 33%. This yield reduction exceeds that estimated by the International Food Policy Research Institute assessment (−4.2 to −6.4% K⁻¹) (ref. 4). Our study suggests that without CO₂ fertilization, effective adaptation and genetic improvement, severe rice yield losses are plausible under intensive climate warming scenarios.
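One way such a conditional probability constraint can be implemented is to weight each gridded model's sensitivity by its likelihood under the field-experiment estimate (−5.2 ± 1.4% K⁻¹); the individual model sensitivities below are hypothetical:

```python
import numpy as np

# Emergent-constraint style sketch: weight each gridded crop model's
# projected sensitivity by its Gaussian likelihood given the field-warming
# estimate, then form the weighted ensemble mean.
model_sens = np.array([-1.3, -3.0, -5.5, -7.2, -9.3])  # % per K (hypothetical)
obs_mean, obs_sd = -5.2, 1.4                            # field-experiment constraint

w = np.exp(-0.5 * ((model_sens - obs_mean) / obs_sd) ** 2)
w /= w.sum()
constrained = w @ model_sens
print(f"constrained sensitivity: {constrained:.1f} % per K")
```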
Hickey, John M; Chiurugwi, Tinashe; Mackay, Ian; Powell, Wayne
2017-08-30
The rate of annual yield increases for major staple crops must more than double relative to current levels in order to feed a predicted global population of 9 billion by 2050. Controlled hybridization and selective breeding have been used for centuries to adapt plant and animal species for human use. However, achieving higher, sustainable rates of improvement in yields in various species will require renewed genetic interventions and dramatic improvement of agricultural practices. Genomic prediction of breeding values has the potential to improve selection, reduce costs and provide a platform that unifies breeding approaches, biological discovery, and tools and methods. Here we compare and contrast some animal and plant breeding approaches to make a case for bringing the two together through the application of genomic selection. We propose a strategy for the use of genomic selection as a unifying approach to deliver innovative 'step changes' in the rate of genetic gain at scale.
NASA Technical Reports Server (NTRS)
Beer, R.
1985-01-01
Small, low-cost comparator with 24-bit-precision yields ratio signal from pair of analog or digital input signals. Arithmetic logic chips (bit-slice) sample two 24-bit analog-to-digital converters approximately once every millisecond and accumulate them in two 24-bit registers. Approach readily modified to arbitrary precision.
Classical and Bayesian Seismic Yield Estimation: The 1998 Indian and Pakistani Tests
NASA Astrophysics Data System (ADS)
Shumway, R. H.
2001-10-01
The nuclear tests in May 1998 in India and Pakistan have stimulated a renewed interest in yield estimation based on limited data from uncalibrated test sites. We study here the problem of estimating yields using classical and Bayesian methods developed by Shumway (1992), utilizing calibration data from the Semipalatinsk test site and measured magnitudes for the 1998 Indian and Pakistani tests given by Murphy (1998). Calibration is done using multivariate classical or Bayesian linear regression, depending on the availability of measured magnitude-yield data and prior information. Confidence intervals for the classical approach are derived by applying an extension of Fieller's method suggested by Brown (1982). In the case where prior information is available, the posterior predictive magnitude densities are inverted to give posterior intervals for yield. Intervals obtained using the joint distribution of magnitudes are comparable to the single-magnitude estimates produced by Murphy (1998) and reinforce the conclusion that the announced yields of the Indian and Pakistani tests were too high.
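The classical calibration step has the shape of an inverse regression: fit magnitude against log yield on calibrated events, then invert for a new event's magnitude. The sketch below uses hypothetical calibration data and omits the Fieller/Bayesian interval construction:

```python
import numpy as np

# Classical calibration sketch: magnitude = a + b * log10(yield) fitted on
# calibration events, then inverted for a new measured magnitude.
rng = np.random.default_rng(2)
log_yield = np.log10(np.array([5.0, 20.0, 60.0, 150.0, 400.0]))  # kt (hypothetical)
mb = 4.0 + 0.75 * log_yield + rng.normal(0, 0.05, 5)             # synthetic magnitudes

b, a = np.polyfit(log_yield, mb, 1)   # slope, intercept
mb_new = 5.1                          # new event's magnitude (hypothetical)
yield_est = 10 ** ((mb_new - a) / b)
print(f"point estimate of yield: {yield_est:.0f} kt")
```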
Predictor symbology in computer-generated pictorial displays
NASA Technical Reports Server (NTRS)
Grunwald, A. J.
1981-01-01
The display under investigation is a tunnel display for four-dimensional commercial aircraft approach-to-landing under instrument flight rules. It is investigated whether more complex predictive information, such as a three-dimensional perspective vehicle symbol predicting future vehicle position as well as future vehicle attitude angles, contributes to a better system response, and suitable predictor laws for the predictor motions are formulated. Methods for utilizing the predictor symbol in controlling the forward velocity of the aircraft in four-dimensional approaches are investigated. The simulator tests show that the complex perspective vehicle symbol yields improved damping in the lateral response as compared to a flat two-dimensional predictor cross, but generally yields larger vertical deviations. Methods of using the predictor symbol in controlling the forward velocity of the vehicle are shown to be effective. The tunnel display with superimposed perspective vehicle symbol yields very satisfactory results and pilot acceptance in lateral control, but is found to be unsatisfactory in vertical control as a result of too-large vertical path-angle deviations.
NASA Astrophysics Data System (ADS)
Stirnweis, Lisa; Marcolli, Claudia; Dommen, Josef; Barmet, Peter; Frege, Carla; Platt, Stephen M.; Bruns, Emily A.; Krapf, Manuel; Slowik, Jay G.; Wolf, Robert; Prévôt, Andre S. H.; Baltensperger, Urs; El-Haddad, Imad
2017-04-01
Secondary organic aerosol (SOA) yields from the photo-oxidation of α-pinene were investigated in smog chamber (SC) experiments at low (23-29 %) and high (60-69 %) relative humidity (RH), various NOx / VOC ratios (0.04-3.8) and with different aerosol seed chemical compositions (acidic to neutralized sulfate-containing or hydrophobic organic). A combination of a scanning mobility particle sizer and an Aerodyne high-resolution time-of-flight aerosol mass spectrometer was used to determine SOA mass concentration and chemical composition. We used a Monte Carlo approach to parameterize smog chamber SOA yields as a function of the condensed-phase absorptive mass, which includes the sum of OA and the corresponding bound liquid water content. High RH increased SOA yields by factors of 1.5-6.4 compared to low RH. Yields at low NOx / VOC ratios were in general higher than yields at high NOx / VOC ratios. This NOx dependence follows the same trend as seen in previous studies for α-pinene SOA. A novel approach of data evaluation using volatility distributions derived from experimental data served as the basis for thermodynamic phase partitioning calculations of model mixtures in this study. These calculations predict liquid-liquid phase separation into organic-rich and electrolyte phases. At low NOx conditions, equilibrium partitioning between the gas and liquid phases can explain most of the increase in SOA yields observed at high RH, when fragmentation products are added to the model mixtures in addition to the α-pinene photo-oxidation products described in the literature. This increase is driven by both the increase in the absorptive mass and the solution non-ideality described by the compounds' activity coefficients. In contrast, at high NOx, equilibrium partitioning alone could not explain the strong increase in the yields with RH. This suggests that other processes, e.g. reactive uptake of semi-volatile species into the liquid phase, may occur and be enhanced at higher RH, especially for compounds formed under high NOx conditions, e.g. carbonyls.
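Partitioning calculations of this kind typically rest on the absorptive-equilibrium relation Fᵢ = 1/(1 + C*ᵢ/C_OA), solved self-consistently for the condensed mass; the volatility bins below are hypothetical, and at high RH the bound water would be added to the absorptive mass:

```python
import numpy as np

# Volatility-distribution partitioning sketch: given total concentrations in
# volatility bins (ug m-3) with saturation concentrations C*, iterate the
# absorptive-equilibrium relation until the condensed mass is self-consistent.
C_star = np.array([0.1, 1.0, 10.0, 100.0])   # saturation concentrations (hypothetical)
C_total = np.array([2.0, 4.0, 6.0, 8.0])     # total bin concentrations (hypothetical)

C_OA = 1.0  # initial guess for condensed organic mass
for _ in range(100):
    F = 1.0 / (1.0 + C_star / C_OA)          # particle-phase fraction per bin
    C_OA = np.sum(C_total * F)               # update condensed mass
print(f"condensed organic mass: {C_OA:.2f} ug m-3")
```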
Thorlund, Kristian; Thabane, Lehana; Mills, Edward J
2013-01-11
Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variances for all involved treatment comparisons are equal (i.e., the 'common variance' assumption). This approach 'borrows strength' for heterogeneity estimation across treatment comparisons and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. In this paper we describe four novel approaches to modeling heterogeneity variance - two novel model structures, and two approaches for the use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice.
Wu, Liang; Chen, Xinping; Cui, Zhenling; Zhang, Weifeng; Zhang, Fusuo
2014-01-01
The overuse of nitrogen (N) fertilizers on smallholder farms in rapidly developing countries has increased greenhouse gas (GHG) emissions and accelerated global N consumption over the past 20 years. In this study, a regional N management approach was developed based on the economic response of crops to N application rates in 1,726 on-farm experiments, to optimize N management across 12 agroecological subregions in the intensive Chinese smallholder maize belt. The grain yield and GHG emission intensity of this regional N management approach were investigated and compared to field-specific N management and farmers' practices. The regional N rate ranged from 150 to 219 kg N ha⁻¹ across the 12 agroecological subregions. Grain yields and GHG emission intensities under this regional N management approach were comparable to those under field-specific N management, which indicated that the regional N rate was close to the economically optimal N application. This regional N management approach, if widely adopted in China, could reduce N fertilizer use by more than 1.4 MT per year, increase maize production by 31.9 MT annually, and reduce annual GHG emissions by 18.6 MT. This regional N management approach can minimize net N losses and reduce GHG emission intensity from over- and underapplication, and therefore can also be used as a reference point for regional agricultural extension employees where soil and/or plant N monitoring is lacking. PMID:24875747
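The notion of an economically optimal N rate can be made concrete with a quadratic yield response: the optimum is where the marginal value of extra grain equals the fertilizer price. The coefficients and prices below are hypothetical:

```python
# Economically optimal N rate from a quadratic yield response
# y = a + b*N + c*N**2 (c < 0): set marginal value product equal to the
# fertilizer price, p_grain * (b + 2*c*N) = p_N, and solve for N.
a, b, c = 6000.0, 25.0, -0.06   # kg grain/ha response coefficients (hypothetical)
p_grain, p_N = 0.2, 0.9         # $/kg grain, $/kg N (hypothetical)

N_opt = (p_N / p_grain - b) / (2 * c)
print(f"economically optimal N rate: {N_opt:.0f} kg N/ha")
```

With these illustrative numbers the optimum lands near 170 kg N/ha, inside the 150-219 kg N ha⁻¹ range the study reports across subregions.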
Paul, Fiona; Otte, Jürgen; Schmitt, Imke; Dal Grande, Francesco
2018-06-05
The implementation of HTS (high-throughput sequencing) approaches is rapidly changing our understanding of the lichen symbiosis, by uncovering high bacterial and fungal diversity, which is often host-specific. Recently, HTS methods revealed the presence of multiple photobionts inside a single thallus in several lichen species. This differs from Sanger technology, which typically yields a single, unambiguous algal sequence per individual. Here we compared HTS and Sanger methods for estimating the diversity of green algal symbionts within lichen thalli using 240 lichen individuals belonging to two species of lichen-forming fungi. According to HTS data, Sanger technology consistently yielded the most abundant photobiont sequence in the sample. However, if the second most abundant photobiont exceeded 30% of the total HTS reads in a sample, Sanger sequencing generally failed. Our results suggest that most lichen individuals in the two analyzed species, Lasallia hispanica and L. pustulata, indeed contain a single, predominant green algal photobiont. We conclude that Sanger sequencing is a valid approach to detect the dominant photobionts in lichen individuals and populations. We discuss which research areas in lichen ecology and evolution will continue to benefit from Sanger sequencing, and which areas will profit from HTS approaches to assessing symbiont diversity.
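The reported behavior amounts to a simple decision rule on read abundances, sketched below with made-up read counts:

```python
# Rule of thumb reported above: Sanger recovers the dominant photobiont, but
# tends to fail when the second most abundant variant exceeds ~30% of reads.
def sanger_outcome(read_counts):
    total = sum(read_counts)
    ranked = sorted(read_counts, reverse=True)
    if len(ranked) > 1 and ranked[1] / total > 0.30:
        return "likely failure / ambiguous trace"
    return "dominant haplotype recovered"

print(sanger_outcome([820, 130, 50]))   # second variant ~13% -> clean read
print(sanger_outcome([550, 400, 50]))   # second variant 40% -> ambiguous
```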
Biotransformation of inorganic arsenic (iAs) involves methylation catalyzed by arsenic (+3 oxidation state) methyltransferase (As3mt), yielding mono-, di-, and trimethylated arsenicals. A comparative genomic approach focused on Ciona intestinalis, an invertebrate chordate, was u...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiong, Wei; Balkovic, Juraj; van der Velde, M.
Crop models are increasingly used to assess impacts of climate change/variability and management practices on the productivity and environmental performance of alternative cropping systems. Calibration is an important procedure for improving the reliability of model simulations, especially for large-area applications. However, global-scale crop model calibration has rarely been exercised due to limited data availability and high computing cost. Here we present a simple approach to calibrate the Environmental Policy Integrated Climate (EPIC) model for a global implementation of rice. We identify four parameters (potential heat unit – PHU, planting density – PD, harvest index – HI, and biomass energy ratio – BER) and calibrate them regionally to capture the spatial pattern of reported rice yield in 2000. Model performance is assessed by comparing simulated outputs with independent FAO national data. The comparison demonstrates that the global calibration scheme performs satisfactorily in reproducing the spatial pattern of rice yield, particularly in the main rice production areas. Spatial agreement increases substantially when more parameters are selected and calibrated, but with varying efficiencies. Among the parameters, PHU and HI exhibit the highest efficiencies in increasing the spatial agreement. Simulations with different calibration strategies generate a pronounced discrepancy of 5–35% in mean yields across latitude bands, and a small to moderate difference in estimated yield variability and yield trends for the period 1981–2000. The present calibration has little effect in improving simulated yield variability and trends at both regional and global levels, suggesting that further work is needed to reproduce the temporal variability of reported yields. This study highlights the importance of crop model calibration, and presents the possibility of a transparent and consistent upscaling approach for global crop simulations given the current availability of global databases of weather, soil, crop calendar, fertilizer and irrigation management information, and reported yield.
Cost Allocation Issues in Interlibrary Systems.
ERIC Educational Resources Information Center
Alexander, Ernest R.
1985-01-01
In comparing methods of allocating service transaction costs among member libraries of interlibrary systems, questions of how costs are to be estimated and what cost elements are to be included are critical. Different approaches to estimation yield varying results. Actual distribution of units accounts for the greatest variance in allocations. (CDD)
Interpersonal Relationships--A Review. Utah Studies in Vocational Rehabilitation.
ERIC Educational Resources Information Center
Jorgensen, Gary Q.; Rushlau, Perry J.
This monograph is a review of selected literature in the area of interpersonal relationships that has relevance to the client-counselor interaction. The studies have been treated within the framework of McGrath's descriptive model for interpersonal relationships. Comparative analysis of theoretical approaches has yielded two lines of evidence…
Cannatelli, Mark D.; Ragauskas, Arthur J.
2016-07-06
The biocatalytic synthesis of phenothiazones and related compounds has been achieved in an aqueous system under mild conditions facilitated by laccase oxidation. It was found that by coupling 2-aminothiophenol directly with 1,4-quinones, the product yields could be significantly increased compared to generating the 1,4-quinones in situ from the corresponding hydroquinones via laccase oxidation. However, laccase still proved to be pivotal for achieving the highest product yields by catalyzing the final oxidation step. Furthermore, a difference in reactivity of aromatic and aliphatic amines toward 1,4-naphthoquinone is observed. Finally, this study provides a sustainable approach to the synthesis of a biologically important class of compounds.
Trading carbon for food: global comparison of carbon stocks vs. crop yields on agricultural land.
West, Paul C; Gibbs, Holly K; Monfreda, Chad; Wagner, John; Barford, Carol C; Carpenter, Stephen R; Foley, Jonathan A
2010-11-16
Expanding croplands to meet the needs of a growing population, changing diets, and biofuel production comes at the cost of reduced carbon stocks in natural vegetation and soils. Here, we present a spatially explicit global analysis of tradeoffs between carbon stocks and current crop yields. The difference among regions is striking. For example, for each unit of land cleared, the tropics lose nearly two times as much carbon (∼120 tons·ha⁻¹ vs. ∼63 tons·ha⁻¹) and produce less than one-half the annual crop yield compared with temperate regions (1.71 tons·ha⁻¹·y⁻¹ vs. 3.84 tons·ha⁻¹·y⁻¹). Therefore, newly cleared land in the tropics releases nearly 3 tons of carbon for every 1 ton of annual crop yield compared with a similar area cleared in the temperate zone. By factoring crop yield into the analysis, we specify the tradeoff between carbon stocks and crops for all areas where crops are currently grown and thereby substantially enhance the spatial resolution relative to previous regional estimates. Particularly in the tropics, emphasis should be placed on increasing yields on existing croplands rather than clearing new lands. Our high-resolution approach can be used to determine the net effect of local land use decisions.
Vajzovic, Azra; Bura, Renata; Kohlmeier, Kevin; Doty, Sharon L
2012-10-01
A systematic study was conducted characterizing the effect of furfural, 5-hydroxymethylfurfural (5-HMF), and acetic acid concentration on the production of xylitol and ethanol by a novel endophytic yeast, Rhodotorula mucilaginosa strain PTD3. The influence of different inhibitor concentrations on the growth and fermentation abilities of PTD3 cultivated in synthetic nutrient media containing 30 g/l xylose or glucose was measured during liquid batch cultures. Concentrations of up to 5 g/l of furfural stimulated xylitol production by PTD3 to 77% of theoretical yield (10% higher compared to the control). Xylitol yields produced by this yeast were not affected in the presence of 5-HMF at concentrations of up to 3 g/l. At higher concentrations of furfural and 5-HMF, xylitol and ethanol yields were negatively affected. The higher the concentration of acetic acid present in the medium, the higher the ethanol yield, which approached 99% of theoretical yield (15% higher compared to the control). At all concentrations of acetic acid tested, xylitol yield was lowered. PTD3 was capable of metabolizing concentrations of 5, 15, and 5 g/l of furfural, 5-HMF, and acetic acid, respectively. This yeast would be a potent candidate for the bioconversion of lignocellulosic sugars to biochemicals, given that its xylitol and ethanol yields are stimulated in the presence of low concentrations of inhibitors and it is capable of metabolizing pretreatment degradation products.
Estimating the variance for heterogeneity in arm-based network meta-analysis.
Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R
2018-04-19
Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.
Yaseen, Muhammad; Aziz, Muhammad Zahir; Jafar, Abdul Aleem; Naveed, Muhammad; Saleem, Muhammad
2016-01-01
A field experiment in collaboration with a private textile company (Noor Fatima Fabrics Private (Ltd.), Faisalabad) was conducted to evaluate the effect of wastewater discharged from the bleaching unit, printing unit and end drain on the growth and yield of wheat in a saline-sodic soil. Textile wastewater along with canal water (control) was applied with and without liquid NPK fertilizer. The application of liquid NPK fertilizer with end-drain wastewater increased plant height, spike length, flag leaf length, root length, number of tillers (m⁻²), number of fertile tillers (m⁻²), 1000-grain weight, grain yield, straw yield and biological yield by up to 21, 20, 20, 44, 17, 20, 14, 44, 40 and 41%, respectively, compared to canal water (control). Similarly, NPK uptake in grain was increased by up to 15, 30 and 28%, respectively, with liquid-fertilizer-treated end-drain wastewater as compared to canal water with liquid fertilizer. Moreover, the concentration of heavy metals, particularly Cu, Cr, Pb and Cd, in grains was decreased by the application of wastewater along with liquid NPK. The results may imply that wastewater application along with liquid NPK could be a novel approach for improving the growth and yield of wheat in saline-sodic soils.
Geospatial modeling of plant stable isotope ratios - the development of isoscapes
NASA Astrophysics Data System (ADS)
West, J. B.; Ehleringer, J. R.; Hurley, J. M.; Cerling, T. E.
2007-12-01
Large-scale spatial variation in stable isotope ratios can yield critical insights into the spatio-temporal dynamics of biogeochemical cycles, animal movements, and shifts in climate, as well as anthropogenic activities such as commerce, resource utilization, and forensic investigation. Interpreting these signals requires that we understand and model the variation. We report progress in our development of plant stable isotope ratio landscapes (isoscapes). Our approach utilizes a GIS, gridded datasets, a range of modeling approaches, and spatially distributed observations. We synthesize findings from four studies to illustrate the general utility of the approach and its ability to represent observed spatio-temporal variability in plant stable isotope ratios, and also outline some specific areas of uncertainty. We also address two basic but critical questions central to our ability to model plant stable isotope ratios using this approach: 1. Do the continuous precipitation isotope ratio grids represent reasonable proxies for plant source water? 2. Do continuous climate grids (as is or modified) represent a reasonable proxy for the climate experienced by plants? Plant components modeled include leaf water, grape water (extracted from wine), bulk leaf material (Cannabis sativa; marijuana), and seed oil (Ricinus communis; castor bean). Our approaches to modeling the isotope ratios of these components varied from highly sophisticated process models to simple one-step fractionation models to regression approaches. The leaf water isoscapes were produced using steady-state models of enrichment and continuous grids of annual average precipitation isotope ratios and climate. These were compared to other modeling efforts, as well as to a relatively sparse but geographically distributed dataset from the literature. The latitudinal distributions and global averages compared favorably to other modeling efforts, and the observational data compared well to model predictions. These results yield confidence in the precipitation isoscapes used to represent plant source water, the modified climate grids used to represent leaf climate, and the efficacy of this approach to modeling. Further work confirmed these observations. The seed oil isoscape was produced using a simple model of lipid fractionation driven with the precipitation grid, and compared well to widely distributed observations of castor bean oil, again suggesting that the precipitation grids are reasonable proxies for plant source water. The marijuana leaf δ²H observations distributed across the continental United States were regressed against the precipitation δ²H grids and yielded a strong relationship, again suggesting that plant source water is reasonably well represented by the precipitation grid. Finally, the wine water δ¹⁸O isoscape was developed from regressions that related precipitation isotope ratios and climate to observations from a single vintage. Favorable comparisons between year-specific wine water isoscapes and inter-annual variations in previous vintages yielded confidence in the climate grids. Clearly, significant residual variability remains to be explained in all of these cases, and uncertainties vary depending on the component modeled, but we conclude from this synthesis that isoscapes are capable of representing real spatial and temporal variability in plant stable isotope ratios.
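The regression-style isoscape construction mentioned above reduces to fitting observed plant values against the precipitation grid and applying the fit grid-wide; all values in this sketch are synthetic:

```python
import numpy as np

# Regression-based isoscape sketch: relate observed plant delta-2H values to
# the precipitation delta-2H grid value at each collection site, then apply
# the fitted relationship across the full grid.
rng = np.random.default_rng(5)
precip_d2h = rng.uniform(-120, -20, 60)                    # per mil, at sites
plant_d2h = -25 + 0.7 * precip_d2h + rng.normal(0, 6, 60)  # synthetic observations

slope, intercept = np.polyfit(precip_d2h, plant_d2h, 1)

grid = np.linspace(-150, 0, 7)        # a 1-D stand-in for the gridded dataset
isoscape = intercept + slope * grid
print("predicted plant delta-2H across grid:", np.round(isoscape, 1))
```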
Some general remarks on hyperplasticity modelling and its extension to partially saturated soils
NASA Astrophysics Data System (ADS)
Lei, Xiaoqin; Wong, Henry; Fabbri, Antonin; Bui, Tuan Anh; Limam, Ali
2016-06-01
The essential ideas and equations of classic plasticity and hyperplasticity are successively recalled and compared, in order to highlight their differences and complementarities. The former is based on the mathematical framework proposed by Hill (The mathematical theory of plasticity. Oxford University Press, Oxford, 1950), whereas the latter is founded on the orthogonality hypothesis of Ziegler (An introduction to thermomechanics. Elsevier, North-Holland, 1983). The main drawback of classic plasticity is the possibility of violating the second principle of thermodynamics, while the relative ease of conjecturing the yield function to approach experimental results is its main advantage. By contrast, the a priori satisfaction of thermodynamic principles constitutes the chief advantage of hyperplasticity theory. It is also noteworthy that the latter approach allows a finer energy partition; in particular, the existence of frozen energy emerges as a natural consequence of its theoretical formulation. On the other hand, the relative difficulty of conjecturing an efficient dissipation function that produces accurate predictions is its main drawback. The two theories are thus better viewed as two complementary approaches. Following this comparative study, a methodology is developed to extend the hyperplasticity approach, initially formulated for dry or saturated materials, to partially saturated materials, accounting for interface energies and suction effects. A particular example based on the yield function of the modified Cam-Clay model is then presented. It is shown that the approach developed leads to a model consistent with other existing works.
Linking multimetric and multivariate approaches to assess the ecological condition of streams.
Collier, Kevin J
2009-10-01
Few attempts have been made to combine multimetric and multivariate analyses despite recognition that an integrated method could yield powerful tools for bioassessment. An approach is described that integrates eight macroinvertebrate community metrics into a Principal Components Analysis to develop a Multivariate Condition Score (MCS) from a calibration dataset of 511 samples. The MCS is compared to an Index of Biotic Integrity (IBI) derived using the same metrics based on the ratio to the reference-site mean. Both approaches were highly correlated, although the MCS appeared to offer greater potential for discriminating a wider range of impaired conditions. Both the MCS and IBI displayed low temporal variability within reference sites and were able to distinguish between reference conditions and low levels of catchment modification and local habitat degradation, although neither discriminated among three levels of low impact. Pseudosamples developed to test the response of the metric aggregation approaches to organic enrichment, urban, mining, pastoral and logging stressor scenarios ranked pressures in the same order, but the MCS provided a lower score for the urban scenario and a higher score for the pastoral scenario. The MCS was calculated for an independent test dataset of urban and reference sites, and yielded similar results to the IBI. Although both methods performed comparably, the MCS approach may have some advantages because it removes the subjectivity of assigning thresholds for scoring biological condition, and it appears to discriminate a wider range of degraded conditions.
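The MCS construction can be sketched as a standardized PCA composite rescaled to a 0-100 score; the calibration matrix below is random stand-in data, not the paper's metrics, and the orientation and rescaling choices are illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA

# Sketch of a Multivariate Condition Score: standardize the community
# metrics, take the first principal component, orient it so that higher
# means better condition, and rescale to 0-100.
rng = np.random.default_rng(3)
metrics = rng.normal(size=(511, 8))  # sites x metrics (synthetic stand-in)

z = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0)
pc1 = PCA(n_components=1).fit_transform(z).ravel()

# Orient the axis so it increases with a chosen reference metric (column 0).
if np.corrcoef(pc1, z[:, 0])[0, 1] < 0:
    pc1 = -pc1
mcs = 100 * (pc1 - pc1.min()) / (pc1.max() - pc1.min())
print("example scores:", np.round(mcs[:5], 1))
```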
Trade-offs among ecosystem services in a typical Karst watershed, SW China.
Tian, Yichao; Wang, Shijie; Bai, Xiaoyong; Luo, Guangjie; Xu, Yan
2016-10-01
Most research on ecosystem services in Karst areas has been limited to a single ecosystem service function. Few scholars have conducted comparative studies of the mutual relationships among ecosystem services, let alone revealed the trade-off and synergic relationships in typical Karst watersheds. This research aims to understand and quantitatively evaluate the relationships among ecosystem services in a typical Karst watershed, broaden the depth and width of trade-off and synergy analysis of ecosystem services, and explore a set of technical processes for characterizing these relationships. With the Shibantang Karst watershed in China as the research site, we explore the trade-off and synergic relationships of net primary productivity (NPP), water yield, and sediment yield by coupling the Soil and Water Assessment Tool (SWAT) and the Carnegie-Ames-Stanford Approach (CASA), and simulating and evaluating these three ecosystem services between 2000 and 2010. Results of this study are as follows. (1) The annual average water yield decreased from 528 mm in 2000 to 513 mm in 2010, a decline of 2.84%. (2) The annual average sediment yield decreased from 26.15 t/ha in 2000 to 23.81 t/ha in 2010, with an average annual reduction of 0.23 t/ha. (3) The annual average NPP increased from 739.38 g C m⁻² a⁻¹ in 2000 to 746.25 g C m⁻² a⁻¹ in 2010, an increase of 6.87 g C m⁻² a⁻¹. (4) Water yield and sediment yield are in a synergic relationship: an increase in water yield can raise the amount of soil erosion. NPP is in a trade-off relationship with water yield and sediment yield: improving NPP helps decrease water yield and soil erosion and increase soil conservation. This study provides policy makers and planners an approach to develop an integrated model, as well as to design mapping and monitoring protocols for land use change and ecosystem service assessments. Copyright © 2016 Elsevier B.V. All rights reserved.
Kujur, Alice; Saxena, Maneesha S; Bajaj, Deepak; Laxmi; Parida, Swarup K
2013-12-01
Enormous population growth, climate change and global warming are now considered major threats to agriculture and the world's food security. To improve the productivity and sustainability of agriculture, the development of high-yielding, durable abiotic- and biotic-stress-tolerant cultivars and climate-resilient crops is essential. Hence, understanding the molecular mechanisms and dissecting the complex quantitative yield and stress tolerance traits are prime objectives in current agricultural biotechnology research. In recent years, tremendous progress has been made in plant genomics and molecular breeding research pertaining to conventional and next-generation whole genome, transcriptome and epigenome sequencing efforts, the generation of huge genomic, transcriptomic and epigenomic resources, and the development of modern genomics-assisted breeding approaches in diverse crop genotypes with contrasting yield and abiotic stress tolerance traits. Unfortunately, the detailed molecular mechanisms and gene regulatory networks controlling such complex quantitative traits are not yet well understood in crop plants. Therefore, we propose integrated strategies involving the available enormous and diverse traditional and modern -omics (structural, functional, comparative and epigenomics) approaches/resources and genomics-assisted breeding methods that agricultural biotechnologists can adopt/utilize to dissect and decode the molecular and gene regulatory networks involved in the complex quantitative yield and stress tolerance traits of crop plants. This would provide clues and much-needed inputs for the rapid selection of novel functionally relevant molecular tags regulating such complex traits, to expedite traditional and modern marker-assisted genetic enhancement studies in target crop species for developing high-yielding stress-tolerant varieties.
Wei, Mi; Tong, Yao; Wang, Hongbo; Wang, Lihua; Yu, Longjiang
2016-04-01
The development of efficient pretreatment methods that can disrupt the peripheral lignocellulose and even the parenchyma cells is of great importance for the production of diosgenin from turmeric rhizomes. It was found that low-pressure steam expansion pretreatment (LSEP) could improve the diosgenin yield by more than 40% compared with the case without pretreatment, while simultaneously increasing the production of fermentable sugar by 27.37%. Furthermore, few inhibitory compounds were produced in the LSEP process, which is extremely favorable for the subsequent biotransformation of fermentable sugar to other valuable products such as ethanol. A preliminary study showed that the ethanol yield when using the fermentable sugar as a carbon source was comparable to that using glucose. The liquid residue of LSEP-treated turmeric tuber after diosgenin production can be utilized as a quality fermentable carbon source. Therefore, LSEP has great potential for industrial application in clean diosgenin production and the comprehensive utilization of turmeric tuber. Copyright © 2016 Elsevier Ltd. All rights reserved.
Can plastic mulching replace irrigation in dryland agriculture?
NASA Astrophysics Data System (ADS)
Wang, L.; Daryanto, S.; Jacinthe, P. A.
2017-12-01
Increasing water use efficiency (WUE) is a key strategy for maintaining crop yields without over-exploiting scarce water resources. Plastic mulching for wheat and maize has been commonly used in China, but its effects on yield, soil moisture, evapotranspiration (ET), and WUE have not been compared with traditional irrigation methods. Using a meta-analysis approach, we quantitatively examined the efficacy of plastic mulching in comparison with traditional irrigation in dryland agriculture. Our results showed that plastic mulching resulted in yield increases comparable to those of irrigated crops while using 24% less water. By covering the ridges with plastic and channeling rainwater into a very narrow planting zone (furrow), plastic mulching increased WUE and available soil moisture. Higher WUE in plastic-mulched croplands was likely a result of a greater proportion of available water being used for transpiration rather than evaporation. If problems related to production costs and residual plastic pollution can be managed, plastic mulching could become a promising strategy for dryland farming in other regions.
Polariton-Assisted Singlet Fission in Acene Aggregates.
Martínez-Martínez, Luis A; Du, Matthew; F Ribeiro, Raphael; Kéna-Cohen, Stéphane; Yuen-Zhou, Joel
2018-04-19
Singlet fission is an important candidate to increase energy conversion efficiency in organic photovoltaics by providing a pathway to increase the quantum yield of excitons per photon absorbed in select materials. We investigate the dependence of exciton quantum yield for acenes in the strong light-matter interaction (polariton) regime, where the materials are embedded in optical microcavities. Starting from an open-quantum-systems approach, we build a kinetic model for time-evolution of species of interest in the presence of singlet quenchers and show that polaritons can decrease or increase exciton quantum yields compared to the cavity-free case. In particular, we find that hexacene, under the conditions of our model, can feature a higher yield than cavity-free pentacene when assisted by polaritonic effects. Similarly, we show that pentacene yield can be increased when assisted by polariton states. Finally, we address how various relaxation processes between bright and dark states in lossy microcavities affect polariton photochemistry. Our results also provide insights on how to choose microcavities to enhance similarly related chemical processes.
Cow genotyping strategies for genomic selection in a small dairy cattle population.
Jenko, J; Wiggans, G R; Cooper, T A; Eaglen, S A E; Luff, W G de L; Bichard, M; Pong-Wong, R; Woolliams, J A
2017-01-01
This study compares how different cow genotyping strategies increase the accuracy of genomic estimated breeding values (EBV) in dairy cattle breeds with low numbers. In these breeds, few sires have progeny records, and genotyping cows can improve the accuracy of genomic EBV. The Guernsey breed is a small dairy cattle breed with approximately 14,000 recorded individuals worldwide. Predictions of phenotypes for milk yield, fat yield, protein yield, and calving interval were made for Guernsey cows from England and Guernsey Island using genomic EBV, with training sets including 197 de-regressed proofs of genotyped bulls and cows selected from among 1,440 genotyped cows using different genotyping strategies. Prediction accuracies were tested using 10-fold cross-validation among the cows. Genomic EBV were predicted using 4 different methods: (1) pedigree BLUP, (2) genomic BLUP using only bulls, (3) univariate genomic BLUP using bulls and cows, and (4) bivariate genomic BLUP. Genotyping cows with phenotypes and using their data for the prediction of single nucleotide polymorphism effects increased the correlation between genomic EBV and phenotypes, compared with using only bulls, by 0.163±0.022 for milk yield, 0.111±0.021 for fat yield, and 0.113±0.018 for protein yield; a decrease of 0.014±0.010 for calving interval from a low base was the only exception. Genetic correlations between phenotypes from bulls and cows were approximately 0.6 for all yield traits and significantly different from 1. Only a very small change occurred in the correlation between genomic EBV and phenotypes when using the bivariate model. It was always better to genotype all the cows, but when only half of the cows were genotyped, a divergent selection strategy was better than the random or directional selection approaches. Divergent selection of 30% of the cows remained superior for the yield traits in 8 of 10 folds. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
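Genomic BLUP of the kind compared here can be sketched with a VanRaden genomic relationship matrix and a single mixed-model solve; the marker data, heritability and phenotypes below are simulated stand-ins:

```python
import numpy as np

# GBLUP sketch: build a VanRaden genomic relationship matrix G from 0/1/2
# genotypes and compute genomic EBVs as the BLUP of breeding values.
rng = np.random.default_rng(4)
n, m = 200, 1000                                   # animals, SNP markers (hypothetical)
M = rng.binomial(2, 0.3, size=(n, m)).astype(float)

p = M.mean(axis=0) / 2                             # allele frequencies
Z = M - 2 * p                                      # centered genotypes
G = Z @ Z.T / (2 * np.sum(p * (1 - p)))            # VanRaden G matrix

h2 = 0.3                                           # assumed heritability
lam = (1 - h2) / h2                                # variance ratio sigma_e^2/sigma_g^2
y = rng.normal(size=n)                             # phenotypes / de-regressed proofs

# BLUP of breeding values: EBV = G (G + lam I)^-1 (y - mean).
ebv = G @ np.linalg.solve(G + lam * np.eye(n), y - y.mean())
print("first five genomic EBVs:", np.round(ebv[:5], 3))
```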
Wigman, J T W; van Os, J; Borsboom, D; Wardenaar, K J; Epskamp, S; Klippel, A; Viechtbauer, W; Myin-Germeys, I; Wichers, M
2015-08-01
It has been suggested that the structure of psychopathology is best described as a complex network of components that interact in dynamic ways. The goal of the present paper was to examine the concept of psychopathology from a network perspective, combining complementary top-down and bottom-up approaches using momentary assessment techniques. A pooled Experience Sampling Method (ESM) dataset of three groups (individuals with a diagnosis of depression, psychotic disorder or no diagnosis) was used (pooled N = 599). The top-down approach explored the network structure of mental states across different diagnostic categories. For this purpose, networks of five momentary mental states ('cheerful', 'content', 'down', 'insecure' and 'suspicious') were compared between the three groups. The complementary bottom-up approach used principal component analysis to explore whether empirically derived network structures yield meaningful higher order clusters. Individuals with a clinical diagnosis had more strongly connected moment-to-moment network structures, especially the depressed group. This group also showed more interconnections specifically between positive and negative mental states than the psychotic group. In the bottom-up approach, all possible connections between mental states were clustered into seven main components that together captured the main characteristics of the network dynamics. Our combination of (i) comparing network structure of mental states across three diagnostically different groups and (ii) searching for trans-diagnostic network components across all pooled individuals showed that these two approaches yield different, complementary perspectives in the field of psychopathology. The network paradigm therefore may be useful to map transdiagnostic processes.
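One common way to build such state networks from momentary data is via partial correlations derived from the inverse covariance matrix; the sketch below applies this to simulated ratings of the five mental states. This is a generic illustration, not the multilevel time-lagged models the study used.

```python
# Rough sketch of one way to estimate a momentary mental-state network:
# partial correlations from the precision matrix of ESM item scores
# (simulated ratings with some shared structure).
import numpy as np

rng = np.random.default_rng(2)
states = ["cheerful", "content", "down", "insecure", "suspicious"]
n_obs = 800
latent = rng.normal(0, 1, (n_obs, 2))            # two latent mood factors
load = rng.normal(0, 0.8, (2, len(states)))
X = latent @ load + rng.normal(0, 1, (n_obs, len(states)))

prec = np.linalg.inv(np.cov(X, rowvar=False))    # precision matrix
d = np.sqrt(np.diag(prec))
partial = -prec / np.outer(d, d)                 # partial correlations
np.fill_diagonal(partial, 1.0)

for i in range(len(states)):                     # edge weights of the network
    for j in range(i + 1, len(states)):
        print(f"{states[i]:>10} -- {states[j]:<10} {partial[i, j]:+.2f}")
```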
Accounting for range uncertainties in the optimization of intensity modulated proton therapy.
Unkelbach, Jan; Chan, Timothy C Y; Bortfeld, Thomas
2007-05-21
Treatment plans optimized for intensity modulated proton therapy (IMPT) may be sensitive to range variations. The dose distribution may deteriorate substantially when the actual range of a pencil beam does not match the assumed range. We present two treatment planning concepts for IMPT which incorporate range uncertainties into the optimization. The first method is a probabilistic approach. The range of a pencil beam is assumed to be a random variable, which makes the delivered dose and the value of the objective function a random variable too. We then propose to optimize the expectation value of the objective function. The second approach is a robust formulation that applies methods developed in the field of robust linear programming. This approach optimizes the worst case dose distribution that may occur, assuming that the ranges of the pencil beams may vary within some interval. Both methods yield treatment plans that are considerably less sensitive to range variations compared to conventional treatment plans optimized without accounting for range uncertainties. In addition, both approaches, although conceptually different, yield very similar results on a qualitative level.
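The two formulations can be contrasted in a toy problem: with a finite set of range scenarios, the probabilistic plan minimizes the expected quadratic dose deviation while the robust plan minimizes the worst scenario. The dose-influence matrices, scenario set, and quadratic objective below are placeholders, not a clinical dose engine.

```python
# Sketch of the two planning objectives over discrete range scenarios
# (hypothetical 1-D dose model; not a clinical implementation).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_vox, n_beams, n_scen = 30, 10, 5
D = rng.uniform(0, 1, size=(n_scen, n_vox, n_beams))  # one matrix per scenario
p = np.full(n_scen, 1.0 / n_scen)        # scenario probabilities
d_presc = np.ones(n_vox)                  # prescribed dose

def quad_obj(w, Dk):
    return np.sum((Dk @ w - d_presc) ** 2)

# Probabilistic approach: minimise the expected objective value.
f_expect = lambda w: sum(pk * quad_obj(w, Dk) for pk, Dk in zip(p, D))
# Robust approach: minimise the worst case over the scenarios
# (the max makes this non-smooth; adequate for a small sketch).
f_worst = lambda w: max(quad_obj(w, Dk) for Dk in D)

w0 = np.ones(n_beams)
bounds = [(0, None)] * n_beams            # pencil-beam weights >= 0
for f, name in [(f_expect, "expected"), (f_worst, "worst-case")]:
    res = minimize(f, w0, bounds=bounds, method="L-BFGS-B")
    print(name, "objective:", round(float(res.fun), 3))
```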
Zhou, Bo; Lin, Jian Zhong; Peng, Dan; Yang, Yuan Zhu; Guo, Ming; Tang, Dong Ying; Tan, Xiaofeng; Liu, Xuan Ming
2017-01-01
In many plants, architecture and grain yield are affected by both the environment and genetics. In rice, the tiller is a vital factor impacting plant architecture and is regulated by many genes. In this study, we cloned a novel DHHC-type zinc finger protein gene, Os02g0819100, and its alternative splice variant, OsDHHC1, from the cDNA of rice (Oryza sativa L.), which regulate plant architecture by altering tillering in rice. Tiller number increased by about 40% when this DHHC-type zinc finger protein gene was over-expressed in Zhong Hua 11 (ZH11) rice plants. Moreover, the grain yield of the transgenic rice increased by approximately 10% compared with wild-type ZH11. These findings provide an important genetic engineering approach for increasing rice yields. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Performance of Vegetation Indices for Wheat Yield Forecasting for Punjab, Pakistan
NASA Astrophysics Data System (ADS)
Dempewolf, J.; Becker-Reshef, I.; Adusei, B.; Barker, B.
2013-12-01
Forecasting wheat yield in major producer countries early in the growing season allows better planning for harvest deficits and surplus with implications for food security, world market transactions, sustaining adequate grain stocks, policy making and other matters. Remote sensing imagery is well suited for yield forecasting over large areas. The Normalized Difference Vegetation Index (NDVI) has been the most-used spectral index derived from remote sensing imagery for assessing crop condition of major crops and forecasting crop yield. Many authors have found that the highest correlation between NDVI and yield of wheat crops occurs at the height of the growing season, when NDVI values and photosynthetic activity of the wheat plants are at their relative maximum. At the same time, NDVI saturates in very dense and vigorous (healthy, green) canopies such as wheat fields during the seasonal peak and shows significantly reduced sensitivity to further increases in photosynthetic activity. In this study we compare the performance of different vegetation indices derived from space-borne red and near-infrared spectral reflectance measurements for wheat yield forecasting in the Punjab Province, Pakistan. Areas covered by wheat crop each year were determined using a time series of MODIS 8-day composites at 250 m resolution converted to temporal metrics and classified using a bagged decision tree approach, driven by classified multi-temporal Landsat scenes. Within the wheat areas we analyze and compare wheat yield forecasts derived from three different satellite-based vegetation indices at the peak of the growing season. We regressed in turn NDVI, the Wide Dynamic Range Vegetation Index (WDRVI) and the Vegetation Condition Index (VCI) from the four years preceding the wheat growing season 2011/12 against reported yield values and applied the regression equations to forecast wheat yield for the 2011/12 season for each of the 36 Punjab districts. Yield forecasts overall corresponded well with reported values. NDVI-based forecasts showed a high correlation of r² = 0.881 and an RMSE of 11%. The VCI performed similarly well, with r² = 0.886 and an RMSE of 11%. The WDRVI performed better than either of the other indices, with r² = 0.909 and an RMSE of 10%, probably due to the increased sensitivity of the index at high values. Wheat yields in Pakistan show on average a slow but steady annual increase but overall are comparatively stable because the majority of fields are irrigated. The next steps in this study will be to compare NDVI- with WDRVI-based yield forecasts in other environments dominated by rain-fed agriculture, such as Ukraine, Australia and the United States.
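A minimal sketch of the index-regression step follows: NDVI and WDRVI are computed from red/near-infrared reflectance, a line is fitted to a few past seasons, and the fit is applied to the new peak-season index. All reflectance and yield numbers are invented; the WDRVI weighting coefficient a = 0.2 is a common choice in the literature, not necessarily the study's.

```python
# Sketch of index-based yield forecasting: fit peak-season index against
# reported yields over past seasons, then forecast (made-up numbers).
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def wdrvi(nir, red, a=0.2):           # weighting coefficient per Gitelson
    return (a * nir - red) / (a * nir + red)

# Peak-season district means for four training years + reported yields (t/ha).
nir = np.array([0.42, 0.45, 0.40, 0.47])
red = np.array([0.08, 0.07, 0.09, 0.06])
yield_t = np.array([2.6, 2.9, 2.4, 3.0])

x = wdrvi(nir, red)
slope, intercept = np.polyfit(x, yield_t, 1)   # linear regression

# Forecast season: apply the fitted line to the new peak-season index.
x_new = wdrvi(0.46, 0.07)
print(f"forecast yield: {slope * x_new + intercept:.2f} t/ha")
```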
Cost Allocation of Multiagency Water Resource Projects: Game Theoretic Approaches and Case Study
NASA Astrophysics Data System (ADS)
Lejano, Raul P.; Davos, Climis A.
1995-05-01
Water resource projects are often jointly carried out by a number of communities and agencies. Participation in a joint project depends on how costs are allocated among the participants and how cost shares compare with the cost of independent projects. Cooperative N-person game theory offers approaches which yield cost allocations that satisfy rationality conditions favoring participation. A new solution concept, the normalized nucleolus, is discussed and applied to a water reuse project in southern California. Results obtained with the normalized nucleolus are compared with those derived with more traditional solution concepts, namely, the nucleolus and the Shapley value.
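For comparison with the solution concepts discussed, here is a small sketch of the Shapley value for a hypothetical three-agency cost game; the coalition costs are invented, and the normalized nucleolus itself is not reproduced here.

```python
# Shapley-value cost allocation for a toy three-agency project:
# each player's share is its average marginal cost over all join orders.
from itertools import permutations

players = ["A", "B", "C"]
cost = {                       # c(S): cost of each coalition acting alone
    frozenset(): 0.0,
    frozenset("A"): 10.0, frozenset("B"): 12.0, frozenset("C"): 8.0,
    frozenset("AB"): 18.0, frozenset("AC"): 15.0, frozenset("BC"): 16.0,
    frozenset("ABC"): 22.0,
}

shapley = dict.fromkeys(players, 0.0)
orders = list(permutations(players))
for order in orders:
    seen = set()
    for pl in order:
        marginal = cost[frozenset(seen | {pl})] - cost[frozenset(seen)]
        shapley[pl] += marginal / len(orders)
        seen.add(pl)

print(shapley)                 # allocations sum to the grand-coalition cost
assert abs(sum(shapley.values()) - cost[frozenset("ABC")]) < 1e-9
```

Each player pays less than its stand-alone cost here, which is the rationality condition that favors participation in the joint project.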
N* production from pp and p-barp collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu Jiajun; Cao Xu; Theoretical Physics Center for Science Facilities, CAS, Beijing 100049
2011-10-21
With an effective Lagrangian approach, we give a full analysis of the NN → NNππ and pp → pnπ⁺ reactions for proton beam energies from 1 to 1.5 GeV. The results are very consistent with the experimental data from CELSIUS, KEK, COSY, and so on. Based on these results, we consider the N̄N → N̄Nππ and p̄p → p̄nπ⁺ reactions for proton beam energies up to 4 GeV. Compared to pp collisions, these two reactions offer many benefits for studying N* resonances. For proton beam energies up to 15 GeV, we consider some new resonances with hidden charm, which are definitely beyond the three-constituent-quark model, in the p̄p → p̄pJ/ψ and p̄p → p̄pη_c reactions, which are very good places to find these new N*_{cc̄} states. The predicted results for p̄p collisions can be looked for at the forthcoming PANDA/FAIR experiments.
Morton, Michael J; Williams, David L; Hjorth, Heather B; Smith, Jennifer H
2010-04-01
This paper explores using the intensity of the stain on the end of the filter ("filter color") as a vehicle for estimating cigarette tar yield, both by instrument reading of the filter color and by visual comparison to a template. The correlation of machine-measured tar yield to filter color measured with a colorimeter was reasonably strong and was relatively unaffected by different puff volumes or different tobacco moistures. However, the correlation of filter color to machine-measured nicotine yield was affected by the moisture content of the cigarette. Filter color, as measured by a colorimeter, was generally comparable to filter extraction of either nicotine or solanesol in its correlation to machine-smoked tar yields. It was found that the color of the tar stain changes over time. Panelists could generally correctly order the filters from machine-smoked cigarettes by tar yield using the intensity of the tar stain. However, there was considerable variation in the panelist-to-panelist tar yield estimates. The wide person-to-person variation in tar yield estimates, and other factors discussed in the text could severely limit the usefulness and practicality of this approach for visually estimating the tar yield of machine-smoked cigarettes. Copyright 2009 Elsevier Inc. All rights reserved.
Integrating predictive information into an agro-economic model to guide agricultural management
NASA Astrophysics Data System (ADS)
Zhang, Y.; Block, P.
2016-12-01
Skillful season-ahead climate predictions linked with responsive agricultural planning and management have the potential to reduce losses, if adopted by farmers, particularly for rainfed-dominated agriculture such as in Ethiopia. Precipitation predictions during the growing season in major agricultural regions of Ethiopia are used to generate predicted climate yield factors, which reflect the influence of precipitation amounts on crop yields and serve as inputs into an agro-economic model. The adapted model, originally developed by the International Food Policy Research Institute, produces outputs of economic indices (GDP, poverty rates, etc.) at zonal and national levels. Forecast-based approaches, in which farmers' actions are in response to forecasted conditions, are compared with no-forecast approaches in which farmers follow business as usual practices, expecting "average" climate conditions. The effects of farmer adoption rates, including the potential for reduced uptake due to poor predictions, and increasing forecast lead-time on economic outputs are also explored. Preliminary results indicate superior gains under forecast-based approaches.
An approach to DNI transients characterization for system evaluation
NASA Astrophysics Data System (ADS)
Feldhoff, Jan Fabian; Hirsch, Tobias
2017-06-01
The direct normal irradiance (DNI) is of utmost importance for concentrated solar power (CSP) plants. For annual yield prediction, a steady-state heat balance is made for each hour of the year or for a smaller time period such as 15 min with the corresponding average DNI value. However, short term DNI variations by clouds are ignored by this approach. In consequence, there is no information on the transient behavior of the plant and the question remains how the plant is influenced by the DNI disturbance. The paper intends to start a discussion on DNI characterization and its application to CSP. An approach to categorize the DNI behavior from a transient system point of view is presented by using purpose-/system-specific filters. Resulting DNI disturbance classes are proposed to directly compare different sites and technologies. They can be useful for better yield analysis and better commercial project selection in the future. An example on a once-through direct steam generation plant is provided.
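A sketch of the filtering idea: smooth the DNI signal with a moving average representing the plant's response time, then classify the day by the residual short-term variability. The synthetic minute data, the 30-minute response time, and the class edges below are all illustrative assumptions, not the paper's definitions.

```python
# Sketch of a purpose-specific filter for DNI transient classification:
# a slow plant "sees" the smoothed signal; the residual variability
# characterizes the disturbance (synthetic minute data).
import numpy as np

rng = np.random.default_rng(4)
minutes = 24 * 60
clear = 900.0 * np.ones(minutes)                     # clear-sky DNI (W/m2)
dips = (rng.random(minutes) < 0.02) * rng.uniform(300, 800, minutes)
dni = np.clip(clear - np.convolve(dips, np.ones(15) / 15, mode="same"), 0, None)

tau = 30                                             # plant response time (min)
smooth = np.convolve(dni, np.ones(tau) / tau, mode="same")
residual_std = np.std(dni - smooth)                  # short-term variability

edges = [10.0, 50.0, 150.0]                          # W/m2, illustrative edges
label = int(np.searchsorted(edges, residual_std))
print(f"residual std {residual_std:.0f} W/m2 -> disturbance class {label}")
```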
The MICRO-BOSS scheduling system: Current status and future efforts
NASA Technical Reports Server (NTRS)
Sadeh, Norman M.
1993-01-01
In this paper, a micro-opportunistic approach to factory scheduling was described that closely monitors the evolution of bottlenecks during the construction of the schedule, and continuously redirects search towards the bottleneck that appears to be most critical. This approach differs from earlier opportunistic approaches, as it does not require scheduling large resource subproblems or large job subproblems before revising the current scheduling strategy. This micro-opportunistic approach was implemented in the context of the MICRO-BOSS factory scheduling system. A study comparing MICRO-BOSS against a macro-opportunistic scheduler suggests that the additional flexibility of the micro-opportunistic approach to scheduling generally yields important reductions in both tardiness and inventory.
Symmetry Breaking and the B3LYP Functional
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Hudgins, Douglas M.; Allamandola, Louis J.; Arnold, James O. (Technical Monitor)
1999-01-01
The infrared spectra of six molecules, each of which contains a five-membered ring, and their cations are determined using density functional theory (DFT); both the B3LYP and BP86 functionals are used. The computed results are compared with the experimental spectra. For the neutral molecules, both methods are in good agreement with experiment. Even the Hartree-Fock (HF) approach is qualitatively correct for the neutrals. For the cations, the HF approach fails, as found for other organic ring systems. The B3LYP and BP86 approaches are in good mutual agreement for five of the six cation spectra, and in good agreement with experiment for four of the five cations for which experimental spectra are available. It is only for the fluoranthene cation that the BP86 and B3LYP functionals yield different results: BP86 yields the expected C2v symmetry, while B3LYP breaks symmetry. The experimental spectra support the BP86 spectra over the B3LYP, but the quality of the experimental spectra does not allow a critical evaluation of the accuracy of the BP86 approach for this difficult system.
Comparing Paper and Tablet Modes of Retrospective Activity Space Data Collection.
Yabiku, Scott T; Glick, Jennifer E; Wentz, Elizabeth A; Ghimire, Dirgha; Zhao, Qunshan
2017-01-01
Individual actions are both constrained and facilitated by the social context in which individuals are embedded. But research to test specific hypotheses about the role of space on human behaviors and well-being is limited by the difficulty of collecting accurate and personally relevant social context data. We report on a project in Chitwan, Nepal, that directly addresses challenges to collect accurate activity space data. We test if a computer assisted interviewing (CAI) tablet-based approach to collecting activity space data was more accurate than a paper map-based approach; we also examine which subgroups of respondents provided more accurate data with the tablet mode compared to paper. Results show that the tablet approach yielded more accurate data when comparing respondent-indicated locations to the known locations as verified by on-the-ground staff. In addition, the accuracy of the data provided by older and less healthy respondents benefited more from the tablet mode.
ERIC Educational Resources Information Center
Clevenger, Theresa M.; Graff, Richard B.
2005-01-01
Tangible and pictorial paired-stimulus (PPS) preference assessments were compared for 6 individuals with developmental disabilities. During tangible and PPS assessments, two edible items or photographs were presented on each trial, respectively, and approach responses were recorded. Both assessments yielded similar preference hierarchies for 3…
Seeking an Online Social Media Radar
ERIC Educational Resources Information Center
ter Veen, James
2014-01-01
Purpose: The purpose of this paper is to explore how the application of Systems Engineering tools and techniques can be applied to rapidly process and analyze the vast amounts of data present in social media in order to yield practical knowledge for Command and Control (C2) systems. Design/methodology/approach: Based upon comparative analysis of…
Jacob, Samuel; Banerjee, Rintu
2016-08-01
A novel approach to overcome the acidification problem has been attempted in the present study by codigesting industrial potato waste (PW) with Pistia stratiotes (PS, an aquatic weed). The effectiveness of codigestion of the weed and PW was tested in an equal (1:1) proportion by weight at a substrate concentration of 5 g total solid (TS)/L (2.5 g PW + 2.5 g PS), which resulted in an enhancement of methane yield by 76.45% compared to monodigestion of PW, with a positive synergistic effect. Optimization of process parameters was conducted using central composite design (CCD) based response surface methodology (RSM) and an artificial neural network (ANN) coupled genetic algorithm (GA) model. Upon comparison of these two optimization techniques, the ANN-GA model obtained through feed-forward back-propagation methodology was found to be more efficient and yielded 447.4±21.43 L CH4/kg VSfed (0.279 g CH4/kg CODvs), which is 6% higher than the CCD-RSM based approach. Copyright © 2016 Elsevier Ltd. All rights reserved.
Thermochemical tests on resins: Char resistance of selected phenolic cured epoxides
NASA Technical Reports Server (NTRS)
Keck, F. L.
1982-01-01
Curing epoxy resins with novolac phenolic resins is a feasible approach for increasing the intact char of the resin system. Char yields above 40% at 700 °C were achieved with epoxy novolac (DEN 438)/novolac phenolic (BRWE 5833) resin systems with or without a catalyst such as ethyl triphenyl phosphonium iodide. These char yields are comparable to commercially used epoxy resin systems like MY-720/DDS/BF3. Stable prepregs are easily made from a solvent solution of the epoxy/phenolic system, and this provides a feasible process for fabricating them into commercial laminates.
Spectrum sensitivity, energy yield, and revenue prediction of PV and CPV modules
NASA Astrophysics Data System (ADS)
Kinsey, Geoffrey S.
2015-09-01
The impact of spectral irradiance variation on module performance has been determined for III-V multijunctions compared against the four most common flat-plate module types (cadmium telluride, multicrystalline silicon, copper indium gallium selenide, and monocrystalline silicon). Hour-by-hour representative spectra were generated using atmospheric variables for Albuquerque, New Mexico, USA. Convolution with published values for external quantum efficiency gave the predicted current output. When combined with specifications of commercial PV modules, energy yield and revenue were predicted. This approach provides a means for optimizing PV module design based on various site-specific temporal variables.
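The convolution step can be sketched as follows: spectral irradiance is converted to photon flux and integrated against the external quantum efficiency to predict short-circuit current density. The toy spectrum and flat EQE band below stand in for the hourly simulated spectra and published EQE curves.

```python
# Sketch: predicted short-circuit current density from spectral irradiance
# and external quantum efficiency (toy spectrum and EQE curve).
import numpy as np

q = 1.602e-19                      # elementary charge (C)
h, c = 6.626e-34, 2.998e8          # Planck constant, speed of light

wl = np.linspace(300e-9, 1200e-9, 500)                    # wavelength (m)
irr = 1.0e9 * np.exp(-((wl - 650e-9) / 250e-9) ** 2)      # W m^-2 m^-1 (toy)
eqe = np.where((wl > 350e-9) & (wl < 1100e-9), 0.9, 0.0)  # flat 90% band

photon_flux = irr * wl / (h * c)            # photons m^-2 s^-1 m^-1
dwl = wl[1] - wl[0]
jsc = q * np.sum(eqe * photon_flux) * dwl   # A m^-2 (rectangle integration)
print(f"predicted Jsc: {jsc / 10:.1f} mA/cm^2")
```

Repeating the integral for each hourly spectrum and each technology's EQE is what yields the spectrum-sensitive energy production compared in the abstract.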
GRACE time-variable gravity field recovery using an improved energy balance approach
NASA Astrophysics Data System (ADS)
Shang, Kun; Guo, Junyi; Shum, C. K.; Dai, Chunli; Luo, Jia
2015-12-01
A new approach based on the energy conservation principle for satellite gravimetry missions has been developed; it yields more accurate estimation of in situ geopotential difference observables using K-band ranging (KBR) measurements from the Gravity Recovery and Climate Experiment (GRACE) twin-satellite mission. This new approach preserves more gravity information sensed by KBR range-rate measurements and reduces orbit error as compared to previous energy balance methods. Results from analysis of 11 yr of GRACE data indicate that the resulting geopotential difference estimates agree well with predicted values from official Level 2 solutions, with a much higher correlation at 0.9, as compared to the 0.5-0.8 reported by previously published energy balance studies. We demonstrate that our approach produces a time-variable gravity solution comparable with the Level 2 solutions. The regional GRACE temporal gravity solutions over Greenland reveal that a substantially higher temporal resolution is achievable at 10-d sampling as compared to the official monthly solutions, but without compromising spatial resolution or requiring regularization or post-processing.
NASA Astrophysics Data System (ADS)
Zhang, Ying; Feng, Yuanming; Wang, Wei; Yang, Chengwen; Wang, Ping
2017-03-01
A novel and versatile “bottom-up” approach is developed to estimate the radiobiological effect of clinical radiotherapy. The model consists of multi-scale Monte Carlo simulations from the organ to the cell level. At the cellular level, accumulated damages are computed using a spectrum-based accumulation algorithm and a predefined cellular damage database. The damage repair mechanism is modeled by an expanded reaction-rate two-lesion kinetic model, which was calibrated by replicating a radiobiological experiment. Multi-scale modeling is then performed on a lung cancer patient under conventional fractionated irradiation. The cell killing effects of two representative voxels (the isocenter and a peripheral voxel of the tumor) are computed and compared. At the microscopic level, the nucleus dose and damage yields vary among the nuclei within the voxels. A slightly larger percentage of cDSB yield is observed for the peripheral voxel (55.0%) compared to the isocenter one (52.5%). For the isocenter voxel, the survival fraction increases monotonically as the oxygen level is reduced. Under an extreme anoxic condition (0.001%), the survival fraction is calculated to be 80% and the hypoxia reduction factor reaches a maximum value of 2.24. In conclusion, with biologically related variations, the proposed multi-scale approach is more versatile than existing approaches for evaluating personalized radiobiological effects in radiotherapy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gastegger, Michael; Kauffmann, Clemens; Marquetand, Philipp, E-mail: philipp.marquetand@univie.ac.at
Many approaches developed to express the potential energy of large systems exploit the locality of the atomic interactions. A prominent example is the fragmentation methods, in which quantum chemical calculations are carried out for overlapping small fragments of a given molecule and then combined in a second step to yield the system’s total energy. Here we compare the accuracy of the systematic molecular fragmentation approach with the performance of high-dimensional neural network (HDNN) potentials introduced by Behler and Parrinello. HDNN potentials are similar in spirit to the fragmentation approach in that the total energy is constructed as a sum of environment-dependent atomic energies, which are derived indirectly from electronic structure calculations. As a benchmark set, we use all-trans alkanes containing up to eleven carbon atoms at the coupled cluster level of theory. These molecules have been chosen because they allow reliable reference energies to be extrapolated for very long chains, enabling an assessment of the energies obtained by both methods for alkanes containing up to 10 000 carbon atoms. We find that both methods predict high-quality energies, with the HDNN potentials yielding smaller errors with respect to the coupled cluster reference.
NASA Astrophysics Data System (ADS)
Chakraborty, Souvik; Chowdhury, Rajib
2017-12-01
Hybrid polynomial correlated function expansion (H-PCFE) is a novel metamodel formulated by coupling polynomial correlated function expansion (PCFE) and Kriging. Unlike commonly available metamodels, H-PCFE performs a bi-level approximation and hence yields more accurate results. However, to date it has only been applicable to medium-scale problems. In order to address this apparent void, this paper presents an improved H-PCFE, referred to as locally refined hp-adaptive H-PCFE. The proposed framework computes the optimal polynomial order and the important component functions of PCFE, an integral part of H-PCFE, by using global variance-based sensitivity analysis. The optimal number of training points is selected by using distribution-adaptive sequential experimental design. Additionally, the formulated model is locally refined by utilizing the prediction error, which is inherently obtained in H-PCFE. The applicability of the proposed approach is illustrated with two academic and two industrial problems. To demonstrate its superior performance, the results obtained are compared with those obtained using hp-adaptive PCFE. It is observed that the proposed approach yields highly accurate results. Furthermore, as compared to hp-adaptive PCFE, significantly fewer actual function evaluations are required to obtain results of similar accuracy.
Wong, Yick Ching; Teh, Huey Fang; Mebus, Katharina; Ooi, Tony Eng Keong; Kwong, Qi Bin; Koo, Ka Loo; Ong, Chuang Kee; Mayes, Sean; Chew, Fook Tim; Appleton, David R; Kulaveerasingam, Harikrishna
2017-06-21
The oil yield trait of oil palm is expected to involve multiple genes, environmental influences and interactions. Many of the underlying mechanisms that contribute to oil yield are still poorly understood. In this study, we used a microarray approach to study the gene expression profiles of mesocarp tissue at different developmental stages, comparing genetically related high- and low-oil-yielding palms to identify genes that contributed to the higher-yielding palm and might contribute to the wider genetic improvement of oil palm breeding populations. A total of 3412 (2001 annotated) gene candidates were found to be significantly differentially expressed between high- and low-yielding palms at at least one of the stages of mesocarp development evaluated. Gene Ontology (GO) enrichment analysis identified 28 significantly enriched GO terms, including regulation of transcription, fatty acid biosynthesis and metabolic processes. These differentially expressed genes comprise several transcription factors, such as bHLH, Dof zinc finger proteins and MADS box proteins. Several genes involved in the glycolysis, TCA, and fatty acid biosynthesis pathways were also found to be up-regulated in the high-yielding oil palm, among them pyruvate dehydrogenase E1 component subunit beta (PDH), ATP-citrate lyase, β-ketoacyl-ACP synthase I (KAS I), β-ketoacyl-ACP synthase III (KAS III) and ketoacyl-ACP reductase (KAR). Sucrose metabolism-related genes such as Invertase, Sucrose Synthase 2 and Sucrose Phosphatase 2 were found to be down-regulated in high-yielding oil palms compared to the lower-yielding palms. Our findings indicate that a higher carbon flux (channeled through down-regulation of the Sucrose Synthase 2 pathway) was being utilized by up-regulated genes involved in glycolysis, TCA and fatty acid biosynthesis, leading to enhanced oil production in the high-yielding oil palm. These findings are an important stepping stone toward understanding the processes that lead to the production of high-yielding oil palms and have implications for breeding to maximize oil production.
Mtibaa, Slim; Hotta, Norifumi; Irie, Mitsuteru
2018-03-01
Soil erosion can be reduced through the strategic selection and placement of best management practices (BMPs) in critical source areas (CSAs). In the present study, the Soil Water Assessment Tool (SWAT) model was used to identify CSAs and investigate the effectiveness of different BMPs in reducing sediment yield in the Joumine watershed, an agricultural river catchment located in northern Tunisia. A cost-benefit analysis (CBA) was used to evaluate the cost-effectiveness of different BMP scenarios. The objective of the present study was to determine the most cost-effective management scenario for controlling sediment yield. The model performance for the simulation of streamflow and sediment yield at the outlet of the Joumine watershed was good and satisfactory, respectively. The model indicated that most of the sediment originated from the cultivated upland area. About 34% of the catchment area consisted of CSAs that were affected by high to very high soil erosion risk (sediment yield >10 t/ha/year). Contour ridges were found to be the most effective individual BMP in terms of sediment yield reduction. At the watershed level, implementing contour ridges in the CSAs reduced sediment yield by 59%. Combinations of BMP scenarios were more cost-effective than the contour ridges alone. Combining buffer strips (5-m width) with other BMPs depending on land slope (>20% slope: conversion to olive orchards; 10-20% slope: contour ridges; 5-10% slope: grass strip cropping) was the most effective approach in terms of sediment yield reduction and economic benefits. This approach reduced sediment yield by 61.84% with a benefit/cost ratio of 1.61. Compared with the cost of dredging, BMPs were more cost-effective for reducing sediment loads to the Joumine reservoir, located downstream of the catchment. Our findings may contribute to ensuring the sustainability of future conservation programs in Tunisian regions. Copyright © 2017 Elsevier B.V. All rights reserved.
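The cost-effectiveness bookkeeping reduces to a benefit/cost ratio per scenario, as in this sketch; only the 59% and 61.84% reductions echo the abstract, while all cost and benefit figures are invented placeholders.

```python
# Sketch of the scenario comparison behind the CBA (hypothetical costs and
# benefits; reductions per the abstract, other numbers invented).
scenarios = {
    # name: (sediment reduction fraction, annual cost $, annual benefit $)
    "contour ridges only":   (0.59, 120000.0, 150000.0),
    "buffer strips + mixed": (0.6184, 110000.0, 177000.0),
    "dredging (reference)":  (0.60, 300000.0, 180000.0),
}

for name, (red, cost, benefit) in scenarios.items():
    bc = benefit / cost                      # benefit/cost ratio
    print(f"{name:<24} reduction {red:6.1%}  benefit/cost {bc:.2f}")
```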
NASA Astrophysics Data System (ADS)
Peckerar, Martin C.; Marrian, Christie R.
1995-05-01
Standard matrix inversion methods of e-beam proximity correction are compared with a variety of pseudoinverse approaches based on gradient descent. It is shown that the gradient descent methods can be modified using 'regularizers' (terms added to the cost function minimized during gradient descent). This modification solves the 'negative dose' problem in a mathematically sound way. Different techniques are contrasted using a weighted error measure approach. It is shown that the regularization approach leads to the highest quality images. In some cases, ignoring negative doses yields results which are worse than employing an uncorrected dose file.
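A minimal sketch of the regularized gradient-descent idea: a quadratic penalty on negative doses is added to the data-fit cost, so the descent itself discourages negative dose rather than having it clipped afterwards. The 1-D Gaussian proximity kernel and the weights below are illustrative assumptions.

```python
# Sketch: proximity correction by gradient descent with a regularizer that
# penalises negative doses (toy 1-D proximity blur).
import numpy as np

n = 64
target = np.zeros(n); target[24:40] = 1.0       # desired exposure pattern
x = np.arange(n)
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 3.0) ** 2)  # blur kernel
K /= K.sum(axis=1, keepdims=True)

dose = target.copy()
lr, beta = 0.5, 10.0                            # step size, regularizer weight
for _ in range(500):
    resid = K @ dose - target                   # data-fit residual
    grad = K.T @ resid                          # gradient of 0.5*||K d - t||^2
    grad += beta * np.minimum(dose, 0.0)        # grad of 0.5*beta*||min(d,0)||^2
    dose -= lr * grad

print("min dose:", dose.min().round(4))         # penalty keeps this near >= 0
print("residual norm:", np.linalg.norm(K @ dose - target).round(4))
```

Setting beta to zero reproduces the unconstrained pseudoinverse behavior, including negative doses; increasing it trades a slightly larger residual for physically realizable doses.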
Time series regression-based pairs trading in the Korean equities market
NASA Astrophysics Data System (ADS)
Kim, Saejoon; Heo, Jun
2017-07-01
Pairs trading is an instance of statistical arbitrage that relies on heavy quantitative data analysis to profit by capitalising low-risk trading opportunities provided by anomalies of related assets. A key element in pairs trading is the rule by which open and close trading triggers are defined. This paper investigates the use of time series regression to define the rule which has previously been identified with fixed threshold-based approaches. Empirical results indicate that our approach may yield significantly increased excess returns compared to ones obtained by previous approaches on large capitalisation stocks in the Korean equities market.
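A sketch of the general mechanics on synthetic prices: a regression estimates the hedge ratio, the spread is standardized, and z-score thresholds define open and close triggers. The in-sample regression and the ±2 / 0.5 thresholds are generic illustrations, not the paper's time-series-regression rule.

```python
# Sketch of regression-based trigger generation for one synthetic pair.
import numpy as np

rng = np.random.default_rng(7)
n = 500
x = np.cumsum(rng.normal(0, 1, n)) + 100       # price series of asset X
y = 1.5 * x + 10 + rng.normal(0, 1, n)          # Y co-moves with X

beta, alpha = np.polyfit(x, y, 1)               # hedge ratio via regression
spread = y - (beta * x + alpha)
z = (spread - spread.mean()) / spread.std()     # standardized spread

open_short = z > 2.0     # spread rich: short Y, long X
open_long = z < -2.0     # spread cheap: long Y, short X
close_pos = np.abs(z) < 0.5
print("short signals:", open_short.sum(), "| long signals:", open_long.sum())
```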
2013-01-01
Background: Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variances for all involved treatment comparisons are equal (i.e., the ‘common variance’ assumption). This approach ‘borrows strength’ for heterogeneity estimation across treatment comparisons and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. Methods: In this paper we describe four novel approaches to modeling heterogeneity variance: two novel model structures, and two approaches for the use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. Results: In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. Conclusions: MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice. PMID:23311298
Augmenting Qualitative Text Analysis with Natural Language Processing: Methodological Study.
Guetterman, Timothy C; Chang, Tammy; DeJonckheere, Melissa; Basu, Tanmay; Scruggs, Elizabeth; Vydiswaran, V G Vinod
2018-06-29
Qualitative research methods are increasingly being used across disciplines because of their ability to help investigators understand the perspectives of participants in their own words. However, qualitative analysis is a laborious and resource-intensive process. To achieve depth, researchers are limited to smaller sample sizes when analyzing text data. One potential method to address this concern is natural language processing (NLP). Qualitative text analysis involves researchers reading data, assigning code labels, and iteratively developing findings; NLP has the potential to automate part of this process. Unfortunately, little methodological research has been done to compare automatic coding using NLP techniques and qualitative coding, which is critical to establish the viability of NLP as a useful, rigorous analysis procedure. The purpose of this study was to compare the utility of a traditional qualitative text analysis, an NLP analysis, and an augmented approach that combines qualitative and NLP methods. We conducted a 2-arm cross-over experiment to compare qualitative and NLP approaches to analyze data generated through 2 text (short message service) message survey questions, one about prescription drugs and the other about police interactions, sent to youth aged 14-24 years. We randomly assigned a question to each of the 2 experienced qualitative analysis teams for independent coding and analysis before receiving NLP results. A third team separately conducted NLP analysis of the same 2 questions. We examined the results of our analyses to compare (1) the similarity of findings derived, (2) the quality of inferences generated, and (3) the time spent in analysis. The qualitative-only analysis for the drug question (n=58) yielded 4 major findings, whereas the NLP analysis yielded 3 findings that missed contextual elements. The qualitative and NLP-augmented analysis was the most comprehensive. For the police question (n=68), the qualitative-only analysis yielded 4 primary findings and the NLP-only analysis yielded 4 slightly different findings. Again, the augmented qualitative and NLP analysis was the most comprehensive and produced the highest quality inferences, increasing our depth of understanding (ie, details and frequencies). In terms of time, the NLP-only approach was quicker than the qualitative-only approach for the drug (120 vs 270 minutes) and police (40 vs 270 minutes) questions. An approach beginning with qualitative analysis followed by qualitative- or NLP-augmented analysis took longer than an approach beginning with NLP for both the drug (450 vs 240 minutes) and police (390 vs 220 minutes) questions. NLP provides both a foundation to code qualitatively more quickly and a method to validate qualitative findings. NLP methods were able to identify major themes found with traditional qualitative analysis but were not useful in identifying nuances. Traditional qualitative text analysis added important details and context. ©Timothy C Guetterman, Tammy Chang, Melissa DeJonckheere, Tanmay Basu, Elizabeth Scruggs, VG Vinod Vydiswaran. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 29.06.2018.
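As an indication of what the NLP arm of such a comparison can involve, the sketch below extracts candidate topics from toy short-text responses with bag-of-words LDA; the corpus is invented, and the study's actual NLP pipeline is not specified here.

```python
# Sketch of topic extraction that can seed qualitative code labels
# (toy corpus of short free-text responses).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "took pills from a friend for pain",
    "prescription from my doctor for adhd",
    "police stopped me while driving at night",
    "officer was polite during the traffic stop",
    "shared medication because insurance was expensive",
    "felt nervous when the police searched the car",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}: {', '.join(top)}")   # candidate code labels to review
```

Consistent with the study's conclusion, such output surfaces major themes quickly but still needs human coders to capture context and nuance.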
Primary and Secondary Yield Losses Caused by Pests and Diseases: Assessment and Modeling in Coffee.
Cerda, Rolando; Avelino, Jacques; Gary, Christian; Tixier, Philippe; Lechevallier, Esther; Allinne, Clémentine
2017-01-01
The assessment of crop yield losses is needed for the improvement of production systems that contribute to the incomes of rural families and food security worldwide. However, efforts to quantify yield losses and identify their causes are still limited, especially for perennial crops. Our objectives were to quantify primary yield losses (incurred in the current year of production) and secondary yield losses (resulting from negative impacts of the previous year) of coffee due to pests and diseases, and to identify the most important predictors of coffee yields and yield losses. We established an experimental coffee parcel with full-sun exposure that consisted of six treatments, which were defined as different sequences of pesticide applications. The trial lasted three years (2013-2015) and yield components, dead productive branches, and foliar pests and diseases were assessed as predictors of yield. First, we calculated yield losses by comparing actual yields of specific treatments with the estimated attainable yield obtained in plots which always had chemical protection. Second, we used structural equation modeling to identify the most important predictors. Results showed that pests and diseases led to high primary yield losses (26%) and even higher secondary yield losses (38%). We identified the fruiting nodes and the dead productive branches as the most important and useful predictors of yields and yield losses. These predictors could be added in existing mechanistic models of coffee, or can be used to develop new linear mixed models to estimate yield losses. Estimated yield losses can then be related to production factors to identify corrective actions that farmers can implement to reduce losses. The experimental and modeling approaches of this study could also be applied in other perennial crops to assess yield losses.
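The loss bookkeeping can be sketched with placeholder numbers: losses are gaps relative to the attainable yield of fully protected plots, and the carry-over (secondary) share is what remains after subtracting the current-year (primary) loss. This simple decomposition is an illustration; the paper estimated losses with structural equation modeling.

```python
# Sketch of primary vs secondary yield-loss accounting (placeholder values).
attainable = 1500.0      # kg/ha, plots with full chemical protection
actual_y1 = 1110.0       # kg/ha, unprotected plots, year 1
actual_y2 = 930.0        # kg/ha, same plots, year 2

primary_loss = (attainable - actual_y1) / attainable      # current-year gap
total_loss_y2 = (attainable - actual_y2) / attainable
secondary_loss = total_loss_y2 - primary_loss             # carry-over part

print(f"primary loss: {primary_loss:.0%}")
print(f"secondary loss (carry-over): {secondary_loss:.0%}")
```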
Novel agents and approaches for stem cell mobilization in normal donors and patients.
Bakanay, Ş M; Demirer, T
2012-09-01
In spite of the safety and efficiency of the classical mobilization protocols, recombinant human G-CSF±chemotherapy, there is still a considerable rate of mobilization failures (10-30%), which warrants novel agents and approaches in both the autologous and the allogeneic transplant setting. Attempts to improve CD34+ yields by using several cytokines and growth factors as adjuncts to G-CSF could not change the standard approaches during the last decade, either because of inefficiency or because of the adverse events encountered with these agents. As a long-acting G-CSF analog, pegfilgrastim has the advantages of an earlier start of apheresis, a reduction in the number of apheresis procedures, and a reduced number of injections as compared with unconjugated G-CSF. However, dosing and cost-effectiveness, especially in cytokine-only mobilizations, require further investigation. As interactions between hematopoietic stem cells and the BM microenvironment are better understood, new molecules targeting these interactions are emerging. Plerixafor, which started its journey as an anti-HIV drug, recently ended up being a popular stem cell mobilizer with the ability of rapid mobilization, and gained approval as an adjunct to G-CSF for poor mobilizers. At present, the challenge is to search for the best approach by using the available drugs with appropriate timing to provide a sufficient CD34+ yield after an initial mobilization attempt, in a cost-effective manner, thereby avoiding further mobilization attempts and exposure to chemotherapy. Approaches aiming not only to increase stem cell yield but also to improve the quality of graft content and the associated transplantation outcomes are promising areas of research.
Mutation Breeding of β-carotene Producing Strain B. trispora by Low Energy Ion Implantation
NASA Astrophysics Data System (ADS)
Zhang, Ning; Yu, Long
2009-02-01
Ion beam bioengineering technology, as a new mutation approach, has been widely used in the biological breeding field. In this paper, the application of low-energy nitrogen ion implantation to the β-carotene-producing strain Blakeslea trispora(-) was investigated. The effects of different fermentation conditions on β-carotene production by a high-yield strain were examined. Two high-yielding β-carotene strains, B. trispora(-) BH3-701 and BH3-728, were screened out, and their average production of β-carotene was raised by 178.7% and 164.6%, respectively, after five passages in shaking flasks. Compared with the original strain, the highest-yield strain, BH3-701, was potent in accumulating β-carotene, especially in the later stage, and greatly increased production efficiency.
Touliatos, Dionysios; Dodd, Ian C; McAinsh, Martin
2016-08-01
Vertical farming systems (VFS) have been proposed as an engineering solution to increase productivity per unit area of cultivated land by extending crop production into the vertical dimension. To test whether this approach presents a viable alternative to horizontal crop production systems, a VFS (where plants were grown in upright cylindrical columns) was compared against a conventional horizontal hydroponic system (HHS) using lettuce (Lactuca sativa L. cv. "Little Gem") as a model crop. Both systems had similar root zone volume and planting density. Half-strength Hoagland's solution was applied to plants grown in perlite in an indoor controlled environment room, with metal halide lamps providing artificial lighting. Light distribution (photosynthetic photon flux density, PPFD) and yield (shoot fresh weight) within each system were assessed. Although PPFD and shoot fresh weight decreased significantly in the VFS from top to base, the VFS produced more crop per unit of growing floor area when compared with the HHS. Our results clearly demonstrate that VFS presents an attractive alternative to horizontal hydroponic growth systems and suggest that further increases in yield could be achieved by incorporating artificial lighting in the VFS.
Carbonetto, Peter; Stephens, Matthew
2013-01-01
Pathway analyses of genome-wide association studies aggregate information over sets of related genes, such as genes in common pathways, to identify gene sets that are enriched for variants associated with disease. We develop a model-based approach to pathway analysis, and apply this approach to data from the Wellcome Trust Case Control Consortium (WTCCC) studies. Our method offers several benefits over existing approaches. First, our method not only interrogates pathways for enrichment of disease associations, but also estimates the level of enrichment, which yields a coherent way to promote variants in enriched pathways, enhancing discovery of genes underlying disease. Second, our approach allows for multiple enriched pathways, a feature that leads to novel findings in two diseases where the major histocompatibility complex (MHC) is a major determinant of disease susceptibility. Third, by modeling disease as the combined effect of multiple markers, our method automatically accounts for linkage disequilibrium among variants. Interrogation of pathways from eight pathway databases yields strong support for enriched pathways, indicating links between Crohn's disease (CD) and cytokine-driven networks that modulate immune responses; between rheumatoid arthritis (RA) and “Measles” pathway genes involved in immune responses triggered by measles infection; and between type 1 diabetes (T1D) and IL2-mediated signaling genes. Prioritizing variants in these enriched pathways yields many additional putative disease associations compared to analyses without enrichment. For CD and RA, 7 of 8 additional non-MHC associations are corroborated by other studies, providing validation for our approach. For T1D, prioritization of IL-2 signaling genes yields strong evidence for 7 additional non-MHC candidate disease loci, as well as suggestive evidence for several more. Of the 7 strongest associations, 4 are validated by other studies, and 3 (near IL-2 signaling genes RAF1, MAPK14, and FYN) constitute novel putative T1D loci for further study. PMID:24098138
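For contrast with the model-based enrichment estimation described, the standard hypergeometric over-representation test used by many pathway tools is a few lines; the gene counts below are hypothetical.

```python
# Baseline pathway-enrichment test: hypergeometric over-representation
# (the model-based method above additionally estimates enrichment levels).
from scipy.stats import hypergeom

N = 20000    # genes tested genome-wide
K = 150      # genes in the pathway
n = 400      # disease-associated genes
k = 12       # associated genes that fall in the pathway

# P(X >= k) under sampling without replacement.
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value: {p_value:.2e}")
```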
NASA Astrophysics Data System (ADS)
Fakhri, G. El; Kijewski, M. F.; Moore, S. C.
2001-06-01
Estimates of SPECT activity within certain deep brain structures could be useful for clinical tasks such as early prediction of Alzheimer's disease with Tc-99m or Parkinson's disease with I-123; however, such estimates are biased by poor spatial resolution and inaccurate scatter and attenuation corrections. We compared an analytical approach (AA) to more accurate quantitation with a slower iterative approach (IA). Monte Carlo simulated projections of 12 normal and 12 pathologic Tc-99m perfusion studies, as well as 12 normal and 12 pathologic I-123 neurotransmission studies, were generated using a digital brain phantom and corrected for scatter by a multispectral fitting procedure. The AA included attenuation correction by a modified Metz-Fan algorithm and activity estimation by a technique that incorporated Metz filtering to compensate for variable collimator response (VCR); the IA modeled attenuation and VCR in the projector/backprojector of an ordered subsets-expectation maximization (OSEM) algorithm. Bias and standard deviation over the 12 normal and 12 pathologic patients were calculated with respect to the reference values in the corpus callosum, caudate nucleus, and putamen. The IA and AA yielded similar quantitation results in both Tc-99m and I-123 studies in all brain structures considered, in both normal and pathologic patients. The bias with respect to the reference activity distributions was less than 7% for Tc-99m studies, but greater than 30% for I-123 studies, due to the partial volume effect in the striata. Our results were validated using I-123 physical acquisitions of an anthropomorphic brain phantom. The AA yielded quantitation accuracy comparable to that obtained with the IA, while requiring much less processing time. However, in most conditions, the IA yielded lower noise for the same bias than did the AA.
Interannual variability of crop water footprint
NASA Astrophysics Data System (ADS)
Tuninetti, M.; Tamea, S.; Laio, F.; Ridolfi, L.
2016-12-01
The crop water footprint, CWF, is a useful tool for investigating the water-food nexus, since it measures the water requirement for crop production. Heterogeneous spatial patterns of climatic conditions and agricultural practices have inspired a flourishing literature on the geographic assessment of CWF, mostly referred to a fixed (time-averaged) period. However, given that both climatic conditions and crop yield may vary substantially over time, the CWF temporal dynamics also need to be addressed. As other studies have done, we base the CWF variability on yield, while keeping the crop evapotranspiration constant over time. As a new contribution, we prove the feasibility of this approach by comparing these CWF estimates with the results obtained with a full model considering variations of crop evapotranspiration: overall, the estimates compare well, showing high coefficients of determination of 0.98 for wheat, 0.97 for rice, 0.97 for maize, and 0.91 for soybean. From this comparison, we also derive the precision of the method, which is around ±10% and thus better than the ±30% precision of the model used to evaluate the crop evapotranspiration. Over the period between 1961 and 2013, the CWF of the most cultivated grains has sharply decreased on a global basis (i.e., -68% for wheat, -62% for rice, -66% for maize, and -52% for soybean), mainly driven by enhanced yield values. The higher water use efficiency in crop production implies a reduced virtual displacement of embedded water per ton of traded crop and, as a result, the temporal variability of virtual water trade differs when considering constant or time-varying CWF. The proposed yield-based approach to estimating the CWF variability implies low computational costs and requires limited input data; thus, it represents a promising tool for time-dependent water footprint assessments.
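The yield-based estimate itself is a one-line computation once seasonal evapotranspiration is fixed, as in this sketch with invented values (1 mm of water over 1 ha is 10 m³, hence the factor of 10):

```python
# Sketch of the yield-based CWF: evapotranspiration held fixed, yield
# varying by year (illustrative values for one wheat-growing region).
import numpy as np

et_mm = 450.0                                       # seasonal crop ET (mm)
years = np.arange(1961, 1966)
yield_t_ha = np.array([1.1, 1.2, 1.15, 1.3, 1.4])   # t/ha, rising over time

# CWF (m^3 per tonne) = 10 * ET[mm] / yield[t/ha]
cwf = 10.0 * et_mm / yield_t_ha
for y, w in zip(years, cwf):
    print(y, f"{w:.0f} m3/t")
```

Rising yields drive the footprint down, which is the mechanism behind the long-term declines reported above.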
Jin, Jing; Allison, Brendan Z; Kaufmann, Tobias; Kübler, Andrea; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej
2012-01-01
One of the most common types of brain-computer interfaces (BCIs) is called a P300 BCI, since it relies on the P300 and other event-related potentials (ERPs). In the canonical P300 BCI approach, items on a monitor flash briefly to elicit the necessary ERPs. Very recent work has shown that this approach may yield lower performance than alternate paradigms in which the items do not flash but instead change in other ways, such as moving, changing colour or changing to characters overlaid with faces. The present study sought to extend this research direction by parametrically comparing different ways to change items in a P300 BCI. Healthy subjects used a P300 BCI across six different conditions. Three conditions were similar to our prior work, providing the first direct comparison of characters flashing, moving, and changing to faces. Three new conditions also explored facial motion and emotional expression. The six conditions were compared across objective measures such as classification accuracy and bit rate as well as subjective measures such as perceived difficulty. In line with recent studies, our results indicated that the character flash condition resulted in the lowest accuracy and bit rate. All four face conditions (mean accuracy >91%) yielded significantly better performance than the flash condition (mean accuracy = 75%). Objective results reaffirmed that the face paradigm is superior to the canonical flash approach that has dominated P300 BCIs for over 20 years. The subjective reports indicated that the conditions that yielded better performance were not considered especially burdensome. Therefore, although further work is needed to identify which face paradigm is best, it is clear that the canonical flash approach should be replaced with a face paradigm when aiming to increase the bit rate. However, the face paradigm has to be further explored in practical applications, particularly with locked-in patients.
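Independent of which stimulus paradigm is used, the classification backbone behind the reported accuracies is typically a linear classifier over ERP epoch features; the sketch below runs LDA on synthetic epochs in which "target" trials carry a P300-like offset. The feature sizes and effect size are arbitrary assumptions.

```python
# Sketch of P300 target/non-target classification with LDA
# (synthetic epochs; real BCIs use averaged multichannel ERP features).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_epochs, n_feat = 400, 48                 # epochs, (channels x time bins)
y = rng.integers(0, 2, n_epochs)           # 1 = attended (target) stimulus
X = rng.normal(0, 1, (n_epochs, n_feat))
X[y == 1, :8] += 0.8                        # targets carry a P300-like bump

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```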
Cousineau, Michael R; Stevens, Gregory D; Farias, Albert
2011-02-01
OBJECTIVE AND STUDY SETTING: To evaluate the effectiveness of different approaches to outreach on public health insurance enrollment in 25 California counties with a Children's Health Initiative. DATA SOURCES: Administrative enrollment databases. STUDY DESIGN: The use of eight enrollment strategies was identified in each quarter from 2001 to 2007 for each of 25 counties (county quarter). Strategies were categorized as either technology or nontechnology. New enrollments were obtained for Medi-Cal, Healthy Families, and Healthy Kids. Bivariate and multivariate analyses assessed the link between each strategy and new enrollment rates of children. METHODS: Surveys of key informants determined whether a specific outreach strategy was used in each quarter. These were linked to new enrollments in each county quarter. FINDINGS: Between 2001 and 2007, enrollment grew in all three children's health programs. We controlled for the effects of counties, seasons, and county-specific child poverty rates. There was an increase in enrollment rates of 11 percent in periods when technology-based systems were in use compared with when these approaches were inactive. Non-technology-based approaches, including school-linked approaches, yielded a 12 percent increase in new enrollment rates. Deploying seven to eight strategies yielded 54 percent more new enrollments per 10,000 children compared with periods with none of the specific strategies. CONCLUSIONS: National health care reform provides new opportunities to expand coverage to millions of Americans. An investment in technology-based enrollment systems will maximize new enrollments, particularly into Medicaid; nontechnological approaches may help identify harder-to-reach populations. Moreover, incorporating several strategies, whether phased in or implemented simultaneously, will enhance enrollments. © Health Research and Educational Trust.
Characterizing bias correction uncertainty in wheat yield predictions
NASA Astrophysics Data System (ADS)
Ortiz, Andrea Monica; Jones, Julie; Freckleton, Robert; Scaife, Adam
2017-04-01
Farming systems are under increased pressure due to current and future climate change, variability and extremes. Research on the impacts of climate change on crop production typically relies on the output of complex Global and Regional Climate Models, which are used as input to crop impact models. Yield predictions from these top-down approaches can carry high uncertainty for several reasons, including diverse model construction and parameterization, future emissions scenarios, and inherent or response uncertainty. These uncertainties propagate down each step of the 'cascade of uncertainty' that flows from climate input to impact predictions, leading to yield predictions that may be too uncertain for their intended use in practical adaptation decisions. In addition to uncertainty from impact models, uncertainty can also stem from the intermediate steps used in impact studies to adjust climate model simulations to be more realistic relative to observations, or to correct the spatial or temporal resolution of climate simulations, which are often not directly applicable as input to impact models. These important steps of bias correction or calibration also add uncertainty to final yield predictions, given the various approaches that exist to correct climate model simulations. In order to address how much uncertainty the choice of bias correction method can add to yield predictions, we use several evaluation runs from Regional Climate Models from the Coordinated Regional Downscaling Experiment over Europe (EURO-CORDEX) at different resolutions, together with different bias correction methods (linear and variance scaling, power transformation, quantile-quantile mapping), as input to a statistical crop model for wheat, a staple European food crop. The objective of our work is to compare the resulting simulation-driven wheat yield hindcasts to observation-driven hindcasts for the UK and Germany, in order to determine the range of yield uncertainty that results from different climate model simulation inputs and bias correction methods. We simulate wheat yields using a General Linear Model that includes the effects of seasonal maximum temperatures and precipitation, since wheat is sensitive to heat stress during important developmental stages. We use the same statistical model to predict future wheat yields using the recently available bias-corrected simulations of EURO-CORDEX-Adjust. While statistical models are often criticized for their lack of complexity, an advantage is that we are able to isolate the effect of the choice of climate model, resolution or bias correction method on yield. Initial results using both past and future bias-corrected climate simulations with a process-based model will also be presented. Through these methods, we make recommendations on preparing climate model output for crop models.
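Of the bias correction methods listed, quantile-quantile mapping is the most flexible and the easiest to get subtly wrong, so a compact illustration may help. A minimal empirical quantile-mapping sketch in Python with NumPy; the synthetic temperature data are placeholders, not EURO-CORDEX output:

    import numpy as np

    def quantile_map(model_hist, obs_hist, model_future):
        """Empirical quantile-quantile mapping: transform model values so their
        CDF over the calibration period matches the observed CDF."""
        q = np.linspace(0.01, 0.99, 99)
        mod_q = np.quantile(model_hist, q)
        obs_q = np.quantile(obs_hist, q)
        # map each value through the model quantiles onto the observed quantiles
        # (linear interpolation; values beyond the tails are clamped)
        return np.interp(model_future, mod_q, obs_q)

    # toy check with synthetic seasonal maximum temperatures (illustrative only)
    rng = np.random.default_rng(0)
    obs = rng.normal(22.0, 3.0, 1000)   # "observed" Tmax
    mod = rng.normal(20.0, 4.0, 1000)   # biased model counterpart
    corrected = quantile_map(mod, obs, mod)
    print(round(corrected.mean(), 2), round(corrected.std(), 2))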
NASA Astrophysics Data System (ADS)
Campana, P. E.; Zhang, J.; Yao, T.; Melton, F. S.; Yan, J.
2017-12-01
Climate change and drought have severe impacts on the agricultural sector affecting crop yields, water availability, and energy consumption for irrigation. Monitoring, assessing and mitigating the effects of climate change and drought on the agricultural and energy sectors are fundamental challenges that require investigation for water, food, and energy security issues. Using an integrated water-food-energy nexus approach, this study is developing a comprehensive drought management system through integration of real-time drought monitoring with real-time irrigation management. The spatially explicit model developed, GIS-OptiCE, can be used for simulation, multi-criteria optimization and generation of forecasts to support irrigation management. To demonstrate the value of the approach, the model has been applied to one major corn region in Nebraska to study the effects of the 2012 drought on crop yield and irrigation water/energy requirements as compared to a wet year such as 2009. The water-food-energy interrelationships evaluated show that significant water volumes and energy are required to halt the negative effects of drought on the crop yield. The multi-criteria optimization problem applied in this study indicates that the optimal solutions of irrigation do not necessarily correspond to those that would produce the maximum crop yields, depending on both water and economic constraints. In particular, crop pricing forecasts are extremely important to define the optimal irrigation management strategy. The model developed shows great potential in precision agriculture by providing near real-time data products including information on evapotranspiration, irrigation volumes, energy requirements, predicted crop growth, and nutrient requirements.
From Gain Score t to ANCOVA F (and Vice Versa)
ERIC Educational Resources Information Center
Knapp, Thomas R.; Schafer, William D.
2009-01-01
Although they test somewhat different hypotheses, analysis of gain scores (or its repeated-measures analog) and analysis of covariance are both common methods that researchers use for pre-post data. The results of the two approaches yield non-comparable outcomes, but since the same generic data are used, it is possible to transform the test…
ERIC Educational Resources Information Center
Banks, Reginald; Hogue, Aaron; Liddle, Howard; Timberlake, Terri
1996-01-01
Compared the effectiveness for inner-city African-American youth (n=64) of two social-skills training curricula focusing on problem solving, anger management, and conflict resolution. Both the Afrocentric curriculum and the one that was merely culturally relevant yielded similar decreases in anger and increases in assertiveness and self-control.…
Assessing pretreatment reactor scaling through empirical analysis
Lischeske, James J.; Crawford, Nathan C.; Kuhn, Erik; ...
2016-10-10
Pretreatment is a critical step in the biochemical conversion of lignocellulosic biomass to fuels and chemicals. Due to the complexity of the physicochemical transformations involved, predictively scaling up technology from bench- to pilot-scale is difficult. This study examines how pretreatment effectiveness under nominally similar reaction conditions is influenced by pretreatment reactor design and scale, using four different pretreatment reaction systems ranging from a 3 g batch reactor to a 10 dry-ton/d continuous reactor. The reactor systems examined were an Automated Solvent Extractor (ASE), Steam Explosion Reactor (SER), ZipperClave® reactor (ZCR), and Large Continuous Horizontal-Screw Reactor (LHR). To our knowledge, this is the first such study performed on pretreatment reactors across a range of reaction conditions (time and temperature) and at different reactor scales. The comparative pretreatment performance results obtained for each reactor system were used to develop response surface models (RSMs) for total xylose yield after pretreatment and total sugar yield after pretreatment followed by enzymatic hydrolysis. Near- and very-near-optimal regions were defined as the sets of conditions that the model identified as producing yields within one and two standard deviations of the optimum yield, respectively. Optimal conditions identified in the smallest-scale system (the ASE) were within the near-optimal region of the largest-scale reactor system evaluated. A reaction severity factor modeling approach was shown to inadequately describe the optimal conditions in the ASE, incorrectly identifying a large set of sub-optimal conditions (as defined by the RSM) as optimal. The maximum total sugar yields for the ASE and LHR were 95%, while 89% was the optimum observed in the ZipperClave. The optimum condition identified using the automated and less costly to operate ASE system was within the very-near-optimal space for the total xylose yield of both the ZCR and the LHR, and within the near-optimal space for total sugar yield for the LHR. This indicates that the ASE is a good tool for cost-effectively finding near-optimal conditions for operating pilot-scale systems, which may be used as starting points for further optimization. Additionally, the severity-factor approach to optimization was found to be inadequate compared to a multivariate optimization method. The ASE and the LHR achieved significantly higher total sugar yields after enzymatic hydrolysis relative to the ZCR, despite having similar optimal conditions and total xylose yields. This underscores the importance of incorporating mechanical disruption into pretreatment reactor designs to achieve high enzymatic digestibilities.
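The severity factor that the study finds inadequate is, in the standard Overend-Chornet formulation, a single number collapsing time and temperature. A short Python sketch showing both its appeal and its weakness: two quite different (t, T) pairs score as nearly equal severity even though a multivariate response surface may rank them differently. The specific conditions below are illustrative, not taken from the study:

    import math

    def log_severity(t_min: float, temp_c: float,
                     t_ref: float = 100.0, omega: float = 14.75) -> float:
        """log10 of the Overend-Chornet severity factor
        R0 = t * exp((T - Tref) / omega), t in minutes, T in deg C."""
        return math.log10(t_min * math.exp((temp_c - t_ref) / omega))

    print(round(log_severity(10, 180), 2))  # 10 min at 180 C  -> ~3.36
    print(round(log_severity(40, 160), 2))  # 40 min at 160 C  -> ~3.37, "same" severity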
Linear unmixing of multidate hyperspectral imagery for crop yield estimation
USDA-ARS?s Scientific Manuscript database
In this paper, we have evaluated an unsupervised unmixing approach, vertex component analysis (VCA), for the application of crop yield estimation. The results show that abundance maps of the vegetation extracted by the approach are strongly correlated to the yield data (the correlation coefficients ...
Fusion yield: Guderley model and Tsallis statistics
NASA Astrophysics Data System (ADS)
Haubold, H. J.; Kumar, D.
2011-02-01
The reaction rate probability integral is extended from the Maxwell-Boltzmann approach to a more general approach by using the pathway model introduced by Mathai in 2005 (A pathway to matrix-variate gamma and normal densities. Linear Algebr. Appl. 396, 317-328). The extended thermonuclear reaction rate is obtained in closed form via Meijer's G-function, and the G-function so obtained is represented as a solution of a homogeneous linear differential equation. A physical model for the hydrodynamical process in a fusion plasma (a compressed, laser-driven spherical shock wave) is used to evaluate the fusion energy integral by integrating the extended thermonuclear reaction rate over the temperature. The result obtained is compared with the standard fusion yield obtained by Haubold and John in 1981 (Analytical representation of the thermonuclear reaction rate and fusion energy production in a spherical plasma shock wave. Plasma Phys. 23, 399-411). An interpretation of the pathway parameter is also given.
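For readers unfamiliar with the pathway extension, the following schematic (a sketch of the construction, not the paper's exact notation, with gamma, a and z standing for the usual shape, thermal and screening parameters of the non-resonant rate integral) shows the key substitution: the Maxwell-Boltzmann factor is replaced by the pathway kernel, which recovers the exponential as the pathway parameter alpha tends to 1:

    I_1 = \int_0^{\infty} y^{\gamma - 1}\, e^{-a y}\, e^{-z y^{-1/2}}\, dy
    \;\longrightarrow\;
    I_{1,\alpha} = \int_0^{\infty} y^{\gamma - 1}\, \bigl[1 + a(\alpha - 1)\, y\bigr]^{-\frac{1}{\alpha - 1}}\, e^{-z y^{-1/2}}\, dy,

    \lim_{\alpha \to 1} \bigl[1 + a(\alpha - 1)\, y\bigr]^{-\frac{1}{\alpha - 1}} = e^{-a y}.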
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Vleet, Mary J.; Misquitta, Alston J.; Stone, Anthony J.
Short-range repulsion within inter-molecular force fields is conventionally described by either Lennard-Jones or Born-Mayer forms. Despite their widespread use, these simple functional forms are often unable to describe the interaction energy accurately over a broad range of inter-molecular distances, thus creating challenges in the development of ab initio force fields and potentially leading to decreased accuracy and transferability. Herein, we derive a novel short-range functional form based on a simple Slater-like model of overlapping atomic densities and an iterated stockholder atom (ISA) partitioning of the molecular electron density. We demonstrate that this Slater-ISA methodology yields a more accurate, transferable, and robust description of the short-range interactions at minimal additional computational cost compared to standard Lennard-Jones or Born-Mayer approaches. Lastly, we show how this methodology can be adapted to yield the standard Born-Mayer functional form while still retaining many of the advantages of the Slater-ISA approach.
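Schematically, the contrast drawn here is between the Born-Mayer exponential and a Slater-like overlap form; assuming the usual density-overlap polynomial for two identical Slater densities (a sketch, not necessarily the paper's final parameterization):

    V_{\mathrm{BM}}(r) = A\, e^{-B r},
    \qquad
    V_{\mathrm{Slater}}(r) = A \left[\tfrac{1}{3}(B r)^2 + B r + 1\right] e^{-B r}.

The polynomial prefactor is what lets a single exponent B remain accurate over a broader range of separations.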
Sputtering of Lunar Regolith Simulant by Protons and Multicharged Heavy Ions at Solar Wind Energies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, Fred W; Harris, Peter R; Taylor, C. N.
2011-01-01
We report preliminary results on sputtering of a lunar regolith simulant at room temperature by singly and multiply charged solar wind ions using quadrupole and time-of-flight (TOF) mass spectrometry approaches. Sputtering of the lunar regolith by solar-wind heavy ions may be an important particle source that contributes to the composition of the lunar exosphere, and is a possible mechanism for lunar surface ageing and compositional modification. The measurements were performed in order to assess the relative sputtering efficiency of protons, which are the dominant constituent of the solar wind, and less abundant heavier multicharged solar wind constituents, which have higher physical sputtering yields than same-velocity protons, and whose sputtering yields may be further enhanced due to potential sputtering. Two different target preparation approaches using JSC-1A AGGL lunar regolith simulant are described and compared using SEM and XPS surface analysis.
NASA Astrophysics Data System (ADS)
Shalin, A. S.
2010-12-01
The boundary problem of light reflection and transmission by a film with chaotically distributed nanoinclusions is considered. Based on the proposed microscopic approach, analytic expressions are derived for the field distributions inside and outside the nanocomposite medium. Good agreement of the results with exact calculations and (at low concentrations of nanoparticles) with the integral Maxwell-Garnett effective-medium theory is demonstrated. It is shown that at high nanoparticle concentrations, averaging the dielectric constant over volume, as is done within the framework of the effective-medium theory, yields overestimated values of the optical film density compared to the values yielded by the proposed microscopic approach. The dependence of the reflectivity of a system of gold nanoparticles on their size and the size dependence of the plasmon resonance position on the wavelength scale are also studied, and good agreement with experimental data is demonstrated.
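The Maxwell-Garnett mixing rule referred to above is the standard dilute-inclusion result; for spherical inclusions of permittivity eps_p at volume fraction f in a matrix eps_m it reads

    \frac{\varepsilon_{\mathrm{eff}} - \varepsilon_m}{\varepsilon_{\mathrm{eff}} + 2\varepsilon_m}
    = f\, \frac{\varepsilon_p - \varepsilon_m}{\varepsilon_p + 2\varepsilon_m},

and its derivation assumes non-interacting dipoles, which is precisely why it overestimates the optical density at high nanoparticle concentrations, as reported here.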
A coupled thermo-mechanical pseudo inverse approach for preform design in forging
NASA Astrophysics Data System (ADS)
Thomas, Anoop Ebey; Abbes, Boussad; Li, Yu Ming; Abbes, Fazilay; Guo, Ying-Qiao; Duval, Jean-Louis
2017-10-01
Hot forging is a process used to form difficult-to-form materials and to achieve complex geometries, made possible by the reduction of yield stress at high temperatures and the consequent increase in formability. Numerical methods have been used to predict the material yield and the stress/strain states of the final product. The Pseudo Inverse Approach (PIA), developed in the context of cold forming, provides a quick estimate of the stress and strain fields in the final product for a given initial shape. In this paper, PIA is extended to include thermal effects on the forging process. A Johnson-Cook thermo-viscoplastic material law is considered, and a staggered scheme is employed for the coupling between the mechanical and thermal problems. The results are compared with available commercial codes to show the efficiency and the limitations of PIA.
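The Johnson-Cook law mentioned above multiplicatively separates strain hardening, strain-rate sensitivity and thermal softening; in its usual form (A, B, n, C, m and the reference strain rate are material constants):

    \sigma_y = \left(A + B\,\bar{\varepsilon}_p^{\,n}\right)
               \left(1 + C \ln \frac{\dot{\bar{\varepsilon}}_p}{\dot{\varepsilon}_0}\right)
               \left(1 - {T^{*}}^{m}\right),
    \qquad
    T^{*} = \frac{T - T_{\mathrm{room}}}{T_{\mathrm{melt}} - T_{\mathrm{room}}}.

The (1 - T*^m) factor is what couples the thermal solution back into the mechanical one at each pass of the staggered scheme.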
Benefit-risk Evaluation for Diagnostics: A Framework (BED-FRAME).
Evans, Scott R; Pennello, Gene; Pantoja-Galicia, Norberto; Jiang, Hongyu; Hujer, Andrea M; Hujer, Kristine M; Manca, Claudia; Hill, Carol; Jacobs, Michael R; Chen, Liang; Patel, Robin; Kreiswirth, Barry N; Bonomo, Robert A
2016-09-15
The medical community needs systematic and pragmatic approaches for evaluating the benefit-risk trade-offs of diagnostics that assist in medical decision making. Benefit-Risk Evaluation of Diagnostics: A Framework (BED-FRAME) is a strategy for pragmatic evaluation of diagnostics designed to supplement traditional approaches. BED-FRAME evaluates diagnostic yield and addresses 2 key issues: (1) that diagnostic yield depends on prevalence, and (2) that different diagnostic errors carry different clinical consequences. As such, evaluating and comparing diagnostics depends on prevalence and the relative importance of potential errors. BED-FRAME provides a tool for communicating the expected clinical impact of diagnostic application and the expected trade-offs of diagnostic alternatives. BED-FRAME is a useful fundamental supplement to the standard analysis of diagnostic studies that will aid in clinical decision making. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.
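The first of BED-FRAME's two key issues, prevalence dependence, is easy to see numerically: with sensitivity and specificity held fixed, the positive predictive value collapses at low prevalence. A minimal Python illustration with made-up performance figures (not values from the BED-FRAME paper):

    def ppv(sens: float, spec: float, prev: float) -> float:
        """Positive predictive value from sensitivity, specificity, prevalence."""
        tp = sens * prev                 # true-positive mass
        fp = (1 - spec) * (1 - prev)     # false-positive mass
        return tp / (tp + fp)

    # a hypothetical test with 95% sensitivity and specificity:
    for prev in (0.01, 0.10, 0.50):
        print(f"prevalence {prev:.0%}: PPV = {ppv(0.95, 0.95, prev):.2f}")
    # -> 0.16, 0.68, 0.95: the same assay is nearly useless or nearly definitive
    #    depending only on prevalence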
Li, Zhen-Yu; Zhang, Sha-Sha; Jie-Xing; Qin, Xue-Mei
2015-01-01
In this study, an ionic liquid (IL)-based extraction approach was successfully applied to the extraction of essential oil from Farfarae Flos, and the effect of lithium chloride was also investigated. The results indicated that the oil yields can be increased by the ILs and the extraction time reduced significantly (from 4 h to 2 h) compared with conventional water distillation. The addition of lithium chloride showed different effects depending on the structures of the ILs; the oil yields may be related to the structure of the cation, while the chemical composition of the essential oil may be related to the anion. The reduction of extraction time and the remarkably higher efficiency (5.41-62.17% improvement) achieved by combining a lithium salt with a proper IL support the suitability of the proposed approach. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Chahbi, Aicha; Zribi, Mehrez; Lili-Chabaane, Zohra
2016-04-01
In arid and semi-arid areas, population growth, urbanization, food security concerns and climate change all affect agriculture in general and cereal production in particular. Therefore, to improve food security in arid countries, crop canopy monitoring and cereal yield forecasting are needed. Many models, based on remote sensing or agro-meteorological modeling, have been developed to estimate the biomass and grain yield of cereals. Using a rich database, acquired over a period of two years for more than 80 test fields, together with optical SPOT/HRV satellite images, the aim of the present study is to evaluate the feasibility of two yield prediction approaches. The first approach is based on the application of the semi-empirical growth model SAFY, developed to simulate the dynamics of the LAI and the grain yield at the field scale. The model is able to reproduce the time evolution of the leaf area index of all fields with acceptable error. However, an inter-comparison between ground yield measurements and SAFY model simulations reveals that yields are underestimated by this model. The limits of the semi-empirical SAFY model can be explained by its simplicity and by various factors that were not considered (fertilization, irrigation, etc.). To improve the yield estimation, a new approach is proposed: the grain yield is estimated as a function of the LAI in the growth period between 25 March and 5 April, with the LAI of this period estimated by the SAFY model. A linear relationship is developed between the measured grain yield and the integrated LAI over the maximum growth period. This approach is robust: the measured and estimated grain yields are well correlated. Following the validation of this approach, yield estimates are produced for the entire studied site using the SPOT/HRV images.
Role of the N*(1535) in the J/ψ → pηp and J/ψ → pK+Λ reactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geng, L. S.; Oset, E.; Zou, B. S.
2009-02-15
We study the J/ψ → pηp and J/ψ → pK+Λ reactions with a unitary chiral approach. We find that the unitary chiral approach, which generates the N*(1535) dynamically, can describe the data reasonably well, particularly the ratio of the integrated cross sections. This study provides further support for the unitary chiral description of the N*(1535). We also discuss some subtle differences between the coupling constants determined from the unitary chiral approach and those determined from phenomenological studies.
NASA Astrophysics Data System (ADS)
Navon, M. I.; Stefanescu, R.
2013-12-01
Previous assimilation of lightning data used nudging approaches. We develop three approaches, namely 3D-VAR WRFDA and 1D+nD-VAR (n = 3, 4) WRFDA. The present research uses Convective Available Potential Energy (CAPE) as a proxy between lightning data and model variables. To test the performance of the aforementioned schemes, we assess the quality of the resulting analyses and forecasts of precipitation compared to those from a control experiment and verify them against NCEP stage IV precipitation. Results demonstrate that assimilating lightning observations improves precipitation statistics during the assimilation window and for 3-7 h thereafter. The 1D+4D-VAR approach yielded the best performance, significantly improving precipitation RMSE by 25% and 27.5% compared to the control during the assimilation window for two tornadic test cases. Finally, we propose a new approach to assimilate 2-D images of lightning flashes based on pixel intensity, mitigating dimensionality by a reduced-order method.
A New Approach to Simulate Groundwater Table Dynamics and Its Validation in China
NASA Astrophysics Data System (ADS)
Lv, M.; Lu, H.; Dan, L.; Yang, K.
2017-12-01
Groundwater plays a very important role in hydrology-climate-human activity interactions, but groundwater table dynamics are currently not well simulated in global-scale land surface models. Meanwhile, almost all groundwater schemes adopt a specific yield method to estimate the groundwater table, and determining the proper specific yield value remains a big challenge. In this study, we developed a Soil Moisture Correlation (SMC) method to simulate groundwater table dynamics. We coupled SMC with a hydrological model (referred to as NEW) and compared it with the original model, in which a specific yield method is used (referred to as CTL). Both NEW and CTL were tested in the Tangnaihai Subbasin of the Yellow River and the Jialingjiang Subbasin of the Yangtze River, where groundwater is little impacted by human activities. The simulated discharges from NEW and CTL were compared against gauge observations. The comparison reveals that, after calibration, both models are able to reproduce the discharge well; however, no parameter needs to be calibrated for SMC, indicating that the SMC method is more efficient and easier to use than the specific yield method. Since there is no direct groundwater table observation in these two basins, the simulated groundwater tables were compared with the global data set provided by Fan et al. (2013); both NEW and CTL estimate shallower depths than that data set. Moreover, the variation of terrestrial water storage (TWS) derived from NEW agrees well with that observed by GRACE. This demonstrates that the SMC method is able to reproduce groundwater table dynamics reliably.
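For context, the specific yield method that SMC replaces converts a change in groundwater storage (expressed as an equivalent water depth) into a water table change via

    \Delta h = \frac{\Delta S}{S_y},

so every simulation inherits the choice of the specific yield S_y; the SMC method's appeal, as described above, is exactly that no such parameter has to be calibrated.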
Rooting for food security in Sub-Saharan Africa
NASA Astrophysics Data System (ADS)
Guilpart, Nicolas; Grassini, Patricio; van Wart, Justin; Yang, Haishun; van Ittersum, Martin K.; van Bussel, Lenny G. J.; Wolf, Joost; Claessens, Lieven; Leenaars, Johan G. B.; Cassman, Kenneth G.
2017-11-01
There is a persistent narrative about the potential of Sub-Saharan Africa (SSA) to be a ‘grain breadbasket’ because of large gaps between current low yields and yield potential with good management, and vast land resources with adequate rainfall. However, rigorous evaluation of the extent to which soils can support high, stable yields has been limited by lack of data on rootable soil depth of sufficient quality and spatial resolution. Here we use location-specific climate data, a robust spatial upscaling approach, and crop simulation to assess sensitivity of rainfed maize yields to root-zone water holding capacity. We find that SSA could produce a modest maize surplus but only if rootable soil depths are comparable to that of other major breadbaskets, such as the US Corn Belt and South American Pampas, which is unlikely based on currently available information. Otherwise, producing surplus grain for export will depend on expansion of crop area with the challenge of directing this expansion to regions where soil depth and rainfall are supportive of high and consistent yields, and where negative impacts on biodiversity are minimal.
Using Landsat to provide potato production estimates to Columbia Basin farmers and processors
NASA Technical Reports Server (NTRS)
1990-01-01
A summary of project activities relative to the estimation of potato yields in the Columbia Basin is given. Oregon State University is using a two-pronged approach to yield estimation, one using simulation models and the other using purely empirical models. The simulation modeling approach has used satellite observations to determine key dates in the development of the crop for each field identified as potatoes. In particular, these include planting dates, emergence dates, and harvest dates. These critical dates are fed into simulation models of crop growth and development to derive yield forecasts. Two empirical modeling approaches are illustrated. One relates tuber yield to estimates of cumulative intercepted solar radiation; the other relates tuber yield to the integral under the GVI curve.
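The second empirical model is the simplest to reproduce: integrate the GVI time series over the season and regress tuber yield on that integral. A minimal Python sketch; the dates, GVI values, and regression coefficients below are placeholders, not project data:

    import numpy as np

    # day-of-year and (hypothetical) GVI observations for one potato field
    doy = np.array([120, 140, 160, 180, 200, 220, 240], dtype=float)
    gvi = np.array([0.05, 0.20, 0.55, 0.80, 0.85, 0.60, 0.25])

    # trapezoidal integral under the GVI curve, the empirical predictor
    gvi_integral = float(np.sum((gvi[1:] + gvi[:-1]) / 2.0 * np.diff(doy)))

    # across many fields, regressing measured yield on this integral would give
    # the coefficients; a and b below are placeholders, not project values
    a, b = 0.9, 5.0
    tuber_yield = a * gvi_integral / 100.0 + b
    print(round(gvi_integral, 1), round(tuber_yield, 1))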
Xiao Jia; Meng, Max Q-H
2017-07-01
Gastrointestinal (GI) bleeding detection plays an essential role in wireless capsule endoscopy (WCE) examination. In this paper, we present a new approach for WCE bleeding detection that combines handcrafted (HC) features and convolutional neural network (CNN) features. Compared with our previous work, a smaller-scale CNN architecture is constructed to lower the computational cost. In experiments, we show that the proposed strategy is highly capable when training data are limited, and yields results comparable to or better than the latest methods.
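The fusion described is, at its simplest, feature-level concatenation of the HC vector and the CNN embedding before a conventional classifier. A minimal Python sketch with scikit-learn; the feature dimensions, the SVM classifier, and the random stand-in data are assumptions, since the abstract does not specify them:

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n = 200
    hc = rng.normal(size=(n, 32))     # handcrafted colour/texture features (stand-in)
    cnn = rng.normal(size=(n, 128))   # CNN embedding from a small network (stand-in)
    y = rng.integers(0, 2, size=n)    # bleeding vs. normal labels (stand-in)

    X = np.hstack([hc, cnn])          # simple feature-level fusion
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    clf = SVC().fit(Xtr, ytr)
    print(f"held-out accuracy: {clf.score(Xte, yte):.2f}")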
Pattern and Process in the Comparative Study of Convergent Evolution.
Mahler, D Luke; Weber, Marjorie G; Wagner, Catherine E; Ingram, Travis
2017-08-01
Understanding processes that have shaped broad-scale biodiversity patterns is a fundamental goal in evolutionary biology. The development of phylogenetic comparative methods has yielded a tool kit for analyzing contemporary patterns by explicitly modeling processes of change in the past, providing neontologists tools for asking questions previously accessible only for select taxa via the fossil record or laboratory experimentation. The comparative approach, however, differs operationally from alternative approaches to studying convergence in that, for studies of only extant species, convergence must be inferred using evolutionary process models rather than being directly measured. As a result, investigation of evolutionary pattern and process cannot be decoupled in comparative studies of convergence, even though such a decoupling could in theory guard against adaptationist bias. Assumptions about evolutionary process underlying comparative tools can shape the inference of convergent pattern in sometimes profound ways and can color interpretation of such patterns. We discuss these issues and other limitations common to most phylogenetic comparative approaches and suggest ways that they can be avoided in practice. We conclude by promoting a multipronged approach to studying convergence that integrates comparative methods with complementary tests of evolutionary mechanisms and includes ecological and biogeographical perspectives. Carefully employed, the comparative method remains a powerful tool for enriching our understanding of convergence in macroevolution, especially for investigation of why convergence occurs in some settings but not others.
Lopes, Sidnei Antônio; Paulino, Mário Fonseca; Detmann, Edenio; Valente, Ériton Egídio Lisboa; de Barros, Lívia Vieira; Rennó, Luciana Navajas; de Campos Valadares Filho, Sebastião; Martins, Leandro Soares
2016-08-01
The aim of this study was to evaluate the effects of beef calves' supplementation in creep feeding systems on milk yield, body weight (BW), and body condition score (BCS) of their dams on tropical pastures using a meta-analytical approach. The database was obtained from 11 experiments conducted between 2009 and 2014 in Brazil, totaling 485 observations (cows). The database consisted of 273 Nellore and 212 crossbred (7/8 Nellore × 1/8 Holstein) cows. All experiments were carried out in the suckling phase (from 3 to 8 months of age of calves) during the transition phase between rainy and dry seasons from February to June of different years. The data were analyzed by a meta-analytical approach using mixed models and taking into account random variation among experiments. Calves' supplementation (P ≥ 0.59) and the calves' sex (P ≥ 0.48) did not affect milk yield of cows. The average fat-corrected milk (FCM) yield was 6.71 and 6.83 kg/day for cows that had their calves supplemented and not supplemented, respectively. Differences were observed (P < 0.0001) for milk yield due to the genetic group where crossbred cows presented greater FCM yield (7.37 kg/day) compared with Nellore cows (6.17 kg/day). There was no effect of the calves' supplementation on BW change (P ≥ 0.11) and BCS change (P ≥ 0.23) of the cows. Therefore, it is concluded that supplementation of beef calves using creep feeding systems in tropical pastures does not affect milk yield, body weight, or body condition of their dams.
High-yield maize with large net energy yield and small global warming intensity
Grassini, Patricio; Cassman, Kenneth G.
2012-01-01
Addressing concerns about future food supply and climate change requires management practices that maximize productivity per unit of arable land while reducing negative environmental impact. On-farm data were evaluated to assess energy balance and greenhouse gas (GHG) emissions of irrigated maize in Nebraska that received large nitrogen (N) fertilizer (183 kg of N⋅ha−1) and irrigation water inputs (272 mm or 2,720 m3 ha−1). Although energy inputs (30 GJ⋅ha−1) were larger than those reported for US maize systems in previous studies, irrigated maize in central Nebraska achieved higher grain and net energy yields (13.2 Mg⋅ha−1 and 159 GJ⋅ha−1, respectively) and lower GHG-emission intensity (231 kg of CO2e⋅Mg−1 of grain). Greater input-use efficiencies, especially for N fertilizer, were responsible for better performance of these irrigated systems, compared with much lower-yielding, mostly rainfed maize systems in previous studies. Large variation in energy inputs and GHG emissions across irrigated fields in the present study resulted from differences in applied irrigation water amount and imbalances between applied N inputs and crop N demand, indicating potential to further improve environmental performance through better management of these inputs. Observed variation in N-use efficiency, at any level of applied N inputs, suggests that an N-balance approach may be more appropriate for estimating soil N2O emissions than the Intergovernmental Panel on Climate Change approach based on a fixed proportion of applied N. Negative correlation between GHG-emission intensity and net energy yield supports the proposition that achieving high yields, large positive energy balance, and low GHG emissions in intensive cropping systems are not conflicting goals. PMID:22232684
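The headline numbers combine into the per-area footprint directly; as a worked check (using only values quoted above, and assuming net energy = gross energy output minus energy input):

    231\ \mathrm{kg\ CO_2e\ Mg^{-1}} \times 13.2\ \mathrm{Mg\ ha^{-1}}
    \approx 3.0\ \mathrm{Mg\ CO_2e\ ha^{-1}},
    \qquad
    \frac{159 + 30}{30} \approx 6.3\ \text{(output:input energy ratio)}.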
NASA Astrophysics Data System (ADS)
Jayanthi, Harikishan
The focus of this research was two-fold: (1) extend the reflectance-based crop coefficient approach to non-grain (potato and sugar beet) and vegetable (bean) crops, and (2) develop vegetation index (VI)-yield statistical models for potato and sugar beet crops using high-resolution aerial multispectral imagery. Extensive crop biophysical sampling (leaf area index and aboveground dry biomass) and canopy reflectance measurements formed the backbone of the development of canopy reflectance-based crop coefficients for bean, potato, and sugar beet crops in this study. Reflectance-based crop coefficient equations were developed for the study crops cultivated in Kimberly, Idaho, and subsequently used in water availability simulations of the plant root zone during the 1998 and 1999 seasons. The simulated soil water profiles were compared with independent measurements of actual soil water profiles in the crop root zone in selected fields. It is concluded that the canopy reflectance-based crop coefficient technique can be successfully extended to non-grain crops as well. While traditional basal crop coefficients generally assume uniform growth across a region, reflectance-based crop coefficients represent the actual crop growth pattern (under less-than-ideal water availability conditions) in individual fields. The literature on crop canopy interactions with sunlight indicates a definite correspondence between leaf area index progression during the season and the final yield. In the case of crops like potato and sugar beet, the yield is influenced not only by how early and how quickly the crop establishes its canopy but also by how long the plant stands on the ground in a healthy state. The integrated area under the crop growth curve showed excellent correlations with hand-dug samples of potato and sugar beet crops in this research. Soil-adjusted vegetation index-yield models were developed and validated using multispectral aerial imagery. Estimated yield images were compared with the actual yields extracted from the ground, and the remote sensing-derived yields compared well with the actual yields sampled on the ground. This research has highlighted the importance of the date of spectral emergence, the need to know the duration for which the crops stand on the ground, and the need to identify critical periods when multispectral coverages are essential for reliable tuber yield estimation.
Motivational interventions in community hypertension screening.
Stahl, S M; Lawrie, T; Neill, P; Kelley, C
1977-01-01
To evaluate different techniques intended to motivate community residents to have their blood pressures taken, five inner-city target areas with comparable, predominantly Black, populations were selected. A sample of about 200 households in each of four areas were subjected to different motivational interventions; in one of these four areas, households were approached in a series of four sequential steps. The fifth target area served as a control. Findings establish that home visits by community members trained to take blood pressure measurements (BPMs) in the home produces much larger yields of new (previously unknown) hypertensives than more passive techniques such as invitational letters and gift offers. Prior informational letters, including letters specifying time of visit, do not affect refusals or increase the yield. More "passive" motivational techniques yield a higher proportion of previously known hypertensives than the more "active" outreach efforts. PMID:848618
Guo, Wei; Feng, Jinfei; Li, Lanhai; Yang, Haishui; Wang, Xiaohua; Bian, Xinmin
2014-01-01
Drip irrigation has been broadly extended to save water in the arid cotton production region of China. Biochar is thought to be a useful soil amendment to reduce greenhouse gas (GHG) emissions. Here, a field study was conducted to compare the emissions of nitrous oxide (N2O) and methane (CH4) under different irrigation methods (drip irrigation (D) and furrow irrigation (F)) and fertilization regimes (conventional fertilization (C) and conventional fertilization + biochar (B)) during the cotton growth season. The accumulated N2O emissions were significantly lower with FB, DC, and DB than with FC, by 28.8%, 36.1%, and 37.6%, while accumulated CH4 uptake was 264.5%, 226.7%, and 154.2% higher with DC, DB, and FC, respectively, than with FB. Irrigation method had a significant effect on total global warming potential (GWP) and yield-scaled GWP (P < 0.01). DC and DB showed higher cotton yield and water use efficiency (WUE) and lower yield-scaled GWP than FC and FB. This suggests that in northwestern China mulched drip irrigation should be a better approach to increase cotton yield with reduced GHG emissions. In addition, biochar addition increased CH4 emissions while decreasing N2O emissions. PMID:25133229
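Yield-scaled GWP as used above is total CO2-equivalent emissions divided by crop yield. A minimal Python sketch; the GWP100 factors (298 for N2O and 25 for CH4, the IPCC AR4 values) and the flux numbers are illustrative assumptions, not the study's data:

    # GWP100 conversion factors (IPCC AR4 values; other reports differ slightly)
    GWP_N2O, GWP_CH4 = 298.0, 25.0

    def yield_scaled_gwp(n2o_kg_ha: float, ch4_kg_ha: float, yield_mg_ha: float) -> float:
        """kg CO2e per Mg of crop; a negative CH4 flux denotes soil uptake."""
        total = n2o_kg_ha * GWP_N2O + ch4_kg_ha * GWP_CH4  # kg CO2e per ha
        return total / yield_mg_ha

    # hypothetical seasonal fluxes and yield for one treatment:
    print(round(yield_scaled_gwp(1.2, -0.6, 4.5), 1))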
Tau-U: A Quantitative Approach for Analysis of Single-Case Experimental Data in Aphasia.
Lee, Jaime B; Cherney, Leora R
2018-03-01
Tau-U is a quantitative approach for analyzing single-case experimental design (SCED) data. It combines nonoverlap between phases with intervention phase trend and can correct for a baseline trend (Parker, Vannest, & Davis, 2011). We demonstrate the utility of Tau-U by comparing it with the standardized mean difference approach (Busk & Serlin, 1992) that is widely reported within the aphasia SCED literature. Repeated writing measures from 3 participants with chronic aphasia who received computer-based writing treatment are analyzed visually and quantitatively using both Tau-U and the standardized mean difference approach. Visual analysis alone was insufficient for determining an effect between the intervention and writing improvement. The standardized mean difference yielded effect sizes ranging from 4.18 to 26.72 for trained items and 1.25 to 3.20 for untrained items. Tau-U yielded significant (p < .05) effect sizes for 2 of 3 participants for trained probes and 1 of 3 participants for untrained probes. A baseline trend correction was applied to data from 2 of 3 participants. Tau-U has the unique advantage of allowing for the correction of an undesirable baseline trend. Although further study is needed, Tau-U shows promise as a quantitative approach to augment visual analysis of SCED data in aphasia.
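The nonoverlap core of Tau-U is just a Kendall-style count over all baseline-treatment pairs. A simplified Python sketch of that core (the full Tau-U of Parker, Vannest, & Davis, 2011, additionally incorporates intervention-phase trend and the baseline-trend correction; the probe values below are hypothetical):

    from itertools import product

    def tau_ab(baseline, treatment):
        """Basic A-vs-B nonoverlap Tau: the proportion of baseline/treatment
        pairs that improve minus the proportion that deteriorate."""
        pairs = list(product(baseline, treatment))
        s = sum((b > a) - (b < a) for a, b in pairs)
        return s / len(pairs)

    A = [2, 3, 2, 4]        # baseline writing probes (hypothetical)
    B = [5, 6, 6, 7, 8]     # treatment-phase probes (hypothetical)
    print(round(tau_ab(A, B), 2))   # 1.0 indicates complete nonoverlap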
Uncertainty in Modeling Dust Mass Balance and Radiative Forcing from Size Parameterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Chun; Chen, Siyu; Leung, Lai-Yung R.
2013-11-05
This study examines the uncertainties in simulating mass balance and radiative forcing of mineral dust due to biases in the aerosol size parameterization. Simulations are conducted quasi-globally (180°W-180°E, 60°S-70°N) using the WRF-Chem model with three different approaches to represent the aerosol size distribution (8-bin, 4-bin, and 3-mode). The biases of the 3-mode and 4-bin approaches against the relatively more accurate 8-bin approach in simulating dust mass balance and radiative forcing are identified. Compared to the 8-bin approach, the 4-bin approach simulates similar but coarser size distributions of dust particles in the atmosphere, while the 3-mode approach retains more fine dust particles but fewer coarse dust particles due to the prescribed σg of each mode. Although the 3-mode approach yields up to 10 days longer dust mass lifetime over remote oceanic regions than the 8-bin approach, the three size approaches produce similar dust mass lifetimes (3.2 to 3.5 days) on quasi-global average, reflecting that the global dust mass lifetime is mainly determined by the dust mass lifetime near the dust source regions. With the same global dust emission (~6000 Tg yr-1), the 8-bin approach produces a dust mass loading of 39 Tg, while the 4-bin and 3-mode approaches produce 3% (40.2 Tg) and 25% (49.1 Tg) higher dust mass loadings, respectively. The difference in dust mass loading between the 8-bin approach and the 4-bin or 3-mode approaches has large spatial variations, with generally smaller relative differences (<10%) near the surface over the dust source regions. The three size approaches also result in significantly different dry and wet deposition fluxes and number concentrations of dust. The difference in dust aerosol optical depth (AOD) (a factor of 3) among the three size approaches is much larger than the difference (25%) in dust mass loading. Compared to the 8-bin approach, the 4-bin approach yields stronger dust absorptivity, while the 3-mode approach yields weaker dust absorptivity. Overall, on quasi-global average, the three size parameterizations result in a significant difference of a factor of 2-3 in dust surface cooling (-1.02 to -2.87 W m-2) and atmospheric warming (0.39 to 0.96 W m-2), and a tremendous difference of a factor of ~10 in dust TOA cooling (-0.24 to -2.20 W m-2). An uncertainty of a factor of 2 is quantified in dust emission estimation due to the different size parameterizations. This study also highlights the uncertainties in modeling dust mass and number loading, deposition fluxes, and radiative forcing resulting from different size parameterizations, and motivates further investigation of the impact of size parameterizations on modeling dust impacts on air quality, climate, and ecosystems.
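The fine/coarse redistribution attributed above to the prescribed σg can be reproduced in a few lines: given a mode's mass median diameter and geometric standard deviation, the lognormal CDF dictates how much mass lands in each sectional bin. A Python sketch with hypothetical bin edges and mode parameters (not the WRF-Chem settings):

    import numpy as np
    from math import erf, log, sqrt

    def bin_mass_fractions(edges_um, dg_um, sigma_g):
        """Mass fraction of a lognormal mode (mass median diameter dg_um,
        geometric standard deviation sigma_g) falling in each diameter bin."""
        def cdf(d):  # lognormal CDF evaluated at diameter d
            return 0.5 * (1.0 + erf(log(d / dg_um) / (sqrt(2.0) * log(sigma_g))))
        edges = np.asarray(edges_um, dtype=float)
        return np.array([cdf(hi) - cdf(lo) for lo, hi in zip(edges[:-1], edges[1:])])

    # 4 hypothetical dust bins (edges in um) vs. a coarse mode with fixed sigma_g;
    # widening sigma_g shifts mass from the coarse bins into the fine ones
    print(bin_mass_fractions([0.1, 1.0, 2.5, 5.0, 10.0], dg_um=3.5, sigma_g=2.0).round(3))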
Afriat-Jurnou, Livnat; Cohen, Rami; Paluy, Irina; Ben-Adiva, Ran; Yadid, Itamar
2018-02-01
Inulinases are fructofuranosyl hydrolases that target the β-2,1 linkage of inulin and hydrolyze it into fructose, glucose and inulooligosaccharides (IOS); the latter are of growing interest as dietary fibers. Inulinases from various microorganisms have been purified, characterized and produced for industrial applications. However, there remains a need for inulinases with increased catalytic activity and better production yields to improve the hydrolysis process and fulfill the growing industrial demand for specific fibers. In this study, we used directed enzyme evolution to increase the yield and activity of an endoinulinase originating from the filamentous fungus Talaromyces purpureogenus (Penicillium purpureogenum ATCC4713). Our directed evolution approach yielded variants showing up to fivefold improvements in soluble enzyme production compared to the starting point, which enabled high-yield production of highly purified recombinant enzyme. The distribution of the enzymatic reaction products demonstrated that after 24 h of incubation, the main product (57%) had a degree of polymerization of 3 (DP3). To the best of our knowledge, this is the first application of directed enzyme evolution to improve inulooligosaccharide production. The approach enabled the screening of large genetic libraries within short time frames and facilitated screening for improved enzymatic activities and properties, such as substrate specificity, product range, thermostability and pH optimum. © 2018 American Institute of Chemical Engineers.
NASA Astrophysics Data System (ADS)
Al Samarai, Imen; Deligny, Olivier; Rosado, Jaime
2016-10-01
A small contribution of molecular Bremsstrahlung radiation to the air-fluorescence yield in the UV range is estimated based on an approach previously developed in the framework of the radio-detection of showers in the gigahertz frequency range. First, this approach is shown to provide an estimate of the main contribution to the fluorescence yield, due to the de-excitation of the C3Πu electronic level of nitrogen molecules to the B3Πg one, amounting to Y[337] = (6.05 ± 1.50) MeV-1 at 800 hPa pressure and 293 K temperature, which compares well to previous dedicated works and to experimental results. Then, under the same pressure and temperature conditions, the fluorescence yield induced by molecular Bremsstrahlung radiation is found to be Y[330-400]MBR = 0.10 MeV-1 in the wavelength range of interest for the air-fluorescence detectors used to detect extensive air showers induced in the atmosphere by ultra-high energy cosmic rays. This means that out of ≃175 photons with wavelength between 330 and 400 nm detected by fluorescence detectors, one has been produced by molecular Bremsstrahlung radiation. Although small, this contribution is not negligible in regards to the total budget of systematic uncertainties when considering the absolute energy scale of fluorescence detectors.
Meta-analysis of climate impacts and uncertainty on crop yields in Europe
NASA Astrophysics Data System (ADS)
Knox, Jerry; Daccache, Andre; Hess, Tim; Haro, David
2016-11-01
Future changes in temperature, rainfall and soil moisture could threaten agricultural land use and crop productivity in Europe, with major consequences for food security. We assessed the projected impacts of climate change on the yield of seven major crop types (viz. wheat, barley, maize, potato, sugar beet, rice and rye) grown in Europe using a systematic review (SR) and meta-analysis of data reported in 41 original publications from an initial screening of 1748 studies. Our approach adopted an established SR procedure developed by the Centre for Evidence Based Conservation, constrained by inclusion criteria and defined methods for literature searches, data extraction, meta-analysis and synthesis. Whilst similar studies exist to assess climate impacts on crop yield in Africa and South Asia, surprisingly, no comparable synthesis has been undertaken for Europe. Based on the reported results (n = 729), we show that the projected change in average yield in Europe for the seven crops by the 2050s is +8%. For wheat and sugar beet, average yield changes of +14% and +15% are projected, respectively. There were strong regional differences, with crop impacts in northern Europe being higher (+14%) and more variable compared to central (+6%) and southern (+5%) Europe. Maize is projected to suffer the largest negative mean change in southern Europe (-11%). Evidence of climate impacts on yield was extensive for wheat, maize, sugar beet and potato, but very limited for barley, rice and rye. The implications for supporting climate adaptation policy and informing climate impacts crop science research in Europe are discussed.
Using LANDSAT to provide potato production estimates to Columbia Basin farmers and processors
NASA Technical Reports Server (NTRS)
1991-01-01
The estimation of potato yields in the Columbia Basin is described. The fundamental objective is to provide CROPIX with working models of potato production. A two-pronged approach to yield estimation was used: (1) simulation models, and (2) purely empirical models. The simulation modeling approach used satellite observations to determine certain key dates in the development of the crop for each field identified as potatoes; in particular, these include planting dates, emergence dates, and harvest dates. These critical dates are fed into simulation models of crop growth and development to derive yield forecasts. Purely empirical models were developed to relate yield to a spectrally derived measure of crop development. Two empirical approaches are presented: one relates tuber yield to estimates of cumulative intercepted solar radiation, the other relates tuber yield to the integral under the GVI (Global Vegetation Index) curve.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burnum-Johnson, Kristin E.; Nie, Song; Casey, Cameron P.
Current proteomics approaches are comprised of both broad discovery measurements as well as more quantitative targeted measurements. These two different measurement types are used to initially identify potentially important proteins (e.g., candidate biomarkers) and then enable improved quantification for a limited number of selected proteins. However, both approaches suffer from limitations, particularly the lower sensitivity, accuracy, and quantitation precision for discovery approaches compared to targeted approaches, and the limited proteome coverage provided by targeted approaches. Herein, we describe a new proteomics approach that allows both discovery and targeted monitoring (DTM) in a single analysis using liquid chromatography, ion mobility spectrometry and mass spectrometry (LC-IMS-MS). In DTM, heavy labeled peptides for target ions are spiked into tryptic digests and both the labeled and unlabeled peptides are broadly detected using LC-IMS-MS instrumentation, allowing the benefits of discovery and targeted approaches. To understand the possible improvement of the DTM approach, it was compared to LC-MS broad measurements using an accurate mass and time tag database and selected reaction monitoring (SRM) targeted measurements. The DTM results yielded greater peptide/protein coverage and a significant improvement in the detection of lower abundance species compared to LC-MS discovery measurements. DTM was also observed to have similar detection limits as SRM for the targeted measurements, indicating its potential for combining the discovery and targeted approaches.
Whole-Genome Characterization of Prunus necrotic ringspot virus Infecting Sweet Cherry in China
2018-01-01
Prunus necrotic ringspot virus (PNRSV) causes yield loss in most cultivated stone fruits, including sweet cherry. Using a small RNA deep-sequencing approach combined with end-genome sequence cloning, we identified the complete genomes of all three PNRSV strands from PNRSV-infected sweet cherry trees and compared them with those of two previously reported isolates. PMID:29496825
Cortez, Michael H; Ellner, Stephen P
2010-11-01
The accumulation of evidence that ecologically important traits often evolve at the same time and rate as ecological dynamics (e.g., changes in species' abundances or spatial distributions) has outpaced theory describing the interplay between ecological and evolutionary processes with comparable timescales. The disparity between experiment and theory is partially due to the high dimensionality of models that include both evolutionary and ecological dynamics. Here we show how the theory of fast-slow dynamical systems can be used to reduce model dimension, and we use that body of theory to study a general predator-prey system exhibiting fast evolution in either the predator or the prey. Our approach yields graphical methods with predictive power about when new and unique dynamics (e.g., completely out-of-phase oscillations and cryptic dynamics) can arise in ecological systems exhibiting fast evolution. In addition, we derive analytical expressions for determining when such behavior arises and how evolution affects qualitative properties of the ecological dynamics. Finally, while the theory requires a separation of timescales between the ecological and evolutionary processes, our approach yields insight into systems where the rates of those processes are comparable and thus is a step toward creating a general ecoevolutionary theory.
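A schematic of the fast-slow reduction used (generic notation, not the authors' specific model): with a prey or predator trait q evolving much faster than the densities, the system

    \dot{N} = N\, f(N, P, q), \qquad
    \dot{P} = P\, g(N, P, q), \qquad
    \epsilon\, \dot{q} = h(N, P, q), \qquad 0 < \epsilon \ll 1,

collapses, in the singular limit epsilon -> 0, onto the critical manifold h(N, P, q) = 0, so the trait can be eliminated and the ecological dynamics studied graphically on a lower-dimensional system.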
Van Norman, Ethan R; Nelson, Peter M; Klingbeil, David A
2017-09-01
Educators need recommendations to improve screening practices without limiting students' instructional opportunities. Repurposing previous years' state test scores has shown promise in identifying at-risk students within multitiered systems of support. However, researchers have not directly compared the diagnostic accuracy of previous years' state test scores with data collected during fall screening periods to identify at-risk students. In addition, the benefit of using previous state test scores in conjunction with data from a separate measure to identify at-risk students has not been explored. The diagnostic accuracy of 3 types of screening approaches were tested to predict proficiency on end-of-year high-stakes assessments: state test data obtained during the previous year, data from a different measure administered in the fall, and both measures combined (i.e., a gated model). Extant reading and math data (N = 2,996) from 10 schools in the Midwest were analyzed. When used alone, both measures yielded similar sensitivity and specificity values. The gated model yielded superior specificity values compared with using either measure alone, at the expense of sensitivity. Implications, limitations, and ideas for future research are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Koh, Vicky Y; Buhari, Shaik A; Tan, Poh Wee; Tan, Yun Inn; Leong, Yuh Fun; Earnest, Arul; Tang, Johann I
2014-06-01
Currently, there are two described methods of catheter insertion for women undergoing multicatheter interstitial accelerated partial breast irradiation (APBI): a volume-based template approach (template) and a non-template ultrasound-guided freehand approach (non-template). We aim to compare dosimetric endpoints between the template and non-template approaches. Twenty patients who received adjuvant multicatheter interstitial APBI between August 2008 and March 2010 formed the study cohort. Dosimetric planning was based on the RTOG 04-13 protocol. For standardization, the planning target volume evaluation (PTV-Eval) and organs at risk were contoured with the assistance of the attending surgeon. Dosimetric endpoints included D90 of the PTV-Eval, Dose Homogeneity Index (DHI), V200, maximum skin dose (MSD), and maximum chest wall dose (MCD). A median of 18 catheters was used per patient. The dose prescribed was 34 Gy in 10 fractions BID over 5 days. The average breast volume was 846 cm3 (range: 526-1384) for the entire cohort, with no difference between the two groups (p = 0.6). Insertion time was significantly longer for the non-template approach (mean: 150 minutes) than for the template approach (mean: 90 minutes) (p = 0.02). The planning time was also significantly longer for the non-template approach (mean: 240 minutes) than for the template approach (mean: 150 minutes) (p < 0.01). The template approach yielded a higher D90 (mean: 95%) than the non-template approach (mean: 92%) (p < 0.01). There were no differences in DHI (p = 0.14), V200 (p = 0.21), MSD (p = 0.7), or MCD (p = 0.8). Compared to the non-template approach, the template approach offered significantly shorter insertion and planning times with significantly improved dosimetric PTV-Eval coverage, without significantly compromising organs at risk dosimetrically.
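For reference, the dosimetric indices compared above are commonly defined in interstitial brachytherapy as follows (standard usage, not spelled out in the abstract): D90 is the minimum dose received by 90% of the PTV-Eval, V200 is the volume receiving at least 200% of the prescription dose, and

    \mathrm{DHI} = \frac{V_{100} - V_{150}}{V_{100}},

where V100 and V150 are the volumes receiving at least 100% and 150% of the prescription dose.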
Pervin, Lia; Islam, Md Saiful
2015-02-01
The aim of this study was to develop a system dynamics model for computing yields and to investigate the dependency of yields on major climatic parameters, i.e. temperature and rainfall, for Beta vulgaris subsp. (sugar beet crops) under future climate change scenarios. A system dynamics model was developed which takes account of the effects of rainfall and temperature on sugar beet yields under limited irrigation conditions. A relationship was also developed between the seasonal evapotranspiration and seasonal growing degree days for sugar beet crops. The proposed model was run for the present period 1993-2012 and for the future period 2013-2040 for the Lethbridge region (Alberta, Canada). The model provides sugar beet yields on a yearly basis which are comparable to present field data. It was found that the future average yield will increase by about 14% relative to the present average yield. The proposed model can help to improve the understanding of soil water conditions and irrigation water requirements of an area under given climatic conditions, and can be used for future prediction of yields for any crop in any region (provided the required information is available). The developed system dynamics model can be used as a supporting tool for decision making and for improving agricultural management practice in any region.
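A minimal sketch of two building blocks such a model might use, assuming a simple base-temperature definition of growing degree days (GDD) and a linear GDD-to-seasonal-ET link; the base temperature and regression coefficients are placeholders rather than the paper's calibrated values.

```python
# Illustrative GDD computation and an assumed linear seasonal ET relation.
import numpy as np

def growing_degree_days(t_max, t_min, t_base=5.0):
    """Season total of max(0, daily mean temperature - base temperature)."""
    t_mean = (np.asarray(t_max) + np.asarray(t_min)) / 2.0
    return np.sum(np.maximum(t_mean - t_base, 0.0))

def seasonal_et(gdd, a=0.25, b=50.0):
    """Assumed linear link between seasonal GDD and seasonal ET (mm)."""
    return a * gdd + b

rng = np.random.default_rng(1)
t_max = rng.normal(22.0, 4.0, 150)            # synthetic 150-day season
t_min = t_max - rng.uniform(5.0, 12.0, 150)
gdd = growing_degree_days(t_max, t_min)
print(f"GDD = {gdd:.0f}, estimated seasonal ET = {seasonal_et(gdd):.0f} mm")
```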
Fast cleavage of phycocyanobilin from phycocyanin for use in food colouring.
Roda-Serrat, Maria Cinta; Christensen, Knud Villy; El-Houri, Rime Bahij; Fretté, Xavier; Christensen, Lars Porskjær
2018-02-01
Phycocyanins from cyanobacteria are possible sources for new natural blue colourants. Their chromophore, phycocyanobilin (PCB), was cleaved from the apoprotein by solvolysis in alcohols and alcoholic aqueous solutions. In all cases two PCB isomers were obtained, while different solvent adducts were formed upon the use of different reagents. The reaction is believed to take place via two competing pathways, a concerted E2 elimination and an SN2 nucleophilic substitution. Three cleavage methods were compared in terms of yield and purity: conventional reflux, a sealed vessel heated in an oil bath, and microwave-assisted reaction. The sealed-vessel method is a new approach for fast cleavage of PCB from phycocyanin and, at 120 °C, gave within 30 min the same yield that the conventional reflux method required 16 h to reach (P < 0.05). In addition, the sealed-vessel method resulted in improved purity compared to the other methods. Microwave irradiation increased product degradation.
Assimilation efficiency for sediment-sorbed benzo(a)pyrene by Diporeia spp.
Lydy, M.J.; Landrum, P.F.
1993-01-01
Two methods are currently available for determining contaminant assimilation efficiencies (AE) from ingested material in benthic invertebrates. These methods were compared using the Great Lakes amphipod Diporeia spp. and [14C]benzo(a)pyrene (BaP) sorbed to Florissant sediment (< 63 µm). The first approach, the direct measurement method, uses total organic carbon as a tracer and yielded AE values ranging from 45.9 to 50.4%. The second approach, the dual-labeled method, uses 51Cr as a non-assimilated tracer and did not yield AE values for our data. The inability of the dual-labeled approach to estimate AEs was due, in part, to selective feeding by Diporeia, resulting in a failure of the non-assimilated tracer (51Cr) to track with the assimilated tracer ([14C]BaP). The failure of the dual-labeled approach was not a result of an uneven distribution of the labels among particle size classes, but more likely resulted from differential sorption of the two isotopically labeled materials to particles of differing composition. The [14C]BaP apparently sorbs to organic particles that are selectively ingested, while the 51Cr apparently sorbs to particles that are selectively excluded by Diporeia. The dual-labeled approach would be a viable and easier experimental approach for determining AE values if the characteristics that govern selective feeding could be determined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Weihua; Tran-Gyamfi, Mary Bao; Jaryenneh, James Dekontee; ...
2016-08-24
Recently the feasibility of converting algal protein to mixed alcohols has been demonstrated with an engineered E. coli strain, enabling comprehensive utilization of the biomass for biofuel applications. However, the yield and titers of mixed alcohol production must be improved for market adoption. A major limiting factor for achieving the necessary yield and titer improvements is cofactor imbalance during the fermentation of algal protein. To resolve this problem, a directed evolution approach was applied to switch the cofactor specificity of two key enzymes (IlvC and YqhD) in the mixed alcohol metabolic pathway from NADPH to NADH. Using high-throughput screening, more than 20 YqhD mutants were identified that show activity with NADH as a cofactor. The top five YqhD mutants were combined with two NADH-utilizing IlvC mutants to modify the protein conversion strain. The combination of the IlvC and YqhD mutants yielded a refined E. coli strain, subtype AY3, whose fusel alcohol yield was increased by ~60% compared to the wild type under anaerobic fermentation on amino acid mixtures. When applied to real algal protein hydrolysates, strain AY3 produced 100% and 38% more total mixed alcohols than the wild-type strain on two different hydrolysates, respectively. The results indicate that cofactor engineering is a promising approach to improving the feasibility of bioconversion of algal protein into mixed alcohols as advanced biofuels.
Flexible mini gamma camera reconstructions of extended sources using step and shoot and list mode.
Gardiazabal, José; Matthies, Philipp; Vogel, Jakob; Frisch, Benjamin; Navab, Nassir; Ziegler, Sibylle; Lasser, Tobias
2016-12-01
Hand- and robot-guided mini gamma cameras have been introduced for the acquisition of single-photon emission computed tomography (SPECT) images. Less cumbersome than whole-body scanners, they allow for fast acquisition of the radioactivity distribution, for example, to differentiate cancerous from hormonally hyperactive lesions inside the thyroid. This work compares acquisition protocols and reconstruction algorithms in an attempt to identify the most suitable approach for fast acquisition and efficient image reconstruction, suitable for localization of extended sources such as lesions inside the thyroid. Our setup consists of a mini gamma camera with precise tracking information provided by a robotic arm, which also provides reproducible positioning for our experiments. Based on a realistic phantom of the thyroid including hot and cold nodules as well as background radioactivity, the authors compare "step and shoot" (SAS) and continuous data (CD) acquisition protocols in combination with two different statistical reconstruction methods: maximum-likelihood expectation-maximization (ML-EM) for time-integrated count values and list-mode expectation-maximization (LM-EM) for individually detected gamma rays. In addition, the authors simulate lower uptake values by statistically subsampling the experimental data in order to study the behavior of their approach without changing other aspects of the acquired data. All compared methods yield suitable results, resolving the hot nodules and the cold nodule from the background. However, the CD acquisition is twice as fast as the SAS acquisition, while yielding better coverage of the thyroid phantom, resulting in qualitatively more accurate reconstructions of the isthmus between the lobes. For CD acquisitions, the LM-EM reconstruction method is preferable, as it yields image quality comparable to ML-EM at significantly higher speeds, on average by an order of magnitude. This work identifies CD acquisition protocols combined with LM-EM reconstruction as a prime candidate for the wider introduction of SPECT imaging with flexible mini gamma cameras in clinical practice.
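For readers unfamiliar with ML-EM, the following is a minimal sketch of the multiplicative update for time-integrated count data, using a tiny random system matrix in place of a real camera and geometry model; everything here is illustrative.

```python
# Minimal ML-EM iteration for emission tomography (time-integrated counts).
import numpy as np

rng = np.random.default_rng(0)
n_detectors, n_voxels = 200, 50
A = rng.random((n_detectors, n_voxels))        # system matrix (assumed known)
x_true = rng.random(n_voxels) * 5.0
y = rng.poisson(A @ x_true)                    # measured counts

x = np.ones(n_voxels)                          # nonnegative initial estimate
sens = A.T @ np.ones(n_detectors)              # per-voxel sensitivity
for _ in range(100):
    ratio = y / np.maximum(A @ x, 1e-12)       # data vs forward projection
    x *= (A.T @ ratio) / sens                  # multiplicative ML-EM update

print("relative error: %.3f" % (np.linalg.norm(x - x_true) / np.linalg.norm(x_true)))
```

List-mode EM applies the same update event by event instead of over binned counts, which is what makes it attractive for sparse, individually detected gamma rays.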
NASA Technical Reports Server (NTRS)
Mladenova, Iliana E.; Bolten, John D.; Crow, Wade T.; Anderson, Martha C.; Hain, C. R.; Johnson, David M.; Mueller, Rick
2017-01-01
This paper presents an intercomparative study of 12 operationally produced large-scale datasets describing soil moisture, evapotranspiration (ET), and/or vegetation characteristics within agricultural regions of the contiguous United States (CONUS). These datasets have been developed using a variety of techniques, including hydrologic modeling, satellite-based retrievals, data assimilation, and in-field survey data collection. The objectives are to assess the relative utility of each dataset for monitoring crop yield variability, to quantitatively assess their capacity for predicting end-of-season corn and soybean yields, and to examine the evolution of the yield-index correlations during the growing season. This analysis is unique both in the number and variety of yield predictor datasets examined and in its detailed assessment of how the timing of water availability during the growing season affects end-of-season crop production. Correlation results indicate that over CONUS, at the state level, soil moisture and ET indices can provide better information for forecasting corn and soybean yields than vegetation-based indices such as the normalized difference vegetation index (NDVI). The strength of correlation with corn and soybean yields strongly depends on the interannual variability in yield measured at a given location. In this case study, some of the remotely derived datasets examined provide skill comparable to that of in situ field-survey-based data, further demonstrating the utility of these remote sensing-based approaches for estimating crop yield.
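A hedged sketch of the core yield-index correlation analysis: Pearson correlation between each candidate predictor and detrended yields. The series below are synthetic stand-ins for the soil moisture, ET, and NDVI datasets examined.

```python
# Correlate candidate predictor indices with state-level yield anomalies.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
years = 15
yield_anom = rng.normal(0.0, 1.0, years)          # detrended yields (synthetic)
indices = {
    "soil_moisture": yield_anom * 0.8 + rng.normal(0, 0.5, years),
    "ET":            yield_anom * 0.7 + rng.normal(0, 0.6, years),
    "NDVI":          yield_anom * 0.4 + rng.normal(0, 0.9, years),
}
for name, series in indices.items():
    r, p = pearsonr(series, yield_anom)
    print(f"{name:>13}: r = {r:+.2f} (p = {p:.3f})")
```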
Arnould, Valérie M. R.; Reding, Romain; Bormann, Jeanne; Gengler, Nicolas; Soyeurt, Hélène
2015-01-01
Simple Summary Reducing the frequency of milk recording decreases the costs of official milk recording. However, this approach can negatively affect the accuracy of predicting daily yields. Equations to predict daily yield from morning or evening data were developed in this study for fatty milk components from traits recorded easily by milk recording organizations. The correlation values ranged from 96.4% to 97.6% (96.9% to 98.3%) when the daily yields were estimated from the morning (evening) milkings. The simplicity of the proposed models which do not include the milking interval should facilitate their use by breeding and milk recording organizations. Abstract Reducing the frequency of milk recording would help reduce the costs of official milk recording. However, this approach could also negatively affect the accuracy of predicting daily yields. This problem has been investigated in numerous studies. In addition, published equations take into account milking intervals (MI), and these are often not available and/or are unreliable in practice. The first objective of this study was to propose models in which the MI was replaced by a combination of data easily recorded by dairy farmers. The second objective was to further investigate the fatty acids (FA) present in milk. Equations to predict daily yield from AM or PM data were based on a calibration database containing 79,971 records related to 51 traits [milk yield (expected AM, expected PM, and expected daily); fat content (expected AM, expected PM, and expected daily); fat yield (expected AM, expected PM, and expected daily; g/day); levels of seven different FAs or FA groups (expected AM, expected PM, and expected daily; g/dL milk), and the corresponding FA yields for these seven FA types/groups (expected AM, expected PM, and expected daily; g/day)]. These equations were validated using two distinct external datasets. The results obtained from the proposed models were compared to previously published results for models which included a MI effect. The corresponding correlation values ranged from 96.4% to 97.6% when the daily yields were estimated from the AM milkings and ranged from 96.9% to 98.3% when the daily yields were estimated from the PM milkings. The simplicity of these proposed models should facilitate their use by breeding and milk recording organizations. PMID:26479379
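A minimal sketch of the kind of MI-free prediction equation proposed above: daily yield fit from the AM milking plus easily recorded covariates by ordinary least squares. The covariates and coefficients are invented for illustration; the paper's equations and trait list differ.

```python
# Fit daily milk yield from AM yield plus simple covariates, no milking interval.
import numpy as np

rng = np.random.default_rng(3)
n = 500
am_yield = rng.normal(15.0, 3.0, n)               # kg, morning milking
parity = rng.integers(1, 5, n).astype(float)      # example easily recorded trait
days_in_milk = rng.uniform(5, 305, n)
daily = 1.9 * am_yield + 0.3 * parity - 0.005 * days_in_milk + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), am_yield, parity, days_in_milk])
beta, *_ = np.linalg.lstsq(X, daily, rcond=None)  # ordinary least squares
pred = X @ beta
r = np.corrcoef(pred, daily)[0, 1]
print("fitted coefficients:", np.round(beta, 3), " correlation: %.3f" % r)
```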
Bayesian Estimation of Small Effects in Exercise and Sports Science.
Mengersen, Kerrie L; Drovandi, Christopher C; Robert, Christian P; Pyne, David B; Gore, Christopher J
2016-01-01
The aim of this paper is to provide a Bayesian formulation of the so-called magnitude-based inference approach to quantifying and interpreting effects, and in a case study example provide accurate probabilistic statements that correspond to the intended magnitude-based inferences. The model is described in the context of a published small-scale athlete study which employed a magnitude-based inference approach to compare the effect of two altitude training regimens (live high-train low (LHTL) and intermittent hypoxic exposure (IHE)) on running performance and blood measurements of elite triathletes. The posterior distributions, and corresponding point and interval estimates, for the parameters and associated effects and comparisons of interest, were estimated using Markov chain Monte Carlo simulations. The Bayesian analysis was shown to provide more direct probabilistic comparisons of treatments and was able to identify small effects of interest. The approach avoided asymptotic assumptions and overcame issues such as multiple testing. Bayesian analysis of unscaled effects showed a probability of 0.96 that LHTL yields a substantially greater increase in hemoglobin mass than IHE, a 0.93 probability of a substantially greater improvement in running economy, and a greater than 0.96 probability that both IHE and LHTL yield a substantially greater improvement in maximum blood lactate concentration compared to a placebo. The conclusions are consistent with those obtained using a 'magnitude-based inference' approach that has been promoted in the field. The paper demonstrates that a fully Bayesian analysis is a simple and effective way of analysing small effects, providing a rich set of results that are straightforward to interpret in terms of probabilistic statements.
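The Bayesian logic can be illustrated with a small Monte Carlo sketch: draw from (assumed) posterior distributions of each regimen's effect and report the probability that the difference exceeds a smallest-worthwhile threshold. All numbers below are invented, not the study's posteriors.

```python
# Posterior probability that one treatment beats another by a threshold.
import numpy as np

rng = np.random.default_rng(4)
draws = 100_000
# assumed posterior draws for the change in hemoglobin mass (%) per regimen
lhtl = rng.normal(3.0, 1.0, draws)     # live high-train low
ihe = rng.normal(0.5, 1.0, draws)      # intermittent hypoxic exposure
threshold = 1.0                        # smallest substantial difference (%)

p_substantial = np.mean(lhtl - ihe > threshold)
print(f"P(LHTL exceeds IHE by > {threshold}%) = {p_substantial:.3f}")
```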
Closing Yield Gaps: How Sustainable Can We Be?
Pradhan, Prajal; Fischer, Günther; van Velthuizen, Harrij; Reusser, Dominik E; Kropp, Juergen P
2015-01-01
Global food production needs to be increased by 60-110% between 2005 and 2050 to meet growing food and feed demand. Intensification and/or expansion of agriculture are the two main options available to meet the growing crop demands. Land conversion to expand cultivated land increases GHG emissions and impacts biodiversity and ecosystem services. Closing yield gaps to attain potential yields may be a viable option to increase the global crop production. Traditional methods of agricultural intensification often have negative externalities. Therefore, there is a need to explore location-specific methods of sustainable agricultural intensification. We identified regions where the achievement of potential crop calorie production on currently cultivated land will meet the present and future food demand based on scenario analyses considering population growth and changes in dietary habits. By closing yield gaps in the current irrigated and rain-fed cultivated land, about 24% and 80% more crop calories can respectively be produced compared to 2000. Most countries will reach food self-sufficiency or improve their current food self-sufficiency levels if potential crop production levels are achieved. As a novel approach, we defined specific input and agricultural management strategies required to achieve the potential production by overcoming biophysical and socioeconomic constraints causing yield gaps. The management strategies include: fertilizers, pesticides, advanced soil management, land improvement, management strategies coping with weather induced yield variability, and improving market accessibility. Finally, we estimated the required fertilizers (N, P2O5, and K2O) to attain the potential yields. Globally, N-fertilizer application needs to increase by 45-73%, P2O5-fertilizer by 22-46%, and K2O-fertilizer by 2-3 times compared to the year 2010 to attain potential crop production. The sustainability of such agricultural intensification largely depends on the way management strategies for closing yield gaps are chosen and implemented.
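A back-of-envelope sketch of the calorie accounting above, combining the reported 24% (irrigated) and 80% (rain-fed) potential gains under an assumed split of current calorie production between the two land types; the 40% irrigated share is hypothetical, not the paper's figure.

```python
# Weighted combination of yield-gap closure gains (illustrative shares).
irrigated_share = 0.40      # assumed share of current crop calories
rainfed_share = 1.0 - irrigated_share
gain = irrigated_share * 0.24 + rainfed_share * 0.80
print(f"total potential calorie gain vs 2000: {gain:.0%}")
```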
Harper, Kathryn A; Meiklejohn, Kelly A; Merritt, Richard T; Walker, Jessica; Fisher, Constance L; Robertson, James M
2018-02-01
Hairs are commonly submitted as evidence to forensic laboratories, but standard nuclear DNA analysis is not always possible. Mitochondria (mt) provide another source of genetic material; however, manual isolation is laborious. In a proof-of-concept study, we assessed pressure cycling technology (PCT; an automated approach that subjects samples to varying cycles of high and low pressure) for extracting mtDNA from single, short hairs without roots. Using three microscopically similar donors, we determined the ideal PCT conditions and compared those yields to those obtained using the traditional manual micro-tissue grinder method. Higher yields were recovered from grinder extracts, but yields from PCT extracts exceeded the requirements for forensic analysis, with the DNA quality confirmed through sequencing. Automated extraction of mtDNA from hairs without roots using PCT could be useful for forensic laboratories processing numerous samples.
Deep Drawing Simulations With Different Polycrystalline Models
NASA Astrophysics Data System (ADS)
Duchêne, Laurent; de Montleau, Pierre; Bouvier, Salima; Habraken, Anne Marie
2004-06-01
The goal of this research is to study anisotropic material behavior during forming processes, represented by both complex yield loci and kinematic-isotropic hardening models. The first part of this paper describes the main concepts of the 'Stress-strain interpolation' model that has been implemented in the non-linear finite element code Lagamine. This model consists of a local description of the yield locus based on the texture of the material through the full-constraints Taylor model. The texture evolution due to plastic deformation is computed throughout the FEM simulations. This 'local yield locus' approach was initially linked to the classical isotropic Swift hardening law. Recently, a more complex hardening model was implemented: the physically based microstructural model of Teodosiu. It takes into account intergranular heterogeneity due to the evolution of dislocation structures, which affects isotropic and kinematic hardening. The influence of the hardening model is compared to the influence of texture evolution by means of deep drawing simulations.
Crop weather models of barley and spring wheat yield for agrophysical units in North Dakota
NASA Technical Reports Server (NTRS)
Leduc, S. (Principal Investigator)
1982-01-01
Models based on multiple regression were developed to estimate barley yield and spring wheat yield from weather data for Agrophysical Units (APU) in North Dakota. The predictor variables are derived from monthly average temperature and monthly total precipitation data at meteorological stations in the cooperative network. The models are similar in form to the previous models developed for Crop Reporting Districts (CRD). The trends and derived variables were the same, and the approach to selecting the significant predictors was similar to that used in developing the CRD models. The APU models show slight improvements in some of the model statistics, e.g., explained variation. These models are to be independently evaluated and compared to the previously evaluated CRD models. The comparison will indicate the preferred model area for this application, i.e., APU or CRD.
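A hedged sketch of the regression form described: a linear trend plus terms from monthly temperature and precipitation fit by least squares. Predictor choice and coefficients are placeholders, not the report's models.

```python
# Trend + monthly-weather multiple regression for crop yield (synthetic).
import numpy as np

rng = np.random.default_rng(5)
years = np.arange(1950, 1981)
temp_jul = rng.normal(21.0, 1.5, years.size)      # July mean temperature (deg C)
precip_jun = rng.gamma(4.0, 15.0, years.size)     # June total precipitation (mm)
yield_q = 12 + 0.15 * (years - years[0]) - 0.6 * temp_jul + 0.03 * precip_jun \
          + rng.normal(0, 1.0, years.size)        # quintals/ha, synthetic

X = np.column_stack([np.ones(years.size), years - years[0], temp_jul, precip_jun])
beta, *_ = np.linalg.lstsq(X, yield_q, rcond=None)
resid = yield_q - X @ beta
r2 = 1 - resid.var() / yield_q.var()
print("coefficients:", np.round(beta, 3), " explained variation R^2 = %.2f" % r2)
```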
NASA Astrophysics Data System (ADS)
Chahbi, Aicha; Zribi, Mehrez; Lili-Chabaane, Zohra; Mougenot, Bernard
2015-10-01
In semi-arid areas, an operational grain yield forecasting system could help decision-makers plan annual imports. Monitoring the crop canopy and the production capacity of plants, especially cereals, is challenging. Many models, based on remote sensing or agro-meteorological approaches, have been developed to estimate the biomass and grain yield of cereals. Remote sensing has demonstrated strong potential for monitoring vegetation dynamics and temporal variations. Using a rich database acquired over two years for more than 60 test fields, together with 20 optical SPOT/HRV satellite images, the present study evaluates the feasibility of two approaches for estimating the dynamics and yields of cereals in semi-arid, low-productivity regions of North Africa. The first approach applies the semi-empirical growth model SAFY ("Simple Algorithm For Yield estimation"), developed to simulate the dynamics of the leaf area index (LAI) and the grain yield at the field scale. The model reproduces the time evolution of the LAI for all fields, but the yields are under-estimated. We therefore developed a new approach to improve the SAFY model, in which grain yield is a function of the LAI integrated over the growth period between 25 March and 5 April. This approach is robust: measured and estimated grain yields are well correlated. Finally, the model is used in combination with remotely sensed LAI measurements to estimate yield for the entire study site.
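A minimal sketch of the improved yield relation, assuming yield is linear in the LAI integrated over the late-March/early-April window; the coefficients and the synthetic LAI curve are illustrative, whereas the paper calibrates this relation against the monitored fields.

```python
# Integrate LAI over a fixed window and map it linearly to grain yield.
import numpy as np

def window_lai_area(doy, lai, start, end):
    """Trapezoidal integral of the LAI series inside [start, end] (day of year)."""
    doy, lai = np.asarray(doy, float), np.asarray(lai, float)
    mask = (doy >= start) & (doy <= end)
    return np.trapz(lai[mask], doy[mask])

doy = np.arange(1, 181)                              # synthetic daily sampling
lai = 3.5 * np.exp(-0.5 * ((doy - 95) / 25.0) ** 2)  # LAI curve peaking in spring
area = window_lai_area(doy, lai, start=84, end=95)   # ~25 March to 5 April
yield_t_ha = 0.4 + 0.08 * area                       # assumed linear relation
print(f"LAI area = {area:.1f}, estimated grain yield = {yield_t_ha:.2f} t/ha")
```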
Internal and external axial corner flows
NASA Technical Reports Server (NTRS)
Kutler, P.; Shankar, V.; Anderson, D. A.; Sorenson, R. L.
1975-01-01
The inviscid, internal, and external axial corner flows generated by two intersecting wedges traveling supersonically are obtained by use of a second-order shock-capturing, finite-difference approach. The governing equations are solved iteratively in conical coordinates to yield the complicated wave structure of the internal corner and the simple peripheral shock of the external corner. The numerical results for the internal flows compare favorably with existing experimental data.
Cheruvallath, Zacharia S; Kumar, R Krishna; Rentel, Claus; Cole, Douglas L; Ravikumar, Vasulinga T
2003-04-01
Diethyldithiodicarbonate (DDD), a cheap and easily prepared compound, is found to be a rapid and efficient sulfurizing reagent in the solid-phase synthesis of phosphorothioate oligodeoxyribonucleotides via the phosphoramidite approach. Product yield and quality based on IP-LC-MS compare well with high-quality oligonucleotides synthesized using phenylacetyl disulfide (PADS), which is used for the manufacture of our antisense drugs.
Robert M. Frank; Barton M. Blum
1978-01-01
Early results after 20 years of record keeping indicate that spruce-fir stands will respond to the selection system of silviculture. Stand quality is improved, species composition can be altered, diameter-class distribution approaches a stated goal, stand density is controlled, and yields are increased. Selection silviculture in spruce-fir can now be compared to early...
Bianca N. I. Eskelson; Hailemariam Temesgen; Tara M. Barrett
2008-01-01
Many growth and yield simulators require a stand table or tree-list to set the initial condition for projections in time. Most similar neighbour (MSN) approaches can be used for estimating stand tables from information commonly available on forest cover maps (e.g. height, volume, canopy cover, and species composition). Simulations were used to compare MSN (using an...
Whole-Genome Characterization of Prunus necrotic ringspot virus Infecting Sweet Cherry in China.
Wang, Jiawei; Zhai, Ying; Zhu, Dongzi; Liu, Weizhen; Pappu, Hanu R; Liu, Qingzhong
2018-03-01
Prunus necrotic ringspot virus (PNRSV) causes yield loss in most cultivated stone fruits, including sweet cherry. Using a small RNA deep-sequencing approach combined with end-genome sequence cloning, we identified the complete genomes of all three PNRSV strands from PNRSV-infected sweet cherry trees and compared them with those of two previously reported isolates.
Λ_b → p, Λ transition form factors from QCD light-cone sum rules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Yuming; Lue Caidian; Shen Yuelong
2009-10-01
Light-cone sum rules for the Λ_b → p, Λ transition form factors are derived from correlation functions expanded by the twist of the distribution amplitudes of the Λ_b baryon. In terms of the Λ_b three-quark distribution amplitude models constrained by QCD theory, we calculate the form factors at small momentum transfers and compare the results with those estimated in the conventional light-cone sum rules (LCSR) and perturbative QCD approaches. Our results indicate that the two different versions of sum rules can lead to consistent values of the form factors responsible for the Λ_b → p transition. The Λ_b → Λ transition form factors from LCSR with the asymptotic Λ baryon distribution amplitudes are found to be almost an order of magnitude larger than those obtained in the Λ_b-baryon LCSR, implying that the preasymptotic corrections to the baryonic distribution amplitudes are of great importance. Moreover, the SU(3) symmetry-breaking effects between the form factors f_1(Λ_b → p) and f_1(Λ_b → Λ) are computed as 28 (+14/−8)% in the framework of the Λ_b-baryon LCSR.
Biogas and methane yield in response to co- and separate digestion of biomass wastes.
Adelard, Laetitia; Poulsen, Tjalfe G; Rakotoniaina, Volana
2015-01-01
The impact of co-digestion, as opposed to separate digestion, on biogas and methane yield (apparent synergetic effects) was investigated for three biomass materials (pig manure, cow manure, and food waste) under mesophilic conditions over a 36-day period. In addition to the three biomass materials (digested separately), 13 biomass mixtures (co-digested) were used. Two approaches for modelling biogas and methane yield during co-digestion, based on volatile solids concentration and ultimate gas and methane potentials, were evaluated. The dependency of apparent synergetic effects on digestion time and biomass mixture composition was further assessed using measured cumulative biogas and methane yields and specific biogas and methane generation rates. Results indicated that it is possible, based on the known volatile solids concentration and ultimate biogas or methane yields of a set of biomass materials digested separately, to accurately estimate gas yields for mixtures of these materials using calibrated models. For the biomass materials considered here, modelling indicated that the addition of pig manure is the main cause of synergetic effects. Co-digestion generally resulted in improved ultimate biogas and methane yields compared to separate digestion. Biogas and methane production was furthermore significantly higher early (0-7 days) and, to some degree, also late (beyond 20 days) in the digestion process during co-digestion.
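The additive mixture-yield idea can be sketched as follows: predict the blend's ultimate methane yield as a volatile-solids-weighted sum of the separately digested substrates' yields, so that any measured excess over the prediction is the apparent synergy. All values below are assumed, not the paper's measurements.

```python
# VS-weighted additive prediction of co-digestion methane yield.
import numpy as np

vs_fraction = np.array([0.5, 0.3, 0.2])   # VS share: pig manure, cow manure, food waste
b0 = np.array([350.0, 230.0, 480.0])      # ultimate CH4 yield per substrate (mL/g VS), assumed

predicted = float(vs_fraction @ b0)       # additive (no-synergy) prediction
measured = 390.0                          # hypothetical co-digestion measurement
synergy = (measured - predicted) / predicted
print(f"additive prediction = {predicted:.0f} mL CH4/g VS, apparent synergy = {synergy:+.1%}")
```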
Midtvedt, Daniel; Croy, Alexander
2016-06-10
We compare the simplified valence-force model for single-layer black phosphorus with the original model and recent ab initio results. Using an analytic approach and numerical calculations, we find that the simplified model yields Young's moduli that are smaller than those of the original model and almost a factor of two smaller than ab initio results. Moreover, the Poisson ratios are an order of magnitude smaller than values found in the literature.
Design and Synthesis of Novel Arylketo-containing P1-P3 Linked Macro-cyclic BACE-1 Inhibitors
Sandgren, Veronica; Belda, Oscar; Kvarnström, Ingemar; Lindberg, Jimmy; Samuelsson, Bertil; Dahlgren, Anders
2015-01-01
A series of arylketo-containing P1-P3 linked macrocyclic BACE-1 inhibitors were designed, synthesized, and compared with compounds bearing a previously known and extensively studied P2 isophthalamide moiety, with the aim of improving permeability while retaining the enzyme- and cell-based activities. Several inhibitors displayed substantial increases in Caco-2 cell-based permeability compared to earlier synthesized inhibitors, notably with retained activities, showing that this approach might yield BACE-1 inhibitors with improved properties. PMID:25937848
A Comparison of Machine Learning Approaches for Corn Yield Estimation
NASA Astrophysics Data System (ADS)
Kim, N.; Lee, Y. W.
2017-12-01
Machine learning is an efficient empirical method for classification and prediction, and it is another approach to crop yield estimation. The objective of this study is to estimate corn yield in the Midwestern United States by employing machine learning approaches such as the support vector machine (SVM), random forest (RF), and deep neural networks (DNN), and to perform a comprehensive comparison of their results. We constructed the database using satellite images from MODIS, climate data from the PRISM climate group, and GLDAS soil moisture data. In addition, to examine the seasonal sensitivities of corn yields, two period groups were set up: May to September (MJJAS) and July and August (JA). Overall, the DNN showed the highest accuracies in terms of the correlation coefficient for the two period groups. The differences between our predictions and USDA yield statistics were about 10-11%.
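A hedged sketch of such a model comparison, using scikit-learn stand-ins (SVR and random forest) on synthetic features in place of the MODIS/PRISM/GLDAS database; the paper's DNN is omitted for brevity.

```python
# Compare two regressors for yield estimation on synthetic features.
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 400
X = rng.normal(size=(n, 6))                 # e.g. NDVI, LST, precip, soil moisture...
y = X @ np.array([1.2, -0.8, 0.5, 0.9, 0.0, 0.3]) + rng.normal(0, 0.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
for name, model in [("SVM", SVR(C=10.0)), ("RF", RandomForestRegressor(random_state=0))]:
    model.fit(X_tr, y_tr)
    r = np.corrcoef(model.predict(X_te), y_te)[0, 1]
    print(f"{name}: correlation = {r:.3f}")
```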
Tracing the evolutionary path to nitrogen-fixing crops.
Delaux, Pierre-Marc; Radhakrishnan, Guru; Oldroyd, Giles
2015-08-01
Nitrogen-fixing symbioses between plants and bacteria are restricted to a few plant lineages. The plant partner benefits from these associations by gaining access to the pool of atmospheric nitrogen. By contrast, other plant species, including all cereals, rely only on the scarce nitrogen present in the soil and what they can glean from associative bacteria. Global cereal yields from conventional agriculture are dependent on the application of massive levels of chemical fertilisers. Engineering nitrogen-fixing symbioses into cereal crops could in part mitigate the economic and ecological impacts caused by the overuse of fertilisers and provide better global parity in crop yields. Comparative phylogenetics and phylogenomics are powerful tools to identify genetic and genomic innovations behind key plant traits. In this review we highlight recent discoveries made using such approaches and we discuss how these approaches could be used to help direct the engineering of nitrogen-fixing symbioses into cereals.
Li, Ning; Wang, Hengwei; Li, Lijuan; Cheng, Huiling; Liu, Dawen; Cheng, Hairong; Deng, Zixin
2016-08-10
An alternative strategy that integrated enzyme production, trehalose biotransformation, and bioremoval in one bioreactor was developed in this study, thus simplifying the traditional procedures used for trehalose production. The trehalose synthase gene from a thermophilic archaea, Picrophilus torridus, was first fused to the YlPir1 anchor gene and then inserted into the genome of Yarrowia lipolytica, thus yielding an engineered yeast strain. The trehalose yield reached 73% under optimal conditions. The thermal and pH stabilities of the displayed enzyme were improved compared to those of its free form purified from recombinant Escherichia coli. After biotransformation, the glucose byproduct and residual maltose were directly fermented to ethanol by a Saccharomyces cerevisiae strain. Ethanol can be separated by distillation, and high-purity trehalose can easily be obtained from the fermentation broth. The results show that this one-pot procedure is an efficient approach to the economical production of trehalose from maltose.
Beyond Born-Mayer: Improved models for short-range repulsion in ab initio force fields
Van Vleet, Mary J.; Misquitta, Alston J.; Stone, Anthony J.; ...
2016-06-23
Short-range repulsion within inter-molecular force fields is conventionally described by either Lennard-Jones or Born-Mayer forms. Despite their widespread use, these simple functional forms are often unable to describe the interaction energy accurately over a broad range of inter-molecular distances, thus creating challenges in the development of ab initio force fields and potentially leading to decreased accuracy and transferability. Herein, we derive a novel short-range functional form based on a simple Slater-like model of overlapping atomic densities and an iterated stockholder atom (ISA) partitioning of the molecular electron density. We demonstrate that this Slater-ISA methodology yields a more accurate, transferable, and robust description of the short-range interactions at minimal additional computational cost compared to standard Lennard-Jones or Born-Mayer approaches. Lastly, we show how this methodology can be adapted to yield the standard Born-Mayer functional form while still retaining many of the advantages of the Slater-ISA approach.
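To illustrate the contrast between the two functional forms, the sketch below evaluates a Born-Mayer repulsion against a Slater-like form carrying the polynomial prefactor that arises from the overlap of two exponential (Slater) atomic densities; A and B are arbitrary illustrative constants, and this is not the paper's fitted potential.

```python
# Born-Mayer vs Slater-like short-range repulsion (illustrative constants).
import numpy as np

A, B = 1000.0, 3.0                       # prefactor and decay rate, arbitrary units

def born_mayer(r):
    return A * np.exp(-B * r)

def slater_overlap(r):
    # polynomial prefactor from overlapping exponential densities (assumed form)
    x = B * r
    return A * (x * x / 3.0 + x + 1.0) * np.exp(-x)

for r in (2.0, 3.0, 4.0, 5.0):
    print(f"r = {r:.1f}: Born-Mayer = {born_mayer(r):8.3f}, "
          f"Slater-like = {slater_overlap(r):8.3f}")
```

The polynomial prefactor softens the repulsion at short range relative to a pure exponential fit through the same points, which is one reason the Slater-like form can track the interaction energy over a wider range of separations.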
Systematic engineering of pentose phosphate pathway improves Escherichia coli succinate production.
Tan, Zaigao; Chen, Jing; Zhang, Xueli
2016-01-01
Succinate biosynthesis in Escherichia coli is reducing-equivalent-dependent, and the EMP pathway serves as the primary source of reducing equivalents under anaerobic conditions. Compared with EMP, the pentose phosphate pathway (PPP) is reducing-equivalent-conserving but suffers from low efficacy. In this study, a ribosome binding site library and modified multivariate modular metabolic engineering (MMME) approaches were employed to overcome the low efficacy of the PPP and thus increase succinate production. Altering the expression levels of different PPP enzymes has distinct effects on succinate production. Specifically, increased expression of five enzymes, i.e., Zwf, Pgl, Gnd, Tkt, and Tal, contributes to increased succinate production, while increased expression of two enzymes, i.e., Rpe and Rpi, significantly decreases it. A modular engineering strategy was employed to decompose the PPP into three modules according to position and function. Engineering of the Zwf/Pgl/Gnd and Tkt/Tal modules effectively increased succinate yield and production, while engineering of the Rpe/Rpi module decreased them. The imbalance of enzymatic reactions in the PPP was alleviated using the MMME approach. Finally, combinational utilization of the engineered PPP and SthA transhydrogenase raised the succinate yield to 1.61 mol/mol glucose, which is 94% of the theoretical maximum yield (1.71 mol/mol) and, to our knowledge, the highest succinate yield in minimal medium. In summary, we systematically engineered the PPP to improve the supply of reducing equivalents and thus succinate production. Besides succinate, these PPP engineering strategies and conclusions are also applicable to the production of other reducing-equivalent-dependent biorenewables.
Comparison of holographic lens and filter systems for lateral spectrum splitting
NASA Astrophysics Data System (ADS)
Vorndran, Shelby; Chrysler, Benjamin; Kostuk, Raymond K.
2016-09-01
Spectrum splitting is an approach to increasing the conversion efficiency of a photovoltaic (PV) system. Several methods can be used to perform this function, which requires efficient spatial separation of different spectral bands of the incident solar radiation. In this paper, several holographic methods for implementing spectrum splitting are reviewed, along with the benefits and disadvantages associated with each approach. The review indicates that a volume holographic lens has many advantages for spectrum splitting in terms of both power conversion efficiency and energy yield. A specific design for a volume holographic spectrum-splitting lens is discussed for use with high-bandgap InGaP and low-bandgap silicon PV cells. The holographic lenses are modeled using rigorous coupled wave analysis, and the optical efficiency is evaluated using non-sequential raytracing. A proof-of-concept off-axis holographic lens is also recorded in dichromated gelatin film, and the spectral diffraction efficiency of the hologram is measured with multiple laser sources across the diffracted spectral band. The experimental volume holographic lens (VHL) characteristics are compared to an ideal spectrum-splitting filter in terms of power conversion efficiency and energy yield in environments with high direct normal incidence (DNI) illumination and high levels of diffuse illumination. The results show that the experimental VHL can achieve 62.5% of the ideal filter's power conversion efficiency, 64.8% of its energy yield in a DNI environment, and 57.7% of its energy yield in a diffuse environment.
Urban land use: Remote sensing of ground-basin permeability
NASA Technical Reports Server (NTRS)
Tinney, L. R.; Jensen, J. R.; Estes, J. E.
1975-01-01
A remote sensing analysis of the amount and type of permeable and impermeable surfaces overlying an urban recharge basin is discussed. An effective methodology for accurately generating this data as input to a safe yield study is detailed and compared to more conventional alternative approaches. The amount of area inventoried, approximately 10 sq. miles, should provide a reliable base against which automatic pattern recognition algorithms, currently under investigation for this task, can be evaluated. If successful, such approaches can significantly reduce the time and effort involved in obtaining permeability data, an important aspect of urban hydrology dynamics.
A Genetic Algorithm and Fuzzy Logic Approach for Video Shot Boundary Detection
Thounaojam, Dalton Meitei; Khelchandra, Thongam; Singh, Kh. Manglem; Roy, Sudipta
2016-01-01
This paper proposes a shot boundary detection approach using a genetic algorithm (GA) and fuzzy logic. The membership functions of the fuzzy system are calculated using the GA from pre-observed actual values for shot boundaries. The classification of the types of shot transitions is done by the fuzzy system. Experimental results show that the accuracy of shot boundary detection increases with the number of iterations or generations of the GA optimization process. The proposed system is compared with the latest techniques and yields better results in terms of the F1-score. PMID:27127500
Automated Purification of Recombinant Proteins: Combining High-throughput with High Yield
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Chiann Tso; Moore, Priscilla A.; Auberry, Deanna L.
2006-05-01
Protein crystallography, mapping of protein interactions, and other current functional genomics approaches require not only purifying large numbers of proteins but also obtaining sufficient yield and homogeneity for downstream high-throughput applications. Robust automated high-throughput protein expression and purification processes are needed to meet these requirements. We developed and compared two alternative workflows for automated purification of recombinant proteins based on expression of bacterial genes in Escherichia coli: first, a filtration-based separation protocol using 800 ml E. coli cultures followed by purification on Ni2+-NTA Agarose (Qiagen); second, a smaller-scale magnetic separation method using 25 ml E. coli cultures followed by 96-well purification on MagneHis Ni2+ Agarose (Promega). Both workflows provided comparable average yields of about 8 µg of purified protein per unit of OD600 of bacterial culture. We discuss the advantages and limitations of the automated workflows, which can provide proteins of more than 90% purity in quantities from 100 µg to 45 mg per purification run, as well as strategies for optimization of these protocols.
NASA Astrophysics Data System (ADS)
Meng, Qingfeng; Wang, Hongfei; Yan, Peng; Pan, Junxiao; Lu, Dianjun; Cui, Zhenling; Zhang, Fusuo; Chen, Xinping
2017-02-01
The food supply is being increasingly challenged by climate change and water scarcity. However, incremental changes in traditional cropping systems have achieved only limited success in meeting these multiple challenges. In this study, we applied a systematic approach, using model simulation and data from two groups of field studies conducted in the North China Plain, to develop a new cropping system that improves yield and uses water in a sustainable manner. In response to significant warming, we identified a double-maize (M-M; Zea mays L.) cropping system to replace the traditional winter wheat (Triticum aestivum L.)-summer maize system. The M-M system improved yield by 14-31% compared with the conventionally managed wheat-maize system, and achieved similar yield compared with an incrementally adapted wheat-maize system with optimized cultivars, planting dates, planting density, and water management. More importantly, water usage was lower in the M-M system than in the wheat-maize system, and the rate of water usage was sustainable (net groundwater usage ≤150 mm yr⁻¹). Our study indicates that systematic assessment of adaptation at the cropping-system scale has great potential to address the multiple food supply challenges under changing climatic conditions.
A continuum dislocation dynamics framework for plasticity of polycrystalline materials
NASA Astrophysics Data System (ADS)
Askari, Hesam Aldin
The objective of this research is to investigate the mechanical response of polycrystals in different settings in order to identify the mechanisms that give rise to the specific responses observed during deformation. In particular, the large deformation of magnesium alloys and the yield properties of copper at small scales are investigated. We develop a continuum dislocation dynamics framework based on dislocation mechanisms and interaction laws and implement this formulation in a viscoplastic self-consistent scheme to obtain the mechanical response of a polycrystalline system. The versatility of this method allows various applications: the study of problems involving large deformation, of microstructure and its evolution, of superplasticity, of size effects in polycrystals, and of stochastic plasticity. The findings from the numerical solution are compared to experimental results to validate the simulations. We apply this framework to study the deformation mechanisms in magnesium alloys at moderate to fast strain rates and from room temperature to 450 °C. Experiments over the same range of strain rates and temperatures were carried out to obtain the mechanical and material properties and to compare with the numerical results. The numerical approach for magnesium is divided into four main steps: 1) room-temperature unidirectional loading; 2) high-temperature deformation without grain boundary sliding; 3) high-temperature deformation with grain boundary sliding; 4) room-temperature cyclic loading. We demonstrate the capability of our modeling approach in predicting mechanical properties and texture evolution and discuss the improvement obtained by using the continuum dislocation dynamics method. The framework was also applied to nano-sized copper polycrystals to study yield properties at small scales and address the observed yield scatter. By combining our method with a Monte Carlo simulation approach, stochastic plasticity at small length scales was studied, and the sources of uncertainty in the polycrystalline structure are discussed. Our results suggest that the stochastic response arises mainly from a) stochastic plasticity due to the dislocation substructure inside crystals and b) the microstructure of the polycrystalline material. The extent of the uncertainty is correlated with the "effective cell length" in the sampling procedure, whether using simulations or the experimental approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurz, C; LMU Munich, Munich; Park, Y
2016-06-15
Purpose: To enable adaptive intensity-modulated proton therapy for sites sensitive to inter-fractional changes on the basis of accurate CBCT-based proton dose calculations. To this aim, two CBCT intensity correction methods are considered: planning CT (pCT) to CBCT deformable image registration (DIR), and projection correction based on a pCT DIR prior. Methods: 3 H&N and 3 prostate cancer patients with CBCT images and corresponding projections were used in this study, in addition to pCT and re-planning CT (rpCT) images (H&N only). A virtual CT (vCT) was generated by pCT-to-CBCT DIR. In a second approach, the vCT was used as prior for scatter correction of the CBCT projections to yield a CBCTcor image. BEV 2D range maps of SFUD IMPT plans were compared. For the prostate cases, the geometric accuracy of the vCT was also evaluated by contour comparison to physician delineation of the CBCTcor and original CBCT. Results: SFUD dose calculations on vCT and CBCTcor were found to be within 3 mm for 97% to 99% of 2D range maps. Median range differences compared to rpCT were below 0.5 mm. Analysis showed that the DIR-based vCT approach exhibits inaccuracies in the pelvic region due to the very low soft-tissue contrast of the CBCT. The CBCTcor approach yielded results closer to the original CBCT in terms of DICE coefficients than the vCT (median 0.91 vs 0.81) for targets and OARs. In general, the CBCTcor approach was less affected by inaccuracies of the DIR used during the generation of the vCT prior. Conclusion: Both techniques yield 3D CBCT images with intensities equivalent to diagnostic CT and appear suitable for IMPT dose calculation for most sites. For the H&N cases, no considerable differences between the two techniques were found, while improved results of the CBCTcor were observed for the pelvic cases due to the reduced sensitivity to registration inaccuracies. Funding: Deutsche Forschungsgemeinschaft (MAP); Bundesministerium für Bildung und Forschung (01IB13001).
Numerical solutions of the complete Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Hassan, H. A.
1993-01-01
The objective of this study is to compare the use of assumed pdf (probability density function) approaches for modeling supersonic turbulent reacting flowfields with the more elaborate approach in which the pdf evolution equation is solved. Assumed pdf approaches for averaging the chemical source terms require modest increases in CPU time, typically of the order of 20 percent above treating the source terms as 'laminar.' However, it is difficult to assume a form for these pdfs a priori that correctly mimics the behavior of the actual pdf governing the flow. Solving the evolution equation for the pdf is a theoretically sound approach, but because of the large dimensionality of this function, its solution requires a Monte Carlo method, which is computationally expensive and slow to converge. Preliminary results show both pdf approaches yield similar solutions for the mean flow variables.
Classification of EEG Signals Based on Pattern Recognition Approach.
Amin, Hafeez Ullah; Mumtaz, Wajid; Subhani, Ahmad Rauf; Saad, Mohamad Naufal Mohamad; Malik, Aamir Saeed
2017-01-01
Feature extraction is an important step in the process of electroencephalogram (EEG) signal classification. The authors propose a "pattern recognition" approach that discriminates EEG signals recorded during different cognitive conditions. Wavelet-based features such as multi-resolution decompositions into detailed and approximate coefficients as well as relative wavelet energy were computed. Extracted relative wavelet energy features were normalized to zero mean and unit variance and then optimized using Fisher's discriminant ratio (FDR) and principal component analysis (PCA). A high-density (128-channel) EEG dataset was used to validate the proposed method on two classifications: (1) EEG signals recorded during complex cognitive tasks using Raven's Advanced Progressive Matrices (RAPM) test; (2) EEG signals recorded during a baseline task (eyes open). Classifiers such as K-nearest neighbors (KNN), Support Vector Machine (SVM), Multi-layer Perceptron (MLP), and Naïve Bayes (NB) were then employed. Outcomes yielded 99.11% accuracy via the SVM classifier for the approximation coefficients (A5) of low frequencies ranging from 0 to 3.90 Hz. Accuracy rates for the detailed coefficients (D5), derived from the 3.90-7.81 Hz sub-band, were 98.57% and 98.39% for SVM and KNN, respectively. Accuracy rates for the MLP and NB classifiers were comparable at 97.11-89.63% and 91.60-81.07% for the A5 and D5 coefficients, respectively. In addition, the proposed approach was also applied to a public dataset for classification of two cognitive tasks and achieved comparable classification results, i.e., 93.33% accuracy with KNN. The proposed scheme yielded significantly higher classification performance with machine learning classifiers compared to extant quantitative feature extraction methods. These results suggest the proposed feature extraction method reliably classifies EEG signals recorded during cognitive tasks with a higher degree of accuracy.
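A minimal sketch of this pipeline: compute 5-level relative wavelet energies with PyWavelets and cross-validate an SVM on synthetic two-class signals. The real study used 128-channel EEG and additional FDR/PCA feature selection, both omitted here.

```python
# Relative wavelet energy features + SVM classification (synthetic signals).
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)

def relative_wavelet_energy(signal, wavelet="db4", level=5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)   # [A5, D5, ..., D1]
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

# two synthetic classes: slow-oscillation-dominated vs faster-oscillation "EEG"
t = np.linspace(0, 4, 512)
X, y = [], []
for label, f in [(0, 2.0), (1, 10.0)]:
    for _ in range(60):
        sig = np.sin(2 * np.pi * f * t) + 0.5 * rng.normal(size=t.size)
        X.append(relative_wavelet_energy(sig))
        y.append(label)

scores = cross_val_score(SVC(kernel="rbf"), np.array(X), np.array(y), cv=5)
print("SVM cross-validated accuracy: %.2f" % scores.mean())
```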
A comparison of the left thoracoabdominal and Ivor-Lewis esophagectomy.
Davies, A R; Zylstra, J; Baker, C R; Gossage, J A; Dellaportas, D; Lagergren, J; Findlay, J M; Puccetti, F; El Lakis, M; Drummond, R J; Dutta, S; Mera, A; Van Hemelrijck, M; Forshaw, M J; Maynard, N D; Allum, W H; Low, D; Mason, R C
2018-03-01
The purpose of this study was to assess the oncological outcomes of a large multicenter series of left thoracoabdominal esophagectomies (LTE) and compare these with the more widely utilized Ivor-Lewis esophagectomy (ILE). With ethics approval and an established study protocol, anonymized data from five centers were merged into a structured database. The study exposure was operative approach (ILE or LTE). The primary outcome measure was time to death. Secondary outcome measures included time to tumor recurrence, positive surgical resection margins, lymph node yield, postoperative death, and hospital length of stay. Cox proportional hazards models provided hazard ratios (HR) with 95% confidence intervals (CI), adjusting for age, pathological tumor stage, tumor grade, lymphovascular invasion, and neoadjuvant treatment. Among 1228 patients (598 ILE; 630 LTE), most (86%) had adenocarcinoma (AC) and were male (81%). Comparing ILE and LTE for AC patients, no difference was seen in terms of time to death (HR 0.904, 95%CI 0.749-1.1090) or time to recurrence (HR 0.973, 95%CI 0.768-1.232). The risk of a positive resection margin was also similar (OR 1.022, 95%CI 0.731-1.429). Median lymph node yield did not differ between approaches (LTE 21; ILE 21; P = 0.426). In-hospital mortality was 2.4%, significantly lower in the LTE group (LTE 1.3%; ILE 3.6%; P = 0.004). Median hospital stay was 11 days in the LTE group and 14 days in the ILE group (P < 0.0001). In conclusion, this is the largest series of left thoracoabdominal esophagectomies to be submitted for publication and the only one to compare two different transthoracic esophagectomy strategies. It demonstrates oncological equivalence between operative approaches but possible short-term advantages to the left thoracoabdominal esophagectomy.
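A hedged sketch of the adjusted survival comparison, using the lifelines library on synthetic data with operative approach and age as example covariates; the study additionally adjusted for stage, grade, lymphovascular invasion, and neoadjuvant therapy.

```python
# Cox proportional hazards model: time to death vs operative approach.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(8)
n = 600
lte = rng.integers(0, 2, n)                             # 1 = left thoracoabdominal
age = rng.normal(65, 8, n)
hazard = 0.02 * np.exp(0.0 * lte + 0.03 * (age - 65))   # true HR for LTE ~ 1.0
time = rng.exponential(1.0 / hazard)
event = (time < 60).astype(int)                         # censoring at 60 months
df = pd.DataFrame({"time": np.minimum(time, 60), "event": event,
                   "LTE": lte, "age": age})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```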
Creep Deformation by Dislocation Movement in Waspaloy
Whittaker, Mark; Harrison, Will; Deen, Christopher; Rae, Cathie; Williams, Steve
2017-01-01
Creep tests of the polycrystalline nickel alloy Waspaloy have been conducted at Swansea University, for varying stress conditions at 700 °C. Investigation using Transmission Electron Microscopy at Cambridge University has examined the dislocation networks formed under these conditions, with particular attention paid to comparing tests performed above and below the yield stress. This paper highlights how the dislocation structures vary throughout creep and proposes a dislocation mechanism theory for creep in Waspaloy. Activation energies are calculated using approaches developed for the recently formulated Wilshire equations, and are found to differ above and below the yield stress. Low activation energies are found to be related to dislocation interaction with γ′ precipitates below the yield stress. However, significantly increased dislocation densities at stresses above yield cause an increase in the activation energy values as forest hardening becomes the primary mechanism controlling dislocation movement. It is proposed that the activation energy change is related to the stress increment provided by work hardening, as can be observed from Ti, Ni, and steel results. PMID:28772421
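As a simplified illustration of how such activation energies are extracted, the sketch below fits an Arrhenius relation, ln(rate) = ln(A) - Q/(RT), to minimum creep rates at fixed (normalized) stress; this is the elementary step underlying the Wilshire-equation analysis, not the full methodology, and the rates and temperatures are invented for illustration rather than taken from the Swansea dataset.

```python
# Hedged sketch: apparent creep activation energy Q from an Arrhenius fit.
# The rates and temperatures below are illustrative placeholders.
import numpy as np

R = 8.314                                      # gas constant, J/(mol K)
T = np.array([973.0, 998.0, 1023.0])           # test temperatures, K
rate = np.array([1.2e-9, 5.0e-9, 1.8e-8])      # minimum creep rates, 1/s

# Linear fit of ln(rate) against 1/T; slope = -Q/R.
slope, intercept = np.polyfit(1.0 / T, np.log(rate), 1)
Q = -slope * R
print(f"apparent activation energy: {Q / 1e3:.0f} kJ/mol")
```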
Xie, Yi; Mun, Sungyong; Kim, Jinhyun; Wang, Nien-Hwa Linda
2002-01-01
A tandem simulated moving bed (SMB) process for insulin purification has been proposed and validated experimentally. The mixture to be separated consists of insulin, high molecular weight proteins, and zinc chloride. A systematic approach based on the standing wave design, rate model simulations, and experiments was used to develop this multicomponent separation process. The standing wave design was applied to specify the SMB operating conditions of a lab-scale unit with 10 columns. The design was validated with rate model simulations prior to experiments. The experimental results show 99.9% purity and 99% yield, which closely agree with the model predictions and the standing wave design targets. The agreement proves that the standing wave design can ensure high purity and high yield for the tandem SMB process. Compared to a conventional batch SEC process, the tandem SMB has 10% higher yield, 400% higher throughput, and 72% lower eluant consumption. In contrast, a design that ignores the effects of mass transfer and nonideal flow cannot meet the purity requirement and gives less than 96% yield.
NASA Astrophysics Data System (ADS)
Porto, Paolo; Walling, Des E.; Cogliandro, Vanessa; Callegari, Giovanni
2016-07-01
Use of the fallout radionuclides cesium-137 and excess lead-210 offers important advantages over traditional methods of quantifying erosion and soil redistribution rates. However, both radionuclides provide information on longer-term (i.e., 50-100 years) average rates of soil redistribution. Beryllium-7, with its half-life of 53 days, can provide a basis for documenting short-term soil redistribution and it has been successfully employed in several studies. However, the approach commonly used introduces several important constraints related to the timing and duration of the study period. A new approach proposed by the authors that overcomes these constraints has been successfully validated using an erosion plot experiment undertaken in southern Italy. Here, a further validation exercise undertaken in a small (1.38 ha) catchment is reported. The catchment was instrumented to measure event sediment yields and beryllium-7 measurements were employed to document the net soil loss for a series of 13 events that occurred between November 2013 and June 2015. In the absence of significant sediment storage within the catchment's ephemeral channel system and of a significant contribution from channel erosion to the measured sediment yield, the estimates of net soil loss for the individual events could be directly compared with the measured sediment yields to validate the former. The close agreement of the two sets of values is seen as successfully validating the use of beryllium-7 measurements and the new approach to obtain estimates of net soil loss for a sequence of individual events occurring over an extended period at the scale of a small catchment.
NASA Astrophysics Data System (ADS)
Min, Kyoungwon; Farah, Annette E.; Lee, Seung Ryeol; Lee, Jong Ik
2017-01-01
Shock conditions of Martian meteorites provide crucial information about ejection dynamics and original features of the Martian rocks. To better constrain equilibrium shock temperatures (Tequi-shock) of Martian meteorites, we investigated (U-Th)/He systematics of the moderately shocked Zagami and intensively shocked ALHA77005 Martian meteorites. Multiple phosphate aggregates from Zagami and ALHA77005 yielded overall (U-Th)/He ages of 92.2 ± 4.4 Ma (2σ) and 8.4 ± 1.2 Ma, respectively. These ages correspond to fractional losses (fHe) of 0.49 ± 0.03 (Zagami) and 0.97 ± 0.01 (ALHA77005), assuming that the ejection-related shock event at ∼3 Ma is solely responsible for diffusive helium loss since crystallization. For He diffusion modeling, the diffusion domain radius is estimated based on detailed examination of fracture patterns in phosphates using a scanning electron microscope. For Zagami, the diffusion domain radius is estimated to be ∼2-9 μm, which is generally consistent with calculations from isothermal heating experiments (1-4 μm). For ALHA77005, a diffusion domain radius of ∼4-20 μm is estimated. Using the newly constrained (U-Th)/He data, diffusion domain radii, and other previously estimated parameters, the conductive cooling models yield Tequi-shock estimates of 360-410 °C and 460-560 °C for Zagami and ALHA77005, respectively. According to the sensitivity test, the estimated Tequi-shock values are relatively robust to input parameters. The Tequi-shock estimates for Zagami are more robust than those for ALHA77005, primarily because Zagami yielded an intermediate fHe value (0.49) compared to ALHA77005 (0.97). For the less intensively shocked Zagami, the He diffusion-based Tequi-shock estimates (this study) are significantly higher than expected from previously reported Tpost-shock values. For the intensively shocked ALHA77005, the two independent approaches yielded generally consistent results. Using two other examples of previously studied Martian meteorites (ALHA84001 and Los Angeles), we compared Tequi-shock and Tpost-shock estimates. For intensively shocked meteorites (ALHA77005, Los Angeles), the He diffusion-based approach yields Tequi-shock values that are slightly higher than, or consistent with, estimates from Tpost-shock, and the discrepancy between the two methods increases as the intensity of shock increases. The reason for the discrepancy between the two methods, particularly for the less intensively shocked meteorites (Zagami, ALHA84001), remains to be resolved, but we prefer the He diffusion-based approach because its Tequi-shock estimates are relatively robust to input parameters.
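The fractional-loss quantity used above follows the standard series solution for diffusion out of a sphere (Crank). The following sketch evaluates it for an assumed heating episode; the domain radius, Arrhenius parameters, temperature, and duration are illustrative assumptions, not values from this study.

```python
# Minimal sketch: fractional He loss from a spherical diffusion domain,
# F = 1 - (6/pi^2) * sum_n (1/n^2) exp(-n^2 pi^2 D t / a^2)  (Crank's solution).
# Radius, D0, Ea, and the heating episode below are illustrative assumptions.
import numpy as np

def fractional_loss(D, t, a, n_terms=200):
    n = np.arange(1, n_terms + 1)
    x = np.pi ** 2 * D * t / a ** 2
    return 1.0 - (6.0 / np.pi ** 2) * np.sum(np.exp(-n ** 2 * x) / n ** 2)

R = 8.314
D0, Ea = 1e-6, 120e3            # m^2/s, J/mol (assumed Arrhenius parameters)
a = 5e-6                        # 5 micron domain radius (cf. ~2-9 um for Zagami)
T, t = 300.0 + 273.15, 3600.0   # one hour at 300 degC, purely for illustration
D = D0 * np.exp(-Ea / (R * T))  # diffusivity at temperature T
print(f"fractional He loss: {fractional_loss(D, t, a):.2f}")
```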
The effect of soil moisture anomalies on maize yield in Germany
NASA Astrophysics Data System (ADS)
Peichl, Michael; Thober, Stephan; Meyer, Volker; Samaniego, Luis
2018-03-01
Crop models routinely use meteorological variations to estimate crop yield. Soil moisture, however, is the primary source of water for plant growth. The aim of this study is to investigate the intraseasonal predictability of soil moisture for estimating silage maize yield in Germany. We also evaluate how approaches considering soil moisture perform compared to those using only meteorological variables. Silage maize is one of the most widely cultivated crops in Germany because it serves as a main biomass supplier for energy production in the course of the German Energiewende (energy transition). Reduced-form fixed effects panel models are employed to investigate the relationships in this study. These models are estimated for each month of the growing season to gain insights into the time-varying effects of soil moisture and meteorological variables. Temperature, precipitation, and potential evapotranspiration are used as meteorological variables. Soil moisture is transformed into anomalies, which provide a measure of the interannual variation within each month. The main result of this study is that soil moisture anomalies have predictive skill that varies in magnitude and direction depending on the month. For instance, dry soil moisture anomalies in August and September reduce silage maize yield by more than 10%, other factors being equal. In contrast, dry anomalies in May increase crop yield by up to 7%, because absolute soil water content is seasonally higher in May than in August. With respect to the meteorological terms, models using both temperature and precipitation have higher predictability than models using only one meteorological variable. Models employing only temperature exhibit inflated effects for that variable.
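A minimal sketch of a reduced-form fixed-effects panel regression of this kind is shown below, assuming pandas and statsmodels; the file name county_maize_panel.csv and all column names are hypothetical placeholders, not the study's actual data or specification.

```python
# Hedged sketch: fixed-effects panel regression of log yield on monthly
# soil-moisture anomalies plus weather terms. File and column names are
# hypothetical; the study estimates separate month-wise models.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("county_maize_panel.csv")  # hypothetical county x year panel

# County fixed effects absorb time-invariant local conditions; year effects
# absorb common shocks. Standard errors clustered by county.
model = smf.ols(
    "log_yield ~ sm_anomaly_may + sm_anomaly_aug + temperature"
    " + precipitation + C(county) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["county"]})
print(model.params[["sm_anomaly_may", "sm_anomaly_aug"]])
```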
Lack of symmetry in employees' perceptions of the psychological contract.
Jepsen, Denise M; Rodwell, John J
2012-06-01
Despite debate on the nature of employees' perceptions of their psychological contract, little research has compared employees' and employers' sides of the psychological contract. All 80 items from both scales in the Psychological Contract Inventory were used in a survey of 436 currently working, non-student respondents. Structural equation modeling yielded nonsymmetrical perspectives on promises and obligations, highlighting the validity of approaching the issues via individual perceptions.
Prediction of antiepileptic drug treatment outcomes using machine learning
NASA Astrophysics Data System (ADS)
Colic, Sinisa; Wither, Robert G.; Lang, Min; Zhang, Liang; Eubanks, James H.; Bardakjian, Berj L.
2017-02-01
Objective. Antiepileptic drug (AED) treatments produce inconsistent outcomes, often necessitating that patients go through several drug trials until a successful treatment can be found. This study proposes the use of machine learning techniques to predict epilepsy treatment outcomes of commonly used AEDs. Approach. Machine learning algorithms were trained and evaluated using features obtained from intracranial electroencephalogram (iEEG) recordings of the epileptiform discharges observed in a Mecp2-deficient mouse model of Rett syndrome. Previous work has linked the presence of cross-frequency coupling (ICFC) of the delta (2-5 Hz) rhythm with the fast ripple (400-600 Hz) rhythm in epileptiform discharges. Using the ICFC to label post-treatment outcomes, we compared support vector machine (SVM) and random forest (RF) machine learning classifiers for providing likelihood scores of successful treatment outcomes. Main results. (a) There was heterogeneity in AED treatment outcomes, (b) machine learning techniques could be used to rank the efficacy of AEDs by estimating likelihood scores for successful treatment outcome, (c) ICFC features yielded the most effective a priori identification of appropriate AED treatment, and (d) both classifiers performed comparably. Significance. Machine learning approaches yielded predictions of successful drug treatment outcomes which could in turn reduce the burdens of drug trials and lead to substantial improvements in patient quality of life.
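The likelihood-score comparison described here maps naturally onto classifier probability outputs. Below is a minimal sketch with scikit-learn; the synthetic features merely stand in for the ICFC-derived measures and are not the study's data.

```python
# Minimal sketch: SVM vs. random forest likelihood scores for treatment
# outcome. Synthetic features stand in for iEEG cross-frequency-coupling
# (ICFC) measures; labels are simulated, not the study's data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 8))            # e.g. ICFC-derived features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(200) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for clf in (SVC(probability=True), RandomForestClassifier(random_state=0)):
    clf.fit(X_tr, y_tr)
    scores = clf.predict_proba(X_te)[:, 1]   # likelihood of successful outcome
    print(type(clf).__name__, scores[:5].round(2))
```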
Lionel, Anath C; Costain, Gregory; Monfared, Nasim; Walker, Susan; Reuter, Miriam S; Hosseini, S Mohsen; Thiruvahindrapuram, Bhooma; Merico, Daniele; Jobling, Rebekah; Nalpathamkalam, Thomas; Pellecchia, Giovanna; Sung, Wilson W L; Wang, Zhuozhi; Bikangaga, Peter; Boelman, Cyrus; Carter, Melissa T; Cordeiro, Dawn; Cytrynbaum, Cheryl; Dell, Sharon D; Dhir, Priya; Dowling, James J; Heon, Elise; Hewson, Stacy; Hiraki, Linda; Inbar-Feigenberg, Michal; Klatt, Regan; Kronick, Jonathan; Laxer, Ronald M; Licht, Christoph; MacDonald, Heather; Mercimek-Andrews, Saadet; Mendoza-Londono, Roberto; Piscione, Tino; Schneider, Rayfel; Schulze, Andreas; Silverman, Earl; Siriwardena, Komudi; Snead, O Carter; Sondheimer, Neal; Sutherland, Joanne; Vincent, Ajoy; Wasserman, Jonathan D; Weksberg, Rosanna; Shuman, Cheryl; Carew, Chris; Szego, Michael J; Hayeems, Robin Z; Basran, Raveen; Stavropoulos, Dimitri J; Ray, Peter N; Bowdin, Sarah; Meyn, M Stephen; Cohn, Ronald D; Scherer, Stephen W; Marshall, Christian R
2018-01-01
Purpose Genetic testing is an integral diagnostic component of pediatric medicine. Standard of care is often a time-consuming stepwise approach involving chromosomal microarray analysis and targeted gene sequencing panels, which can be costly and inconclusive. Whole-genome sequencing (WGS) provides a comprehensive testing platform that has the potential to streamline genetic assessments, but there are limited comparative data to guide its clinical use. Methods We prospectively recruited 103 patients from pediatric non-genetic subspecialty clinics, each with a clinical phenotype suggestive of an underlying genetic disorder, and compared the diagnostic yield and coverage of WGS with those of conventional genetic testing. Results WGS identified diagnostic variants in 41% of individuals, representing a significant increase over conventional testing results (24%; P = 0.01). Genes clinically sequenced in the cohort (n = 1,226) were well covered by WGS, with exonic coverage of 40× ± 8× (mean ± SD). All the molecular diagnoses made by conventional methods were captured by WGS. The 18 new diagnoses made with WGS included structural and non-exonic sequence variants not detectable with whole-exome sequencing, and confirmed recent disease associations with the genes PIGG, RNU4ATAC, TRIO, and UNC13A. Conclusion WGS as a primary clinical test provided a higher diagnostic yield than conventional genetic testing in a clinically heterogeneous cohort. PMID:28771251
Potential Impacts from Using Photoactive Roads as AN Air Quality Mitigation Strategy
NASA Astrophysics Data System (ADS)
Toro, C.; Jobson, B. T.; Shen, S.; Chung, S. H.; Haselbach, L.
2013-12-01
Mobile sources are major contributors to photochemical air pollution in urban areas. It has been proposed that the use of TiO2-coated roadways ('photoactive roads') could be an effective approach to reduce mobile source emissions by oxidizing NOx and VOC emissions at the roadway surface. However, studies have shown that formation of HONO and aldehydes can occur from some TiO2-treated surfaces during the photocatalytic oxidation of NOx and VOC, respectively. By changing the NOx-to-VOC ratio and generating photolabile HOx radical precursors, photoactive roads may enhance ozone formation rates in urban areas. In this work we present results that quantify NOx and VOC loss rates onto TiO2-treated asphalt and concrete samples, as well as the HONO and aldehyde yields that result from the photocatalytic process. The treatment used a commercially available product. These objectives are relevant because pollutant loss rates and byproduct yields have not been determined for asphalt, and in the US more than 90% of the roadway surface is made of this material. Surface reaction probabilities (γ) and byproduct yields were determined using a CSTR photochemical chamber under varying conditions of water vapor and UV-A light intensity. Our results indicate that asphalt surfaces have a significantly higher molar yield of HONO compared to concrete surfaces with similar TiO2 loading. Concrete surfaces have reaction probabilities with NO one order of magnitude higher than asphalt samples. Fresh asphalt samples showed negligible photocatalytic activity, presumably due to absorption of TiO2 into the bitumen substrate. Laboratory-prepared asphalt samples with a higher degree of exposed aggregate showed increased HONO molar yields compared to real-road asphalt samples, whose HONO molar yield was ~1%. Preliminary results for aldehyde formation showed similar molar yields between aged asphalt and concrete, even though the aged asphalt samples had twice the TiO2 loading of the concrete samples.
The effects of wildfire on the sediment yield of a coastal California watershed
Warrick, J.A.; Hatten, J.A.; Pasternack, G.B.; Gray, A.B.; Goni, M.A.; Wheatcroft, R.A.
2012-01-01
The occurrence of two wildfires separated by 31 yr in the chaparral-dominated Arroyo Seco watershed (293 km2) of California provides a unique opportunity to evaluate the effects of wildfire on suspended-sediment yield. Here, we compile discharge and suspended-sediment sampling data from before and after the fires and show that the postfire responses differed markedly. The 1977 Marble Cone wildfire was followed by an exceptionally wet winter, which resulted in concentrations and fluxes of both fine and coarse suspended sediment that were ~35 times greater than average (the sediment yield during the 1978 water year was 11,000 t/km2/yr). We suggest that the combined 1977–1978 fire and flood had a recurrence interval of greater than 1000 yr. In contrast, the 2008 Basin Complex wildfire was followed by a drier than normal year, and although suspended-sediment fluxes and concentrations were significantly elevated compared to those expected for unburned conditions, the sediment yield during the 2009 water year was less than 1% of the post–Marble Cone wildfire yield. After the first postfire winters, sediment concentrations and yield decreased with time toward prefire relationships and continued to show significant rainfall dependence. We hypothesize that the differences in sediment yield were related to precipitation-enhanced hillslope erosion processes, such as rilling and mass movements. The millennial-scale effects of wildfire on sediment yield were explored further using Monte Carlo simulations, and these analyses suggest that infrequent wildfires followed by floods increase long-term suspended-sediment fluxes markedly. Thus, we suggest that the current approach of estimating sediment yield from sediment rating curves and discharge data, without including periodic perturbations from wildfires, may grossly underestimate actual sediment yields.
Singh, Kunwar P; Singh, Arun K; Gupta, Shikha; Rai, Premanjali
2012-07-01
The present study aims to investigate the individual and combined effects of temperature, pH, zero-valent bimetallic nanoparticles (ZVBMNPs) dose, and chloramphenicol (CP) concentration on the reductive degradation of CP using ZVBMNPs in aqueous medium. Iron-silver ZVBMNPs were synthesized. Batch experimental data were generated using a four-factor statistical experimental design. CP reduction by ZVBMNPs was optimized using the response surface modeling (RSM) and artificial neural network-genetic algorithm (ANN-GA) approaches. The RSM and ANN methodologies were also compared for their predictive and generalization abilities using the same training and validation data set. Reductive by-products of CP were identified using liquid chromatography-mass spectrometry technique. The optimized process variables (RSM and ANN-GA approaches) yielded CP reduction capacity of 57.37 and 57.10 mg g(-1), respectively, as compared to the experimental value of 54.0 mg g(-1) with un-optimized variables. The ANN-GA and RSM methodologies yielded comparable results and helped to achieve a higher reduction (>6%) of CP by the ZVBMNPs as compared to the experimental value. The root mean squared error, relative standard error of prediction and correlation coefficient between the measured and model-predicted values of response variable were 1.34, 3.79, and 0.964 for RSM and 0.03, 0.07, and 0.999 for ANN models for the training and 1.39, 3.47, and 0.996 for RSM and 1.25, 3.11, and 0.990 for ANN models for the validation set. Predictive and generalization abilities of both the RSM and ANN models were comparable. The synthesized ZVBMNPs may be used for an efficient reductive removal of CP from the water.
Li, Muyang; Williams, Daniel L.; Heckwolf, Marlies; ...
2016-10-04
In this paper, we explore the ability of several characterization approaches for phenotyping to extract information about plant cell wall properties in diverse maize genotypes with the goal of identifying approaches that could be used to predict the plant's response to deconstruction in a biomass-to-biofuel process. Specifically, a maize diversity panel was subjected to two high-throughput biomass characterization approaches, pyrolysis molecular beam mass spectrometry (py-MBMS) and near-infrared (NIR) spectroscopy, and chemometric models to predict a number of plant cell wall properties as well as enzymatic hydrolysis yields of glucose following either no pretreatment or mild alkaline pretreatment. These were compared to multiple linear regression (MLR) models developed from quantified properties. We were able to demonstrate that direct correlations to specific mass spectrometry ions from pyrolysis as well as characteristic regions of the second derivative of the NIR spectrum were comparable in their predictive capability to partial least squares (PLS) models for p-coumarate content, while the direct correlation to the spectral data was superior to the PLS for Klason lignin content and guaiacyl monomer release by thioacidolysis as assessed by cross-validation. The PLS models for prediction of hydrolysis yields using either py-MBMS or NIR spectra were superior to MLR models based on quantified properties for unpretreated biomass. However, the PLS models using the two high-throughput characterization approaches could not predict hydrolysis following alkaline pretreatment, while MLR models based on quantified properties could. This is likely a consequence of the quantified properties including some assessments of pretreated biomass, while the py-MBMS and NIR only utilized untreated biomass.
Kaur, Harparkash; Allan, Elizabeth Louise; Mamadu, Ibrahim; Hall, Zoe; Ibe, Ogochukwu; El Sherbiny, Mohamed; van Wyk, Albert; Yeung, Shunmay; Swamidoss, Isabel; Green, Michael D; Dwivedi, Prabha; Culzoni, Maria Julia; Clarke, Siân; Schellenberg, David; Fernández, Facundo M; Onwujekwe, Obinna
2015-01-01
Artemisinin-based combination therapies are recommended by the World Health Organisation (WHO) as first-line treatment for Plasmodium falciparum malaria, yet medication must be of good quality for efficacious treatment. A recent meta-analysis reported that 35% (796/2,296) of antimalarial drug samples from 21 Sub-Saharan African countries, purchased from outlets predominantly using convenience sampling, failed chemical content analysis. We used three sampling strategies to purchase artemisinin-containing antimalarials (ACAs) in Enugu metropolis, Nigeria, and compared the resulting quality estimates. ACAs were purchased using three sampling approaches (convenience, mystery clients, and overt) within a defined area and sampling frame in Enugu metropolis. The active pharmaceutical ingredients (APIs) were assessed using high-performance liquid chromatography and confirmed by mass spectrometry at three independent laboratories. Results were expressed as a percentage of the API stated on the packaging and used to categorise each sample as acceptable quality, substandard, degraded, or falsified. Content analysis of 3024 samples purchased from 421 outlets using the convenience (n=200), mystery (n=1,919), and overt (n=905) approaches showed that, overall, 90.8% of ACAs were of acceptable quality, 6.8% substandard, 1.3% degraded, and 1.2% falsified. Convenience sampling yielded a significantly higher prevalence of poor-quality ACAs than the mystery and overt sampling strategies, which yielded results comparable to each other. Artesunate (n=135; 4 falsified) and dihydroartemisinin (n=14) monotherapy tablets, not recommended by WHO, were also identified. Randomised sampling identified fewer falsified ACAs than previously reported by convenience approaches. Our findings emphasise the need for specific consideration to be given to the sampling frame and sampling approach if representative information on drug quality is to be obtained.
Increasing crop diversity mitigates weather variations and improves yield stability.
Gaudin, Amélie C M; Tolhurst, Tor N; Ker, Alan P; Janovicek, Ken; Tortora, Cristina; Martin, Ralph C; Deen, William
2015-01-01
Cropping sequence diversification provides a systems approach to reduce yield variations and improve resilience to multiple environmental stresses. Yield advantages of more diverse crop rotations and their synergistic effects with reduced tillage are well documented, but few studies have quantified the impact of these management practices on yields and their stability when soil moisture is limiting or in excess. Using yield and weather data obtained from a 31-year long-term rotation and tillage trial in Ontario, we tested whether crop rotation diversity is associated with greater yield stability when abnormal weather conditions occur. We used parametric and non-parametric approaches to quantify the impact of rotation diversity (monocrop, 2-crops, 3-crops without or with one or two legume cover crops) and tillage (conventional or reduced tillage) on yield probabilities and the benefits of crop diversity under different soil moisture and temperature scenarios. Although the magnitude of rotation benefits varied with crops, weather patterns, and tillage, yield stability significantly increased when corn and soybean were integrated into more diverse rotations. Introducing small grains into a short corn-soybean rotation was enough to provide substantial benefits to long-term soybean yields and their stability, while the effects on corn were mostly associated with the temporal niche provided by small grains for underseeded red clover or alfalfa. Crop diversification strategies increased the probability of harnessing favorable growing conditions while decreasing the risk of crop failure. In hot and dry years, diversification of corn-soybean rotations and reduced tillage increased yield by 7% and 22% for corn and soybean, respectively. Given the additional advantages associated with cropping system diversification, such a strategy provides a more comprehensive approach to lowering yield variability and improving the resilience of cropping systems to multiple environmental stresses. This could help to sustain future yield levels in challenging production environments.
Addressing potential prior-data conflict when using informative priors in proof-of-concept studies.
Mutsvari, Timothy; Tytgat, Dominique; Walley, Rosalind
2016-01-01
Bayesian methods are increasingly used in proof-of-concept studies. An important benefit of these methods is the potential to use informative priors, thereby reducing sample size. This is particularly relevant for treatment arms where there is a substantial amount of historical information, such as placebo and active comparators. One issue with using an informative prior is the possibility of a mismatch between the informative prior and the observed data, referred to as prior-data conflict. We focus on two methods for dealing with this: a testing approach and a mixture prior approach. The testing approach assesses prior-data conflict by comparing the observed data to the prior predictive distribution and resorting to a non-informative prior if prior-data conflict is declared. The mixture prior approach uses a prior with a precise and a diffuse component. We assess these approaches for the normal case via simulation and show they have some attractive features as compared with the standard one-component informative prior. For example, when the discrepancy between the prior and the data is sufficiently marked, and intuitively one feels less certain about the results, both the testing and mixture approaches typically yield wider posterior-credible intervals than when there is no discrepancy. In contrast, when there is no discrepancy, the results of these approaches are typically similar to the standard approach. Whilst, for any specific study, the operating characteristics of any selected approach should be assessed and agreed at the design stage, we believe these two approaches are each worthy of consideration. Copyright © 2015 John Wiley & Sons, Ltd.
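For the normal case with known sampling variance, the mixture-prior update has a closed form: the posterior is again a mixture, with component weights rescaled by each component's prior-predictive density at the observed mean. The sketch below illustrates this under assumed numbers (weights, prior means, and variances are invented for illustration).

```python
# Hedged sketch: two-component mixture prior (precise + diffuse) for a
# normal mean with known sampling variance. All numbers are illustrative.
import numpy as np
from scipy.stats import norm

xbar, n, sigma = 1.8, 30, 2.0          # observed mean, sample size, known SD
se2 = sigma**2 / n                     # sampling variance of the mean
comps = [(0.8, 0.0, 0.25**2),          # (weight, prior mean, prior var): informative
         (0.2, 0.0, 10.0**2)]          # diffuse "robustifying" component

post = []
for w, m, v in comps:
    pred = norm.pdf(xbar, m, np.sqrt(v + se2))   # prior-predictive density
    pv = 1.0 / (1.0 / v + 1.0 / se2)             # conjugate normal update
    pm = pv * (m / v + xbar / se2)
    post.append((w * pred, pm, pv))

tot = sum(p[0] for p in post)
for wz, pm, pv in post:
    print(f"weight {wz / tot:.2f}, posterior mean {pm:.2f}, sd {np.sqrt(pv):.2f}")
```

With data far from the informative component, nearly all posterior weight shifts to the diffuse component, which is exactly the robustness-to-conflict behavior the abstract describes.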
Mechanical properties of hydroxyapatite single crystals from nanoindentation data
Zamiri, A.; De, S.
2011-01-01
In this paper we compute the elasto-plastic properties of hydroxyapatite single crystals from nanoindentation data using a two-step algorithm. In the first step the yield stress is obtained using hardness and Young's modulus data, followed by the computation of the flow parameters. The computational approach is first validated against data from the existing literature. It is observed that hydroxyapatite single crystals exhibit an anisotropic mechanical response, with a lower yield stress along the [1010] crystallographic direction compared to the [0001] direction. Both the work hardening rate and the work hardening exponent are found to be higher for indentation along the [0001] crystallographic direction. The stress-strain curves extracted here could be used for developing constitutive models for hydroxyapatite single crystals. PMID:21262492
Estimation of cold stress effect on dairy cows
NASA Astrophysics Data System (ADS)
Brouček, J.; Letkovičová, M.; Kovalčuj, K.
1991-03-01
Twelve crossbred heifers (Slovak Spotted x Holstein-Friesian) were housed in an open, uninsulated barn with straw bedding and a concrete-floored yard. Minimum temperatures inside the barn were as low as -19°C. The average milk yield decreased as the temperatures approached these minima. Compared with the temperate conditions, the feed intake and blood levels of glucose and free fatty acids increased. The level of sodium declined significantly during the second cold period. Correlations and regressions between milk yield and biochemical parameters were calculated, and the results indicate that the concentrations of free fatty acids, cholesterol, and triiodothyronine and the haematocrit values may serve to predict milk production during periods of cold stress, or in lactations of 305 days.
Multiple ionization of neon by soft x-rays at ultrahigh intensity
NASA Astrophysics Data System (ADS)
Guichard, R.; Richter, M.; Rost, J.-M.; Saalmann, U.; Sorokin, A. A.; Tiedtke, K.
2013-08-01
At the free-electron laser FLASH, multiple ionization of neon atoms was quantitatively investigated at photon energies of 93.0 and 90.5 eV. For ion charge states up to 6+, we compare the respective absolute photoionization yields with results from a minimal model and an elaborate description including standard sequential and direct photoionization channels. Both approaches are based on rate equations and take into account a Gaussian spatial intensity distribution of the laser beam. From the comparison we conclude that photoionization up to a charge of 5+ can be described by the minimal model which we interpret as sequential photoionization assisted by electron shake-up processes. For higher charges, the experimental ionization yields systematically exceed the elaborate rate-based prediction.
Effect of tow alignment on the mechanical performance of 3D woven textile composites
NASA Technical Reports Server (NTRS)
Norman, Timothy L.; Allison, Patti; Baldwin, Jack W.; Gracias, Brian K.; Seesdorf, Dave
1993-01-01
Three-dimensional (3D) woven preforms are currently being considered for use as primary structural components. The lack of technology to properly manufacture these materials, to characterize and predict their mechanical properties, and to predict the damage mechanisms leading to failure poses problems for designers of textile composite materials. Two material systems with identical specifications but different manufacturing approaches are investigated. One manufacturing approach resulted in an irregular (nonuniform) preform geometry. The other approach yielded the expected (uniform) preform geometry. The objectives are to compare the mechanical properties of the uniform and nonuniform angle interlock 3D weave constructions. The effect of adding layers of laminated tape to the outer surfaces of the textile preform is also examined. Damage mechanisms are investigated and test methods are evaluated.
Ashok, Vishal; Ranganathan, Ramya; Chander, Smitha; Damodar, Sharat; Bhat, Sunil; KS, Nataraj; A, Satish Kumar; Jadav, Sachin Suresh; Rajashekaraiah, Mahesh; TS, Sundareshan
2017-01-01
Objectives: Genetic markers are crucial for the diagnostic and prognostic investigation of hematological malignancies (HM). Conventional cytogenetic study (CCS) has been the gold standard for more than five decades. However, FISH (fluorescence in situ hybridization) testing has become a popular modality owing to its targeted approach and its ability to detect abnormalities in non-mitotic cells. Here we aimed to compare the diagnostic yield of a FISH panel against CCS in HMs. Methods: Samples of bone marrow and peripheral blood from a total of 201 HMs were tested for specific gene rearrangements using multi-target FISH, and the results were compared with those from CCS. Results: FISH exhibited a greater diagnostic yield, with a positive result in 39.8% of cases, compared to 17.9% of cases detected by CCS. Cases of chronic lymphocytic leukaemia (CLL) benefited the most from FISH testing, which identified chromosomal aberrations beyond the capacity of CCS. FISH was least beneficial in myelodysplastic syndrome (MDS), where it exhibited the highest concordance with CCS. Acute lymphocytic leukaemia (ALL) demonstrated greater benefit with CCS. In addition, we found the following abnormalities to be most prevalent in HMs by FISH panel testing: RUNX1 (21q22) amplification in ALL, deletion of D13S319/LAMP1 (13q14) in CLL, CKS1B (1q21) amplification in multiple myeloma, and deletion of EGR1/RPS14 (5q31/5q32) in MDS, consistent with the literature. Conclusions: FISH was found to be advantageous in only a subset of HMs and cannot completely replace CCS. Utilization of the two modalities in conjunction or independently should depend on the indicated HM for an optimal approach to detecting chromosomal aberrations. PMID:29286619
NASA Astrophysics Data System (ADS)
Cerovski-Darriau, C.; Stock, J. D.; Winans, W. R.
2016-12-01
Episodic storm runoff in West Maui (Hawai'i) brings plumes of terrestrially-sourced fine sediment to the nearshore ocean environment, degrading coral reef ecosystems. The sediment pollution sources were largely unknown, though suspected to stem from modern human disturbance of the landscape, and initially assumed to be the visibly obvious exposed soil on agricultural fields and unimproved roads. To determine the sediment sources and estimate a sediment budget for the West Maui watersheds, we mapped the geomorphic processes in the field and from DEMs and orthoimagery, monitored erosion rates in the field, and modeled the sediment flux using the mapped processes and corresponding rates. We found the primary source of fine sands, silts, and clays to be previously unidentified fill terraces along the stream bed. These terraces, formed during legacy agricultural activity, are the banks along 40-70% of the streams where the channels intersect human-modified landscapes. Monitoring over the last year shows that a few storms erode the fill terraces by 10-20 mm annually, contributing up to hundreds of tonnes of sediment per catchment. Compared to the average long-term, geologic erosion rate of 0.03 mm/yr, these fill terraces alone increase the suspended sediment flux to the coral reefs by 50-90%. Stakeholders can use our resulting geomorphic process map and sediment budget to inform the location and type of mitigation effort needed to limit terrestrial sediment pollution. We compare our mapping, monitoring, and modeling (M3) approach to NOAA's OpenNSPECT model. OpenNSPECT uses empirical hydrologic and soil erosion models paired with land cover data to compare the spatially distributed sediment yield from different land-use scenarios. We determine the relative effectiveness of calculating a baseline watershed sediment yield with each approach, and the utility of calibrating OpenNSPECT with M3 results to better forecast future sediment yields under land-use or climate change scenarios.
A Remote Sensing-Derived Corn Yield Assessment Model
NASA Astrophysics Data System (ADS)
Shrestha, Ranjay Man
Agricultural studies and food security have become critical research topics due to continuous growth in the human population and simultaneous shrinkage of agricultural land. In spite of modern technological advancements to improve agricultural productivity, more studies on crop yield assessment and food productivity are still necessary to fulfill constantly increasing food demands. Besides human activities, natural disasters such as flood and drought, along with rapid climate change, also inflict adverse effects on food productivity. Understanding the impact of these disasters on crop yield and making early impact estimates could help in planning for any national or international food crisis. Similarly, the United States Department of Agriculture (USDA) Risk Management Agency (RMA) insurance management utilizes appropriately estimated crop yield and damage assessment information to sustain farmers' practice through timely and proper compensation. Through the County Agricultural Production Survey (CAPS), the USDA National Agricultural Statistical Service (NASS) uses traditional methods of field interviews and farmer-reported survey data to perform annual crop condition monitoring and production estimation at the regional and state levels. As these manual approaches to yield estimation are highly inefficient and produce very limited samples to represent the entire area, NASS requires supplemental spatial data that provide continuous and timely information on crop production and annual yield. Compared to traditional methods, remote sensing data and products offer wider spatial extent, more accurate location information, higher temporal resolution and data distribution, and lower data cost, thus providing a complementary option for estimating crop yield. Remote sensing derived vegetation indices such as the Normalized Difference Vegetation Index (NDVI) provide measurable statistics of potential crop growth based on spectral reflectance and can be further associated with actual yield. Utilizing satellite remote sensing products, such as daily NDVI derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) at 250 m pixel size, crop yield estimation can be performed at a very fine spatial resolution. Therefore, this study examined the potential of these daily NDVI products for agricultural studies and crop yield assessment. In this study, a regression-based approach was proposed to estimate annual corn yield through changes in the MODIS daily NDVI time series. The relationship between daily NDVI and corn yield was well defined and established, and as changes in corn phenology and yield were directly reflected by changes in NDVI within the growing season, these two entities were combined to develop a relational model. The model was trained using 15 years (2000-2014) of historical NDVI and county-level corn yield data for four major corn producing states: Kansas, Nebraska, Iowa, and Indiana, representing four climatic regions (South, West North Central, East North Central, and Central, respectively) within the U.S. Corn Belt area. The model's goodness of fit was high (R2 > 0.81). Similarly, using 2015 yield data for validation, an average accuracy of 92% demonstrated the model's performance in estimating corn yield at the county level. Besides providing county-level corn yield estimates, the derived model was also accurate enough to estimate yield at finer (field-level) spatial resolution.
The model's assessment accuracy was evaluated using randomly selected field-level corn yields within the study area for 2014, 2015, and 2016. A total of over 120 plot-level corn yield records were used for validation, and the overall average accuracy was 87%, which statistically justified the model's capability to estimate plot-level corn yield. Additionally, the proposed model was applied to impact estimation by examining the changes in corn yield due to flood events during the growing season. Using a 2011 Missouri River flood event as a case study, a field-level map of flood impact on corn yield throughout the flooded regions was produced, and an overall agreement of over 82.2% with the reference impact map was achieved. A future direction of this dissertation research is to examine other major crops outside the Corn Belt region of the U.S.
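The core of such a regression-based yield model is a simple mapping from a seasonal NDVI statistic to yield. The sketch below, using a within-season NDVI peak and ordinary least squares, is a minimal stand-in for the dissertation's relational model; all data are synthetic and the coefficients are invented for illustration.

```python
# Minimal sketch: regress county corn yield on season-peak NDVI.
# Synthetic placeholders for the MODIS/NASS training data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
peak_ndvi = rng.uniform(0.55, 0.90, size=(300, 1))   # season-peak NDVI per county-year
yield_t_ha = 6.0 + 9.0 * (peak_ndvi.ravel() - 0.55) + rng.normal(0, 0.4, 300)

model = LinearRegression().fit(peak_ndvi, yield_t_ha)
print("R2:", round(r2_score(yield_t_ha, model.predict(peak_ndvi)), 2))
```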
Chen, Ping; Du, Qing; Liu, Xiaoming; Zhou, Li; Hussain, Sajad; Lei, Lu; Song, Chun; Wang, Xiaochun; Liu, Weiguo; Yang, Feng; Shu, Kai; Liu, Jiang; Du, Junbo; Yang, Wenyu; Yong, Taiwen
2017-01-01
The blind pursuit of high yields via increased fertilizer inputs increases the environmental costs. Relay intercropping has advantages for yield, but a strategy for N management is urgently required to decrease N inputs without yield loss in maize-soybean relay intercropping systems (IMS). Experiments were conducted with three levels of N and three planting patterns, and dry matter accumulation, nitrogen uptake, nitrogen use efficiency (NUE), competition ratio (CR), system productivity index (SPI), land equivalent ratio (LER), and crop root distribution were investigated. Our results showed that the CR of soybean was greater than 1, and that the change in root distribution in space and time resulted in an interspecific facilitation in IMS. The maximum yield of maize under monoculture maize (MM) occurred with conventional nitrogen (CN), whereas under IMS, the maximum yield occurred with reduced nitrogen (RN). The yield of monoculture soybean (MS) and of soybean in IMS both reached a maximum under RN. The LER of IMS varied from 1.85 to 2.36, and the SPI peaked under RN. Additionally, the NUE of IMS increased by 103.7% under RN compared with that under CN. In conclusion, the separation of the root ecological niche contributed to a positive interspecific facilitation, which increased the land productivity. Thus, maize-soybean relay intercropping with reduced N input provides a very useful approach to increase land productivity and avert environmental pollution.
Joshi, Rohit; Sahoo, Khirod Kumar; Tripathi, Amit Kumar; Kumar, Ritesh; Gupta, Brijesh Kumar; Pareek, Ashwani; Singla-Pareek, Sneh Lata
2018-05-01
Cytokinins play a significant role in determining grain yield in plants. Cytokinin oxidases catalyse irreversible degradation of cytokinins and hence modulate cellular cytokinin levels. Here, we studied the role of an inflorescence meristem-specific rice cytokinin oxidase - OsCKX2 - in reducing yield penalty under salinity stress conditions. We utilized an RNAi-based approach to study the function of OsCKX2 in maintaining grain yield under salinity stress condition. Ultra-performance liquid chromatography-based estimation revealed a significant increase in cytokinins in the inflorescence meristem of OsCKX2-knockdown plants. To determine if there exists a correlation between OsCKX2 levels and yield under salinity stress condition, we assessed the growth, physiology and grain yield of OsCKX2-knockdown plants vis-à-vis the wild type. OsCKX2-knockdown plants showed better vegetative growth, higher relative water content and photosynthetic efficiency and reduced electrolyte leakage as compared with the wild type under salinity stress. Importantly, we found a negative correlation between OsCKX2 expression and plant productivity as evident by assessment of agronomical parameters such as panicle branching, filled grains per plant and harvest index both under control and salinity stress conditions. These results suggest that OsCKX2, via controlling cytokinin levels, regulates floral primordial activity modulating rice grain yield under normal as well as abiotic stress conditions. © 2017 John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Skakun, Sergii; Vermote, Eric; Roger, Jean-Claude; Franch, Belen
2017-01-01
Timely and accurate information on crop yield and production is critical to many applications within agriculture monitoring. Thanks to their coverage and temporal resolution, coarse spatial resolution satellite images have always been a source of valuable information for yield forecasting and assessment at national and regional scales. With the availability of free images acquired by the Landsat-8 and Sentinel-2 remote sensing satellites, it becomes possible to obtain an image every 3-5 days and, therefore, to develop next-generation agriculture products at higher spatial resolution (10-30 m). This paper explores the combined use of Landsat-8 and Sentinel-2A for winter crop mapping and winter wheat yield assessment at regional scale. For the former, we adapt a previously developed approach for the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument at 250 m resolution that allows automatic mapping of winter crops taking into account a priori knowledge of the crop calendar. For the latter, we use a generalized winter wheat yield forecasting model that is based on estimation of the peak Normalized Difference Vegetation Index (NDVI) from MODIS image time series, further downscaled to be applicable at 30 m resolution. We show that integration of Landsat-8 and Sentinel-2A improves both winter crop mapping and winter wheat yield assessment. In particular, the error of winter wheat yield estimates can be reduced by up to a factor of 1.8 compared to using a single satellite.
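Both the crop mapping and the yield model rest on NDVI, computed per pixel from red and near-infrared surface reflectance (Landsat-8 OLI: band 4 = red, band 5 = NIR; Sentinel-2 MSI: band 4 = red, band 8 = NIR). A minimal sketch follows; the reflectance arrays are synthetic and the small epsilon guarding the denominator is an implementation choice, not part of the paper.

```python
# Minimal sketch: NDVI = (NIR - red) / (NIR + red), per pixel.
import numpy as np

red = np.array([[0.08, 0.10], [0.12, 0.09]])   # red surface reflectance
nir = np.array([[0.45, 0.40], [0.30, 0.50]])   # near-infrared reflectance

ndvi = (nir - red) / (nir + red + 1e-9)        # epsilon avoids division by zero
print(ndvi.round(2))
```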
McGale, Erica; Diezel, Celia; Schuman, Meredith C; Baldwin, Ian T
2018-05-13
Plants are the primary producers in most terrestrial ecosystems and have complex defense systems to protect their produce. Defense-deficient, high-yielding agricultural monocultures attract abundant nonhuman consumers, but are alternatively defended through pesticide application and genetic engineering to produce insecticidal proteins such as Cry1Ac (Bacillus thuringiensis). These approaches alter the balance between yield protection and maximization but have been poorly contextualized to known yield-defense trade-offs in wild plants. The native plant Nicotiana attenuata was used to compare yield benefits of plants transformed to be defenseless to those with a full suite of naturally evolved defenses, or additionally transformed to ectopically produce Cry1Ac. An insecticide treatment allowed us to examine yield under different herbivore loads in N. attenuata's native habitat. Cry1Ac, herbivore damage, and growth parameters were monitored throughout the season. Biomass and reproductive correlates were measured at season end. Non-Cry1Ac-targeted herbivores dominated on noninsecticide-treated plants, and increased the yield drag of Cry1Ac-producing plants in comparison with endogenously defended or undefended plants. Insecticide-sprayed Cry1Ac-producing plants lagged less in stalk height, shoot biomass, and flower production. In direct comparison with the endogenous defenses of a native plant, Cry1Ac production did not provide yield benefits for plants under observed herbivore loads in a field study. © 2018 The Authors New Phytologist © 2018 New Phytologist Trust.
Simultaneous fitting of genomic-BLUP and Bayes-C components in a genomic prediction model.
Iheshiulor, Oscar O M; Woolliams, John A; Svendsen, Morten; Solberg, Trygve; Meuwissen, Theo H E
2017-08-24
The rapid adoption of genomic selection is due to two key factors: availability of both high-throughput dense genotyping and statistical methods to estimate and predict breeding values. The development of such methods is still ongoing and, so far, there is no consensus on the best approach. Currently, the linear and non-linear methods for genomic prediction (GP) are treated as distinct approaches. The aim of this study was to evaluate the implementation of an iterative method (called GBC) that incorporates aspects of both linear [genomic-best linear unbiased prediction (G-BLUP)] and non-linear (Bayes-C) methods for GP. The iterative nature of GBC makes it less computationally demanding, similar to other non-Markov chain Monte Carlo (MCMC) approaches. However, as a Bayesian method, GBC differs from both MCMC- and non-MCMC-based methods by combining some aspects of the G-BLUP and Bayes-C methods for GP. Its relative performance was compared to those of G-BLUP and Bayes-C. We used an imputed 50 K single-nucleotide polymorphism (SNP) dataset based on the Illumina Bovine50K BeadChip, which included 48,249 SNPs and 3244 records. Daughter yield deviations for somatic cell count, fat yield, milk yield, and protein yield were used as response variables. GBC was frequently (marginally) superior to G-BLUP and Bayes-C in terms of prediction accuracy and was significantly better than G-BLUP only for fat yield. On average across the four traits, GBC yielded a 0.009 and 0.006 increase in prediction accuracy over G-BLUP and Bayes-C, respectively. Computationally, GBC was much faster than Bayes-C and similar to G-BLUP. Our results show that incorporating some aspects of G-BLUP and Bayes-C in a single model can improve the accuracy of GP over the commonly used method, G-BLUP. Generally, GBC did not statistically perform better than G-BLUP and Bayes-C, probably due to the close relationships between reference and validation individuals. Nevertheless, it is a flexible tool, in the sense that it simultaneously incorporates some aspects of linear and non-linear models for GP, thereby exploiting family relationships while also accounting for linkage disequilibrium between SNPs and genes with large effects. The application of GBC in GP merits further exploration.
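To make the linear ingredient concrete: G-BLUP is equivalent to a ridge regression on standardized SNP genotypes (SNP-BLUP) under standard assumptions, shrinking all marker effects equally, whereas Bayes-C allows only a fraction of SNPs to carry nonzero effects. The sketch below shows the ridge side on synthetic genotypes; it is not the authors' GBC algorithm, just one of the two ingredients it combines, and the shrinkage parameter is an illustrative choice.

```python
# Hedged sketch: G-BLUP via its SNP-BLUP/ridge-regression equivalence.
# Synthetic SNP dosages and a sparse simulated genetic signal.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(7)
n, p = 500, 2000
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)   # SNP dosages 0/1/2
X = (X - X.mean(0)) / (X.std(0) + 1e-9)               # standardize markers
beta = np.zeros(p)
beta[rng.choice(p, 20, replace=False)] = rng.normal(0, 1, 20)
y = X @ beta + rng.normal(0, 2.0, n)                  # phenotype = signal + noise

gblup = Ridge(alpha=p * 1.0).fit(X[:400], y[:400])    # equal shrinkage of all SNPs
gebv = gblup.predict(X[400:])                         # genomic breeding values
print("prediction accuracy:", np.corrcoef(gebv, y[400:])[0, 1].round(2))
```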
Simulating effects of microtopography on wetland specific yield and hydroperiod
Summer, David M.; Wang, Xixi
2011-01-01
Specific yield and hydroperiod have proven to be useful parameters in hydrologic analysis of wetlands. Specific yield is a critical parameter to quantitatively relate hydrologic fluxes (e.g., rainfall, evapotranspiration, and runoff) and water level changes. Hydroperiod measures the temporal variability and frequency of land-surface inundation. Conventionally, hydrologic analyses used these concepts without considering the effects of land surface microtopography and assumed a smoothly-varying land surface. However, these microtopographic effects could result in small-scale variations in land surface inundation and water depth above or below the land surface, which in turn affect ecologic and hydrologic processes of wetlands. The objective of this chapter is to develop a physically-based approach for estimating specific yield and hydroperiod that enables the consideration of microtopographic features of wetlands, and to illustrate the approach at sites in the Florida Everglades. The results indicate that the physically-based approach can better capture the variations of specific yield with water level, in particular when the water level falls between the minimum and maximum land surface elevations. The suggested approach for hydroperiod computation predicted that the wetlands might be completely dry or completely wet much less frequently than suggested by the conventional approach neglecting microtopography. One reasonable generalization may be that the hydroperiod approaches presented in this chapter can be a more accurate prediction tool for water resources management to meet the specific hydroperiod threshold as required by a species of plant or animal of interest.
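One way to read the physically based idea is to treat the land surface as a distribution of microtopographic elevations: at a given water level, the inundated fraction of the cell stores water as open water (specific yield of 1) while the rest stores it in the soil. The sketch below illustrates that composite; the elevation distribution and the soil specific yield are illustrative assumptions, not the chapter's Everglades parameters.

```python
# Hedged sketch: composite specific yield over a microtopographic surface.
# Elevations and Sy_soil are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
z = rng.normal(0.0, 0.10, 10_000)     # land-surface elevations (m), microtopography
sy_soil = 0.25                        # assumed soil specific yield

def composite_sy(h):
    f_inundated = np.mean(z < h)      # fraction of area under water at level h
    return f_inundated * 1.0 + (1.0 - f_inundated) * sy_soil

for h in (-0.2, 0.0, 0.2):            # water levels spanning the surface range
    print(f"water level {h:+.1f} m -> Sy = {composite_sy(h):.2f}")
```

As the water level rises from below the lowest to above the highest land surface, the composite specific yield transitions smoothly from the soil value toward 1, which is the behavior the chapter attributes to its physically based approach.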
Kim, Hyo Jin; Turner, Timothy Lee; Jin, Yong-Su
2013-11-01
Recent advances in metabolic engineering have enabled microbial factories to compete with conventional processes for producing fuels and chemicals. Both rational and combinatorial approaches coupled with synthetic and systematic tools play central roles in metabolic engineering to create and improve a selected microbial phenotype. Compared to knowledge-based rational approaches, combinatorial approaches exploiting biological diversity and high-throughput screening have been demonstrated as more effective tools for improving various phenotypes of interest. In particular, identification of unprecedented targets to rewire metabolic circuits for maximizing yield and productivity of a target chemical has been made possible. This review highlights general principles and the features of the combinatorial approaches using various libraries to implement desired phenotypes for strain improvement. In addition, recent applications that harnessed the combinatorial approaches to produce biofuels and biochemicals will be discussed. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Saak, Aaron Wilbur
The objective of this research is to better understand the important mechanisms that control the rheology of cement paste. In order to understand these mechanisms, new experimental techniques are developed. The insights gained through these studies are then applied toward designing self-flowing materials, particularly self-compacting concrete (SCC). A new testing program is developed where both the peak and equilibrium stress flow curves of cement paste are obtained by testing only one sample. Additionally, the influence of wall slip on yield stress and viscoelastic measurements is determined using a vane. The results indicate that a slip layer develops when the shear stress approaches the yield point. A three-dimensional model relating slump to yield stress is derived as a function of cone geometry. The results indicate that the model fits experimental data for cylindrical slumps over a wide range of yield stress values for a variety of materials. When compared to other published models, the results suggest that a fundamental relationship exists between yield stress and slump that is material independent and largely independent of cone geometry. The effect of various mixing techniques on the rheology of cement paste is investigated using a rheometer as a highly controlled mixer. The results suggest that there is a characteristic shear rate where the viscosity of cement paste is minimized. The influence of particle packing density, morphology and surface area on the viscosity of cement paste is quantified. The data suggest that even though packing density increases with the addition of fine particles, the benefits are largely overshadowed by a dramatic increase in surface area. Finally, a new methodology is introduced for designing self-compacting concrete. This approach incorporates a "self-flow zone" where the rheology of the paste matrix provides high workability while maintaining segregation resistance. The flow properties of fresh concrete are measured using a U-tube apparatus to test the general applicability of the proposed methodology. Using the new design approach, concrete with a slump of 29 cm (11 inches) and slump flow diameter of 60.9 cm (24 inches) is produced.
Paul, Matthew J; Oszvald, Maria; Jesus, Claudia; Rajulu, Charukesi; Griffiths, Cara A
2017-07-20
Food security is a pressing global issue. New approaches are required to break through a yield ceiling that has developed in recent years for the major crops. As important as increasing yield potential is the protection of yield from abiotic stresses in an increasingly variable and unpredictable climate. Current strategies to improve yield include conventional breeding, marker-assisted breeding, quantitative trait loci (QTLs), mutagenesis, creation of hybrids, genetic modification (GM), emerging genome-editing technologies, and chemical approaches. A regulatory mechanism amenable to three of these approaches has great promise for large yield improvements. Trehalose 6-phosphate (T6P) synthesized in the low-flux trehalose biosynthetic pathway signals the availability of sucrose in plant cells as part of a whole-plant sucrose homeostatic mechanism. Modifying T6P content by GM, marker-assisted selection, and novel chemistry has improved yield in three major cereals under a range of water availabilities from severe drought through to flooding. Yield improvements have been achieved by altering carbon allocation and how carbon is used. Targeting T6P both temporally and spatially offers great promise for large yield improvements in productive (up to 20%) and marginal environments (up to 120%). This opinion paper highlights this important breakthrough in fundamental science for crop improvement.
Hubmann, Georg; Guillouet, Stephane; Nevoigt, Elke
2011-01-01
Gpd1 and Gpd2 are the two isoforms of glycerol 3-phosphate dehydrogenase (GPDH), which is the rate-controlling enzyme of glycerol formation in Saccharomyces cerevisiae. The two isoenzymes play crucial roles in osmoregulation and redox balancing. Past approaches to increase ethanol yield at the cost of reduced glycerol yield have most often been based on deletion of either one or two isogenes (GPD1 and GPD2). While single deletions of GPD1 or GPD2 reduced glycerol formation only slightly, the gpd1Δ gpd2Δ double deletion strain produced zero glycerol but showed an osmosensitive phenotype and abolished anaerobic growth. Our current approach has sought to generate “intermediate” phenotypes by reducing both isoenzyme activities without abolishing them. To this end, the GPD1 promoter was replaced in a gpd2Δ background by two lower-strength TEF1 promoter mutants. In the same manner, the activity of the GPD2 promoter was reduced in a gpd1Δ background. The resulting strains were crossed to obtain different combinations of residual GPD1 and GPD2 expression levels. Among our engineered strains we identified four candidates showing improved ethanol yields compared to the wild type. In contrast to a gpd1Δ gpd2Δ double-deletion strain, these strains were able to completely ferment the sugars under quasi-anaerobic conditions in both minimal medium and during simultaneous saccharification and fermentation (SSF) of liquefied wheat mash (wheat liquefact). This result implies that our strains can tolerate the ethanol concentration at the end of the wheat liquefact SSF (up to 90 g liter−1). Moreover, a few of these strains showed no significant reduction in osmotic stress tolerance compared to the wild type. PMID:21724879
Mittal, Vineet; Nanda, Arun
2017-12-01
Marrubium vulgare Linn (Lamiaceae) has generally been extracted by conventional methods with a low yield of marrubiin; these processes are not considered environment friendly. In this study, the whole plant of M. vulgare was extracted by microwave-assisted extraction (MAE), and the effect of various extraction parameters on the marrubiin yield was optimized using a central composite design (CCD). The selected medicinal plant was extracted using ethanol:water (1:1) as solvent by MAE. The plant material was also extracted using a Soxhlet apparatus, and the various extracts were analyzed by HPTLC to quantify the marrubiin concentration. The optimized conditions for the microwave-assisted extraction of the selected medicinal plant were a microwave power of 539 W, an irradiation time of 373 s, and a solvent-to-drug ratio of 32 mL per g of the drug. The marrubiin concentration with MAE almost doubled relative to the traditional method (from 0.69 ± 0.08 to 1.35 ± 0.04%). The IC50 for DPPH was reduced to 66.28 ± 0.6 μg/mL as compared with the conventional extract (84.14 ± 0.7 μg/mL). The scanning electron micrographs of the treated and untreated drug samples further support the results. The CCD can be successfully applied to optimize the extraction parameters (MAE) for M. vulgare. Moreover, in terms of environmental impact, the MAE technique can be considered a 'green approach', because extraction of the plant by MAE released only 92.3 g of CO2, compared with 3207.6 g of CO2 using the Soxhlet method of extraction.
Zhang, Yatao; Wei, Shoushui; Liu, Hai; Zhao, Lina; Liu, Chengyu
2016-09-01
The Lempel-Ziv (LZ) complexity and its variants have been extensively used to analyze the irregularity of physiological time series. To date, these measures cannot explicitly discern between the irregularity and the chaotic characteristics of physiological time series. Our study compared the performance of an encoding LZ (ELZ) complexity algorithm, a novel variant of the LZ complexity algorithm, with those of the classic LZ (CLZ) and multistate LZ (MLZ) complexity algorithms. Simulation experiments on Gaussian noise, logistic chaotic, and periodic time series showed that only the ELZ algorithm monotonically declined with the reduction in irregularity in time series, whereas the CLZ and MLZ approaches yielded overlapped values for chaotic time series and time series mixed with Gaussian noise, demonstrating the accuracy of the proposed ELZ algorithm in capturing the irregularity, rather than the complexity, of physiological time series. In addition, the effect of sequence length on the ELZ algorithm was more stable compared with those on CLZ and MLZ, especially when the sequence length was longer than 300. A sensitivity analysis for all three LZ algorithms revealed that both the MLZ and the ELZ algorithms could respond to the change in time sequences, whereas the CLZ approach could not. Cardiac interbeat (RR) interval time series from the MIT-BIH database were also evaluated, and the results showed that the ELZ algorithm could accurately measure the inherent irregularity of the RR interval time series, as indicated by lower LZ values yielded from a congestive heart failure group versus those yielded from a normal sinus rhythm group (p < 0.01).
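For orientation, the classic LZ76 parsing that the CLZ, MLZ, and ELZ variants build on can be written compactly; the sketch below counts distinct phrases in a binarised toy series and illustrates only the CLZ baseline, not the authors' encoding scheme.

```python
import numpy as np

def lz76_complexity(s: str) -> int:
    """Count phrases in the Lempel-Ziv 1976 parsing of a symbol string."""
    i, c = 0, 0
    n = len(s)
    while i < n:
        l = 1
        # extend the current phrase while it already occurs in the prefix
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

# binarise a toy "RR interval" series around its median, then parse
rng = np.random.default_rng(0)
rr = rng.normal(0.8, 0.05, size=500)
symbols = "".join("1" if x > np.median(rr) else "0" for x in rr)
print("LZ76 phrase count:", lz76_complexity(symbols))
```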
McNally, Amy; Husak, Gregory J.; Brown, Molly; Carroll, Mark L.; Funk, Christopher C.; Yatheendradas, Soni; Arsenault, Kristi; Peters-Lidard, Christa; Verdin, James
2015-01-01
The Soil Moisture Active Passive (SMAP) mission will provide soil moisture data with unprecedented accuracy, resolution, and coverage, enabling models to better track agricultural drought and estimate yields. In turn, this information can be used to shape policy related to food and water from commodity markets to humanitarian relief efforts. New data alone, however, do not translate to improvements in drought and yield forecasts. New tools will be needed to transform SMAP data into agriculturally meaningful products. The objective of this study is to evaluate the possibility and efficiency of replacing the rainfall-derived soil moisture component of a crop water stress index with SMAP data. The approach is demonstrated with 0.1°-resolution, ~10-day microwave soil moisture from the European Space Agency and simulated soil moisture from the Famine Early Warning Systems Network Land Data Assimilation System. Over a West Africa domain, the approach is evaluated by comparing the different soil moisture estimates and their resulting Water Requirement Satisfaction Index values from 2000 to 2010. This study highlights how the ensemble of indices performs during wet versus dry years, over different land-cover types, and the correlation with national-level millet yields. The new approach is a feasible and useful way to quantitatively assess how satellite-derived rainfall and soil moisture track agricultural water deficits. Given the importance of soil moisture in many applications, ranging from agriculture to public health to fire, this study should inspire other modeling communities to reformulate existing tools to take advantage of SMAP data.
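A minimal single-bucket sketch of a Water Requirement Satisfaction Index of the kind described, with hypothetical rainfall, reference ET, and crop coefficients; the operational FEWS NET WRSI is considerably more detailed.

```python
import numpy as np

def wrsi(rain, pet, kc, whc=100.0):
    """Single-bucket Water Requirement Satisfaction Index (0-100).

    rain, pet: per-dekad rainfall and reference ET (mm); kc: crop coefficient.
    A soil moisture bucket of capacity `whc` supplies actual ET; WRSI is the
    seasonal ratio of supplied water to the crop water requirement.
    """
    sw, aet_sum, req_sum = whc / 2.0, 0.0, 0.0
    for r, p, k in zip(rain, pet, kc):
        req = k * p
        sw = min(sw + r, whc)            # infiltrate rain, spill the excess
        aet = min(req, sw)               # supply the demand from the bucket
        sw -= aet
        aet_sum += aet
        req_sum += req
    return 100.0 * aet_sum / req_sum

rng = np.random.default_rng(2)
rain = rng.gamma(2.0, 15.0, size=12)     # 12 dekads of hypothetical rainfall (mm)
pet = np.full(12, 45.0)                  # reference ET per dekad (mm)
kc = np.interp(np.arange(12), [0, 4, 8, 11], [0.3, 1.1, 1.1, 0.5])
print(f"WRSI = {wrsi(rain, pet, kc):.0f}")
```

Replacing the rain-driven bucket update with a satellite soil moisture estimate at each step is the substitution the study evaluates.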
Balamurugan, Appakalai N; Green, Michael L; Breite, Andrew G; Loganathan, Gopalakrishnan; Wilhelm, Joshua J; Tweed, Benjamin; Vargova, Lenka; Lockridge, Amber; Kuriti, Manikya; Hughes, Michael G; Williams, Stuart K; Hering, Bernhard J; Dwulet, Francis E; McCarthy, Robert C
2016-01-01
Isolation of a good manufacturing practice-compliant human islet product requires development of a robust islet isolation procedure in which the effective limits of key reagents are known. The enzymes used for islet isolation are critical but little is known about the doses of class I and class II collagenase required for successful islet isolation. We used a factorial approach to evaluate the effect of high and low target activities of recombinant class I (rC1) and class II (rC2) collagenase on human islet yield. Consequently, 4 different enzyme formulations with divergent C1:C2 collagenase mass ratios were assessed, each supplemented with the same dose of neutral protease. Both split pancreas and whole pancreas models were used to test enzyme targets (n = 20). Islet yield/g pancreas was compared with historical enzymes (n = 42). Varying the Wunsch (rC2) and collagen degradation activity (CDA, rC1) target dose, and consequently the C1:C2 mass ratio, had no significant effect on tissue digestion. Digestions using higher doses of Wunsch and CDA resulted in comparable islet yields to those obtained with 60% and 50% of those activities, respectively. Factorial analysis revealed no significant main effect of Wunsch activity or CDA for any parameter measured. Aggregate results from 4 different collagenase formulations gave 44% higher islet yield (>5000 islet equivalents/g) in the body/tail of the pancreas (n = 12) when compared with those from the same segment using a standard natural collagenase/protease mixture (n = 6). Additionally, islet yields greater than 5000 islet equivalents/g pancreas were also obtained in whole human pancreas. A broader C1:C2 ratio can be used for human islet isolation than has been used in the past. Recombinant collagenase is an effective replacement for the natural enzyme and we have determined that high islet yield can be obtained even with low doses of rC1:rC2, which is beneficial for the survival of islets.
Model-based control of dynamic atomic force microscope.
Lee, Chibum; Salapaka, Srinivasa M
2015-04-01
A model-based robust control approach is proposed that significantly improves imaging bandwidth for dynamic-mode atomic force microscopy. A model for cantilever oscillation amplitude and phase dynamics is derived and used for the control design. In particular, the control design is based on a linearized model and robust H∞ control theory. This design yields a significant improvement over conventional proportional-integral designs and is verified by experiments.
Kumar, Arvind; Dixit, Shalabh; Ram, T.; Yadaw, R. B.; Mishra, K. K.; Mandal, N. P.
2014-01-01
The increased occurrence and severity of drought stress have led to a high yield decline in rice in recent years in drought-affected areas. Drought research at the International Rice Research Institute (IRRI) over the past decade has concentrated on direct selection for grain yield under drought. This approach has led to the successful development and release of 17 high-yielding drought-tolerant rice varieties in South Asia, Southeast Asia, and Africa. In addition, 14 quantitative trait loci (QTLs) showing a large effect against high-yielding drought-susceptible popular varieties were identified using grain yield as a selection criterion. Six of these (qDTY1.1, qDTY2.2, qDTY3.1, qDTY3.2, qDTY6.1, and qDTY12.1) showed an effect against two or more high-yielding genetic backgrounds in both the lowland and upland ecosystems, indicating their usefulness in increasing the grain yield of rice under drought. The yield of the popular rice varieties IR64 and Vandana has been successfully improved through a well-planned marker-assisted backcross breeding approach, and QTL introgression into several other popular varieties is in progress. The identification of large-effect QTLs for grain yield under drought and the higher yield increase under drought obtained through the use of these QTLs (which has not been reported in other cereals) indicate that rice, because of its continuous cultivation in two diverse ecosystems (upland, drought tolerant, and lowland, drought susceptible), has benefited from the existence of larger genetic variability than other cereals. This can be successfully exploited using marker-assisted breeding. PMID:25205576
NASA Technical Reports Server (NTRS)
Tarabalka, Y.; Tilton, J. C.; Benediktsson, J. A.; Chanussot, J.
2012-01-01
The Hierarchical SEGmentation (HSEG) algorithm, which combines region object finding with region object clustering, has given good performance for multi- and hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. Two classification-based approaches for automatic marker selection are adapted and compared for this purpose. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. Three different implementations of the M-HSEG method are proposed and their performances in terms of classification accuracy are compared. The experimental results, presented for three hyperspectral airborne images, demonstrate that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for remote sensing image analysis.
NASA Astrophysics Data System (ADS)
Yi, Bowen; Lin, Shuyi; Yang, Bo; Zhang, Weidong
2018-02-01
This paper presents an output feedback indirect dynamic inversion (IDI) approach for a class of uncertain nonaffine systems with input unmodelled dynamics. Compared with previous approaches to achieve performance recovery, the proposed method aims at dealing with a broader class of nonaffine-in-control systems with triangular structure. An IDI state feedback law is designed first, in which less knowledge of the model plant is needed compared to earlier approximate dynamic inversion methods, thus yielding more robust performance. After that, an extended high-gain observer is designed to accomplish the task with output feedback. Finally, we prove that the designed IDI controller is equivalent to an adaptive proportional-integral (PI) controller, with respect to both time response equivalence and robustness equivalence. The conclusion implies that for the studied strict-feedback nonaffine systems with unmodelled dynamics, there always exists a PI controller to stabilise the system. The effectiveness and benefits of the designed approach are verified by three examples.
Modeling and Calibration of a Novel One-Mirror Galvanometric Laser Scanner
Yu, Chengyi; Chen, Xiaobo; Xi, Juntong
2017-01-01
A laser stripe sensor has limited application when a point cloud of geometric samples on the surface of the object needs to be collected, so a galvanometric laser scanner is designed by using a one-mirror galvanometer element as its mechanical device to drive the laser stripe to sweep along the object. A novel mathematical model is derived for the proposed galvanometer laser scanner without any position assumptions and then a model-driven calibration procedure is proposed. Compared with available model-driven approaches, the influence of machining and assembly errors is considered in the proposed model. Meanwhile, a plane-constraint-based approach is proposed to extract a large number of calibration points effectively and accurately to calibrate the galvanometric laser scanner. Repeatability and accuracy of the galvanometric laser scanner are evaluated on the automobile production line to verify the efficiency and accuracy of the proposed calibration method. Experimental results show that the proposed calibration approach yields similar measurement performance compared with a look-up table calibration method. PMID:28098844
Verveer, P. J; Gemkow, M. J; Jovin, T. M
1999-01-01
We have compared different image restoration approaches for fluorescence microscopy. The most widely used algorithms were classified with a Bayesian theory according to the assumed noise model and the type of regularization imposed. We considered both Gaussian and Poisson models for the noise in combination with Tikhonov regularization, entropy regularization, Good's roughness, and no regularization (maximum likelihood estimation). Simulations of fluorescence confocal imaging were used to examine the different noise models and regularization approaches using the mean squared error criterion. The assumption of a Gaussian noise model yielded only slightly higher errors than the Poisson model. Good's roughness was the best choice for the regularization. Furthermore, we compared simulated confocal and wide-field data. In general, restored confocal data are superior to restored wide-field data, but given a sufficiently higher signal level for the wide-field data, the restoration result may rival that of confocal data in quality. Finally, a visual comparison of experimental confocal and wide-field data is presented.
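Maximum-likelihood restoration under the Poisson noise model discussed above is commonly realised as the Richardson-Lucy iteration; a self-contained sketch on a synthetic bead image (unregularised, so not equivalent to the Good's-roughness variant favoured here) is:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=30):
    """Unregularised ML deconvolution under a Poisson noise model."""
    est = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(est, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        est *= fftconvolve(ratio, psf_mirror, mode="same")
    return est

# toy "bead" image blurred by a Gaussian PSF with Poisson noise
rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[20, 20] = truth[40, 44] = 200.0
x = np.arange(-7, 8)
g = np.exp(-(x ** 2) / 8.0)
psf = np.outer(g, g)
psf /= psf.sum()
observed = rng.poisson(fftconvolve(truth, psf, mode="same") + 1.0).astype(float)
restored = richardson_lucy(observed, psf)
print("observed peak:", observed.max(), "restored peak:", round(float(restored.max()), 1))
```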
NASA Astrophysics Data System (ADS)
Boscolo, D.; Krämer, M.; Durante, M.; Fuss, M. C.; Scifoni, E.
2018-04-01
The production, diffusion, and interaction of particle-beam-induced water-derived radicals are studied with the pre-chemical and chemical modules of the Monte Carlo particle track structure code TRAX, based on a step-by-step approach. After a description of the implemented model, the chemical evolution of the most important products of water radiolysis is studied for electron, proton, helium, and carbon ion radiation at different energies. The validity of the model is verified by comparing the calculated time- and LET-dependent yields with experimental data from the literature and other simulation approaches.
Infiltration of MHD liquid into a deformable porous material
NASA Astrophysics Data System (ADS)
Naseem, Anum; Mahmood, Asif; Siddique, J. I.; Zhao, Lifeng
2018-03-01
We analyze the capillary rise dynamics of magnetohydrodynamic (MHD) fluid flow through a deformable porous material in the presence of gravity effects. The modeling is performed using a mixture theory approach, and mathematical manipulation yields a nonlinear free boundary problem. Due to the capillary rise action, the pressure gradient in the liquid generates a stress gradient that results in deformation of the porous substrate. The capillary rise process for the MHD fluid slows down compared with the Newtonian fluid case. Numerical solutions are obtained using a method-of-lines approach. Graphical results are presented for the important physical parameters, and a comparison is made with the Newtonian fluid case.
Numerical simulations of incompressible laminar flows using viscous-inviscid interaction procedures
NASA Astrophysics Data System (ADS)
Shatalov, Alexander V.
The present method is based on Helmholtz velocity decomposition where velocity is written as a sum of irrotational (gradient of a potential) and rotational (correction due to vorticity) components. Substitution of the velocity decomposition into the continuity equation yields an equation for the potential, while substitution into the momentum equations yields equations for the velocity corrections. A continuation approach is used to relate the pressure to the gradient of the potential through a modified Bernoulli's law, which allows the elimination of the pressure variable from the momentum equations. The present work considers steady and unsteady two-dimensional incompressible flows over an infinite cylinder and NACA 0012 airfoil shape. The numerical results are compared against standard methods (stream function-vorticity and SMAC methods) and data available in literature. The results demonstrate that the proposed formulation leads to a good approximation with some possible benefits compared to the available formulations. The method is not restricted to two-dimensional flows and can be used for viscous-inviscid domain decomposition calculations.
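In symbols (notation assumed here, since the abstract fixes none), the decomposition and the equations it induces read:

```latex
\mathbf{u} = \nabla\phi + \mathbf{u}', \qquad
\nabla\cdot\mathbf{u} = 0 \;\Rightarrow\; \nabla^{2}\phi = -\nabla\cdot\mathbf{u}',
\qquad
\frac{p}{\rho} + \frac{\partial\phi}{\partial t} + \tfrac{1}{2}\lvert\mathbf{u}\rvert^{2} \approx C(t)
```

The last relation is only a schematic stand-in for the modified Bernoulli law used to eliminate the pressure; the exact form employed in the work is not given in the abstract.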
Point cloud registration from local feature correspondences: evaluation on challenging datasets.
Petricek, Tomas; Svoboda, Tomas
2017-01-01
Registration of laser scans, or point clouds in general, is a crucial step of localization and mapping with mobile robots or in object modeling pipelines. A coarse alignment of the point clouds is generally needed before applying local methods such as the Iterative Closest Point (ICP) algorithm. We propose a feature-based approach to point cloud registration and evaluate the proposed method and its individual components on challenging real-world datasets. For a moderate overlap between the laser scans, the method provides a superior registration accuracy compared to state-of-the-art methods including Generalized ICP, 3D Normal-Distribution Transform, Fast Point-Feature Histograms, and 4-Points Congruent Sets. Compared to the surface normals, the points as the underlying features yield higher performance in both keypoint detection and establishing local reference frames. Moreover, sign disambiguation of the basis vectors proves to be an important aspect in creating repeatable local reference frames. A novel method for sign disambiguation is proposed which yields highly repeatable reference frames.
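Once correspondences between local features are available, the coarse rigid alignment is typically recovered in closed form; a minimal Kabsch-style sketch on synthetic correspondences (no outlier rejection, unlike a full registration pipeline) is:

```python
import numpy as np

def kabsch(P, Q):
    """Rigid transform (R, t) best mapping corresponding points P onto Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

rng = np.random.default_rng(3)
P = rng.normal(size=(100, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.5, -0.2, 1.0]) + rng.normal(scale=0.01, size=(100, 3))
R, t = kabsch(P, Q)
print("rotation error:", np.linalg.norm(R - R_true).round(4))
```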
Das, Lalitendu; Liu, Enshi; Saeed, Areej; Williams, David W; Hu, Hongqiang; Li, Chenlin; Ray, Allison E; Shi, Jian
2017-11-01
This study takes combined field trial, lab experiment, and economic analysis approaches to evaluate the potential of industrial hemp in comparison with kenaf, switchgrass and biomass sorghum. Agronomy data suggest that the per hectare yield (5437 kg) of industrial hemp stem alone was at a similar level to switchgrass and sorghum, while the hemp plants require reduced inputs. The field trial also showed that ~1230 kg/ha of hemp grain can be harvested in addition to stems. Results show a predicted ethanol yield of ~82 gallons/dry ton of hemp stems, which is comparable to the other three tested feedstocks. A comparative cost analysis indicates that industrial hemp could generate higher per hectare gross profit than the other crops if both hemp grains and biofuels from hemp stem were counted. These combined evaluation results demonstrate that industrial hemp has great potential to become a promising regional commodity crop for producing both biofuels and value-added products.
Multilevel joint competing risk models
NASA Astrophysics Data System (ADS)
Karunarathna, G. H. S.; Sooriyarachchi, M. R.
2017-09-01
Joint modeling approaches are often encountered for different outcomes of competing risk time-to-event and count data in many biomedical and epidemiological studies in the presence of cluster effects. Hospital length of stay (LOS) is a widely used outcome measure of hospital utilization, and it must accommodate multiple terminations such as discharge, transfer, death, and patients who have not completed the event of interest by the end of follow-up (censored) during hospitalization. Competing risk models provide a method of addressing such multiple destinations, since classical time-to-event models yield biased results when there are multiple events. In this study, the concept of joint modeling has been applied to dengue epidemiology in Sri Lanka, 2006-2008, to assess the relationship between the different outcomes of LOS and the platelet count of dengue patients, with district as the cluster effect. Two key approaches have been applied to build the joint scenario. In the first approach, each competing risk is modeled separately using a binary logistic model, treating all other events as censored under a multilevel discrete time-to-event model, while the platelet counts are assumed to follow a lognormal regression model. The second approach is based on the endogeneity effect in the multilevel competing risks and count model. Model parameters were estimated using maximum likelihood based on the Laplace approximation. Moreover, the study reveals that the joint modeling approach yields more precise results than fitting two separate univariate models, in terms of AIC (Akaike Information Criterion).
How does spatial and temporal resolution of vegetation index impact crop yield estimation?
USDA-ARS's Scientific Manuscript database
Timely and accurate estimation of crop yield before harvest is critical for food markets and administrative planning. Remote sensing data have been used in crop yield estimation for decades. The process-based approach uses a light use efficiency model to estimate crop yield. Vegetation index (VI) ...
NASA Astrophysics Data System (ADS)
Cohn, A.; Bragança, A.; Jeffries, G. R.
2017-12-01
An increasing share of global agricultural production can be found in the humid tropics. Therefore, an improved understanding of the mechanisms governing variability in the output of tropical agricultural systems is of increasing importance for food security, including through climate change adaptation. Yet the long window over which many tropical crops can be sown and the diversity of crop varieties and management practices combine to challenge inference into climate risk to cropping output in analyses of tropical crop-climate sensitivity employing administrative data. In this paper, we leverage a newly developed spatially explicit dataset of soybean yields in Brazil to address this problem. The dataset was built by training a model of remotely sensed vegetation index data and land cover classification data on a rich in situ dataset of soybean yield and management variables collected over the period 2006 to 2016. The dataset contains soybean yields by planting date, cropping frequency, and maturity group for each 5 km grid cell in Brazil. We model variation in these yields using an approach that enables estimation of the influence of management factors on the sensitivity of soybean yields to variability in cumulative solar radiation, extreme degree days, growing degree days, flooding rain in the harvest period, and dry spells in the rainy season. We find strong variation in climate sensitivity by management class. Planting date and maturity group each explained a great deal more variation in yield sensitivity than did cropping frequency. Brazil collects comparatively fine spatial resolution yield data, but our attempt to replicate our results using administrative soy yield data revealed substantially lower crop-climate sensitivity, suggesting that previous analyses employing administrative data may have underestimated climate risk to tropical soy production.
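The degree-day covariates used in such yield regressions are simple temperature integrals; a sketch with assumed soy-style thresholds of 10 °C and 30 °C (values not taken from the paper):

```python
import numpy as np

def degree_days(tmean, base=10.0, upper=30.0):
    """Growing degree days (capped at `upper`) and extreme degree days above it."""
    gdd = np.clip(tmean, base, upper) - base     # growth-effective warmth
    edd = np.maximum(tmean - upper, 0.0)         # heat beyond the damage threshold
    return gdd.sum(), edd.sum()

rng = np.random.default_rng(4)
season = rng.normal(26.0, 4.0, size=120)         # daily mean temps for one season
gdd, edd = degree_days(season)
print(f"GDD = {gdd:.0f}, EDD = {edd:.0f}")
```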
Yang, Lei; Sun, Xiaowei; Yang, Fengjian; Zhao, Chunjian; Zhang, Lin; Zu, Yuangang
2012-01-01
Ionic liquid based, microwave-assisted extraction (ILMAE) was successfully applied to the extraction of proanthocyanidins from Larix gmelini bark. In this work, in order to evaluate the performance of ionic liquids in the microwave-assisted extraction process, a series of 1-alkyl-3-methylimidazolium ionic liquids with different cations and anions were evaluated for extraction yield, and 1-butyl-3-methylimidazolium bromide was selected as the optimal solvent. In addition, the ILMAE procedure for the proanthocyanidins was optimized and compared with other conventional extraction techniques. Under the optimized conditions, a satisfactory extraction yield of the proanthocyanidins was obtained. Relative to other methods, the proposed approach provided higher extraction yield and lower energy consumption. The Larix gmelini bark samples before and after extraction were analyzed by thermogravimetric analysis and Fourier-transform infrared spectroscopy, and characterized by scanning electron microscopy. The results showed that the ILMAE method is a simple and efficient technique for sample preparation. PMID:22606036
Jet-induced medium excitation in γ-hadron correlation at RHIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Wei; Cao, Shanshan; Luo, Tan
Both jet transport and jet-induced medium excitation are investigated simultaneously within the coupled Linear Boltzmann Transport and hydro (CoLBT-hydro) model. In this coupled approach, energy-momentum deposition from propagating jet shower partons in the elastic and radiation processes is taken as a source term in hydrodynamics, and the hydro background for the LBT simulation is updated for the next time step. We use the CoLBT-hydro model to simulate γ-jet events in Au+Au collisions at RHIC. Hadron spectra from both the hadronization of jet shower partons and jet-induced medium excitation are calculated and compared to experimental data. Parton energy loss of jet shower partons leads to the suppression of hadron yields at large z_T = p_T^h/p_T^γ, while medium excitation leads to enhancement of hadron yields at small z_T. Meanwhile, a significant broadening of low-p_T hadron yields and the depletion of soft hadrons in the γ direction are observed in the calculation of the γ-hadron angular correlation.
Comparison of a gasless unilateral axillo-breast and axillary approach in robotic thyroidectomy.
Song, Chang Myeon; Cho, Yong Hee; Ji, Yong Bae; Jeong, Jin Hyeok; Kim, Dong Sun; Tae, Kyung
2013-10-01
New approaches to robotic thyroidectomy help to prevent neck scarring and improve surgical ergonomics. The purpose of this study was to compare the efficacy and advantages of a gasless unilateral axillary (GUA) approach and an axillo-breast (GUAB) approach in robotic thyroidectomy. We retrospectively reviewed the data of 131 patients who underwent robotic thyroidectomy with or without central neck dissection using a GUAB (90 cases) or GUA (41 cases) approach between September 2009 and December 2011. We excluded patients who underwent simultaneous lateral neck dissection and cases within the learning curve. We compared patient and tumor characteristics, surgical outcomes, perioperative complications, and cosmetic satisfaction between the two approaches. Robotic thyroidectomy was successful in all patients. There were no differences in terms of patient and tumor characteristics, extent of thyroidectomy and central neck dissection, operative time, and postoperative complications between the two approaches. Cosmetic satisfaction was excellent in both groups. There was no difference in satisfaction with the cosmetic result in the neck area, but the GUA patients expressed higher satisfaction with the appearance of the breast. The surgical outcomes of GUA and GUAB approaches are similar in robotic thyroidectomy. Both are safe, effective, and yield cosmetically excellent results when performed by an experienced robotic thyroid surgeon. However, a GUA approach is associated with superior cosmetic satisfaction with the appearance of the breast.
Causal Models for Mediation Analysis: An Introduction to Structural Mean Models.
Zheng, Cheng; Atkins, David C; Zhou, Xiao-Hua; Rhew, Isaac C
2015-01-01
Mediation analyses are critical to understanding why behavioral interventions work. To yield a causal interpretation, common mediation approaches must make an assumption of "sequential ignorability." The current article describes an alternative approach to causal mediation called structural mean models (SMMs). A specific SMM called a rank-preserving model (RPM) is introduced in the context of an applied example. Particular attention is given to the assumptions of both approaches to mediation. Applying both mediation approaches to the college student drinking data yields notable differences in the magnitude of effects. Simulated examples reveal instances in which the traditional approach can yield strongly biased results, whereas the RPM approach remains unbiased in these cases. At the same time, the RPM approach has its own assumptions that must be met for correct inference, such as the existence of a covariate that strongly moderates the effect of the intervention on the mediator and no unmeasured confounders that also serve as a moderator of the effect of the intervention or the mediator on the outcome. The RPM approach to mediation offers an alternative way to perform mediation analysis when there may be unmeasured confounders.
Development of LACIE CCEA-1 weather/wheat yield models. [regression analysis
NASA Technical Reports Server (NTRS)
Strommen, N. D.; Sakamoto, C. M.; Leduc, S. K.; Umberger, D. E. (Principal Investigator)
1979-01-01
The advantages and disadvantages of the causal (phenological, dynamic, physiological), statistical regression, and analog approaches to modeling grain yield are examined. Given LACIE's primary goal of estimating wheat production for the large areas of eight major wheat-growing regions, the statistical regression approach of correlating historical yield and climate data offered the Center for Climatic and Environmental Assessment the greatest potential return within the constraints of time and data sources. The basic equation for the first-generation wheat-yield model is given. Topics discussed include truncation, the trend variable, selection of weather variables, episodic events, strata selection, operational data flow, weighting, and model results.
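A first-generation model of this statistical regression type amounts to ordinary least squares on historical yields with a technology trend plus weather covariates; a toy sketch on synthetic data (the covariates and coefficients are hypothetical, not the CCEA-1 specification):

```python
import numpy as np

rng = np.random.default_rng(5)
years = np.arange(1950, 1976)
precip = rng.normal(300.0, 60.0, size=years.size)    # growing-season rainfall, mm
temp = rng.normal(18.0, 1.5, size=years.size)        # mean temperature, deg C
trend = years - years[0]                             # technology trend term
yield_obs = (10.0 + 0.15 * trend + 0.01 * precip - 0.3 * temp
             + rng.normal(scale=0.8, size=years.size))

# design matrix: intercept, trend, weather variables
X = np.column_stack([np.ones_like(trend), trend, precip, temp])
beta, *_ = np.linalg.lstsq(X, yield_obs, rcond=None)
print("intercept, trend, precip, temp coefficients:", beta.round(3))
```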
USDA-ARS's Scientific Manuscript database
Data assimilation and regression are two commonly used methods for predicting agricultural yield from remote sensing observations. Data assimilation is a generative approach because it requires explicit approximations of the Bayesian prior and likelihood to compute the probability density function...
Improved expression of recombinant plant-made hEGF.
Thomas, David Rhys; Walmsley, Amanda Maree
2014-11-01
The yield of recombinant hEGF was increased approximately tenfold through a range of optimisations. Further, the recombinant protein was found to have biological activity comparable to commercial hEGF. Human epidermal growth factor (hEGF) is a powerful mitogen that can enhance the healing of a wide range of injuries, including burns, cuts, diabetic ulcers and gastric ulcers. However, despite its clinical value, hEGF is only consistently used for the treatment of chronic diabetic ulcers due to its high cost. In this study, hEGF was transiently expressed in Nicotiana benthamiana plants and targeted to the apoplast, ER and vacuole. Several other approaches were also included in a stepwise fashion to identify the optimal conditions for the expression of recombinant hEGF. Expression was found to be highest in the vacuole, while targeting hEGF to the ER caused a decrease in total soluble protein (TSP). Using a codon optimised sequence was found to increase vacuolar targeted hEGF yield by ~34 %, while it was unable to increase the yield of ER targeted hEGF. The use of the P19 silencing inhibitor was able to further increase expression by over threefold, and using 5-week-old plants significantly increased expression compared to 4- or 6-week-old-plants. The combined effect of these optimisations increased expression tenfold over the initial apoplast targeted construct to an average yield of 6.24 % of TSP. The plant-made hEGF was then shown to be equivalent to commercial E. coli derived hEGF in its ability to promote the proliferation of mouse keratinocytes. This study supports the potential for plants to be used for the commercial production of hEGF, and identifies a potential limitation for the further improvement of recombinant protein yields.
Property-Structure-Processing Relations in Polymeric Materials.
1981-07-31
…appears to increase indefinitely without indicating an actual yield value, and R, which is a measure of the elastic character of the fluid, approaches a limiting value… a linear graph when log r is plotted against log x; i.e., it has x^a behavior at low x. Since a ≠ 1, this does not correspond to the classical yield…
NASA Astrophysics Data System (ADS)
de Barros, Felipe P. J.; Ezzedine, Souheil; Rubin, Yoram
2012-02-01
The significance of conditioning predictions of environmental performance metrics (EPMs) on hydrogeological data in heterogeneous porous media is addressed. Conditioning EPMs on available data reduces uncertainty and increases the reliability of model predictions. We present a rational and concise approach to investigate the impact of conditioning EPMs on data as a function of the location of the environmentally sensitive target receptor, data types and spacing between measurements. We illustrate how the concept of comparative information yield curves introduced in de Barros et al. [de Barros FPJ, Rubin Y, Maxwell R. The concept of comparative information yield curves and its application to risk-based site characterization. Water Resour Res 2009;45:W06401. doi:10.1029/2008WR007324] could be used to assess site characterization needs as a function of flow and transport dimensionality and EPMs. For a given EPM, we show how alternative uncertainty reduction metrics yield distinct gains of information from a variety of sampling schemes. Our results show that uncertainty reduction is EPM dependent (e.g., travel times) and does not necessarily indicate uncertainty reduction in an alternative EPM (e.g., human health risk). The results show how the position of the environmental target, flow dimensionality and the choice of the uncertainty reduction metric can be used to assist in field sampling campaigns.
Meng, Qingfeng; Wang, Hongfei; Yan, Peng; Pan, Junxiao; Lu, Dianjun; Cui, Zhenling; Zhang, Fusuo; Chen, Xinping
2017-01-01
The food supply is being increasingly challenged by climate change and water scarcity. However, incremental changes in traditional cropping systems have achieved only limited success in meeting these multiple challenges. In this study, we applied a systematic approach, using model simulation and data from two groups of field studies conducted in the North China Plain, to develop a new cropping system that improves yield and uses water in a sustainable manner. Due to significant warming, we identified a double-maize (M-M; Zea mays L.) cropping system that replaced the traditional winter wheat (Triticum aestivum L.)-summer maize system. The M-M system improved yield by 14–31% compared with the conventionally managed wheat-maize system, and achieved a similar yield to the incrementally adapted wheat-maize system with optimized cultivars, planting dates, planting density and water management. More importantly, water usage was lower in the M-M system than in the wheat-maize system, and the rate of water usage was sustainable (net groundwater usage was ≤150 mm yr−1). Our study indicates that systematic assessment of adaptation at the cropping system scale has great potential to address the multiple food supply challenges under changing climatic conditions. PMID:28155860
Proctor, Darby; Essler, Jennifer; Pinto, Ana I.; Wismer, Sharon; Stoinski, Tara; Brosnan, Sarah F.; Bshary, Redouan
2012-01-01
The insight that animals' cognitive abilities are linked to their evolutionary history, and hence their ecology, provides the framework for the comparative approach. Despite primates' renowned dietary complexity and social cognition, including cooperative abilities, we demonstrate here that cleaner wrasse outperform three primate species (capuchin monkeys, chimpanzees and orang-utans) in a foraging task involving a choice between two actions, both of which yield identical immediate rewards, but only one of which yields an additional delayed reward. The foraging task decisions involve partner choice in cleaners: they must service visiting client reef fish before resident clients to access both; otherwise the former switch to a different cleaner. Wild-caught adult, but not juvenile, cleaners learned to solve the task quickly and relearned the task when it was reversed. The majority of primates failed to perform above chance after 100 trials, in sharp contrast to previous studies showing that primates easily learn to choose an action that yields immediate double rewards over an alternative action. In conclusion, the adult cleaners' ability to choose a superior action with initially neutral consequences is likely due to repeated exposure in nature, which leads to specific learned optimal foraging decision rules. PMID:23185293
Liang, Kaiming; Zhong, Xuhua; Huang, Nongrong; Lampayan, Rubenito M; Liu, Yanzhuo; Pan, Junfeng; Peng, Bilin; Hu, Xiangyu; Fu, Youqiang
2017-12-31
Nitrogen non-point pollution and greenhouse gas (GHG) emission are major challenges in rice production. This study examined options for both economic and environmental sustainability through optimizing water and N management. Field experiments were conducted to examine crop yields, N use efficiency (NUE), greenhouse gas emissions, and N losses under different N and water management. There were four treatments: zero N input with farmer's water management (N0), farmer's N and water management (FP), optimized N management with farmer's water management (OPTN), and optimized N management with alternate wetting and drying irrigation (OPTN+AWD). Grain yields in the OPTN and OPTN+AWD treatments increased by 13.0-17.3% compared with FP. Ammonia volatilization (AV) was the primary pathway for N loss in all treatments and accounted for over 50% of the total losses. N losses mainly occurred before mid-tillering. N losses through AV, leaching and surface runoff in OPTN were reduced by 18.9-51.6% compared with FP. OPTN+AWD further reduced N losses from surface runoff and leaching by 39.1% and 6.2% in the early rice season, and by 46.7% and 23.5% in the late rice season, respectively, compared with OPTN. The CH4 emissions in OPTN+AWD were 20.4-45.4% lower than in OPTN and FP. The total global warming potential of CH4 and N2O was the lowest in OPTN+AWD. On-farm comparison confirmed that N loss through runoff in OPTN+AWD was reduced by over 40% as compared with FP. OPTN and OPTN+AWD significantly increased grain yield by 6.7-13.9%. These results indicated that optimizing water and N management can be a simple and effective approach for enhancing yield with reduced environmental footprints.
Mo, SangJoon; Lee, Sung-Kwon; Jin, Ying-Yu; Suh, Joo-Won
2016-02-01
FK506, a widely used immunosuppressant, is a 23-membered polyketide macrolide that is produced by several Streptomyces species. The FK506 high-yielding strain Streptomyces sp. RM7011 was developed from the discovered Streptomyces sp. KCCM 11116P by random mutagenesis in our previous study. Transcript expression analysis showed that the transcription levels of tcsA, B, C, and D were increased in Streptomyces sp. RM7011 by 2.1-, 3.1-, 3.3-, and 4.1-fold, respectively, compared with Streptomyces sp. KCCM 11116P. Overexpression of the tcsABCD genes in Streptomyces sp. RM7011 gave rise to an approximately 2.5-fold increase (to 238.1 μg/ml) in the level of FK506 production compared with that of Streptomyces sp. RM7011. When vinyl pentanoate was added to the culture broth of Streptomyces sp. RM7011, the level of FK506 production was approximately 2.2-fold higher (207.7 μg/ml) than that of the unsupplemented fermentation. Furthermore, supplementing the culture broth of Streptomyces sp. RM7011 expressing the tcsABCD genes with vinyl pentanoate resulted in an additional 1.7-fold improvement in the FK506 titer (498.1 μg/ml) compared with that observed under the nonsupplemented condition. Overall, the level of FK506 production was increased approximately 5.2-fold by engineering the supply of allylmalonyl-CoA in the high-yielding strain Streptomyces sp. RM7011, using a combination of tcsABCD overexpression and vinyl pentanoate addition, as compared with Streptomyces sp. RM7011 itself (95.3 μg/ml). Moreover, among the three precursors analyzed, pentanoate was the most effective, supporting the highest FK506 titer in the high-yielding strain Streptomyces sp. RM7011.
Comparative analysis of the secondary electron yield from carbon nanoparticles and pure water medium
NASA Astrophysics Data System (ADS)
Verkhovtsev, Alexey; McKinnon, Sally; de Vera, Pablo; Surdutovich, Eugene; Guatelli, Susanna; Korol, Andrei V.; Rosenfeld, Anatoly; Solov'yov, Andrey V.
2015-04-01
The production of secondary electrons generated by carbon nanoparticles and pure water medium irradiated by fast protons is studied by means of model approaches and Monte Carlo simulations. It is demonstrated that due to a prominent collective response to an external field, the nanoparticles embedded in the medium enhance the yield of low-energy electrons. The maximal enhancement is observed for electrons in the energy range where plasmons, which are excited in the nanoparticles, play the dominant role. Electron yield from a solid carbon nanoparticle composed of fullerite, a crystalline form of C60 fullerene, is demonstrated to be several times higher than that from liquid water. Decay of plasmon excitations in carbon-based nanosystems thus represents a mechanism of increase of the low-energy electron yield, similar to the case of sensitizing metal nanoparticles. This observation gives a hint for investigation of novel types of sensitizers to be composed of metallic and organic parts. Contribution to the Topical Issue "COST Action Nano-IBCT: Nano-scale Processes Behind Ion-Beam Cancer Therapy", edited by Andrey V. Solov'yov, Nigel Mason, Gustavo García and Eugene Surdutovich.
Gasqui, Patrick; Trommenschlager, Jean-Marie
2017-08-21
Milk production in dairy cow udders is a complex and dynamic physiological process that has resisted explanatory modelling thus far. The current standard model, Wood's model, is empirical in nature, represents yield in daily terms, and was published in 1967. Here, we have developed a dynamic and integrated explanatory model that describes milk yield at the scale of the milking session. Our approach allowed us to formally represent and mathematically relate biological features of known relevance while accounting for stochasticity and conditional elements in the form of explicit hypotheses, which could then be tested and validated using real-life data. Using an explanatory mathematical and biological model to explore a physiological process and pinpoint potential problems (i.e., "problem finding"), it is possible to filter out unimportant variables that can be ignored, retaining only those essential to generating the most realistic model possible. Such modelling efforts are multidisciplinary by necessity. It is also helpful downstream because model results can be compared with observed data, via parameter estimation using maximum likelihood and statistical testing using model residuals. The process in its entirety yields a coherent, robust, and thus repeatable, model.
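For context, the empirical Wood (1967) baseline that the authors contrast with models daily yield as y(t) = a·t^b·e^(−ct); a fitting sketch on synthetic lactation data (all parameter values hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood (1967) lactation curve: daily milk yield at day-in-milk t."""
    return a * t ** b * np.exp(-c * t)

rng = np.random.default_rng(6)
days = np.arange(1, 306, dtype=float)                   # a 305-day lactation
y = wood(days, 18.0, 0.25, 0.004) + rng.normal(scale=0.8, size=days.size)

params, _ = curve_fit(wood, days, y, p0=(15.0, 0.2, 0.003))
a, b, c = params
# the curve peaks where dy/dt = 0, i.e. at t = b/c
print(f"a={a:.2f}, b={b:.3f}, c={c:.4f}, peak day ~ {b / c:.0f}")
```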
Quality by Design approach to spray drying processing of crystalline nanosuspensions.
Kumar, Sumit; Gokhale, Rajeev; Burgess, Diane J
2014-04-10
Quality by Design (QbD) principles were explored to understand the spray drying process for the conversion of liquid nanosuspensions into solid nano-crystalline dry powders, using indomethacin as a model drug. The effects of the critical process variables (inlet temperature, flow rate, and aspiration rate) on the critical quality attributes (CQAs: particle size, moisture content, percent yield, and crystallinity) were investigated employing a full factorial design. A central cubic design was employed to generate the response surface for particle size and percent yield. Multiple linear regression analysis and ANOVA were employed to identify and estimate the effects of critical parameters, establish their relationship with the CQAs, create the design space, and model the spray drying process. Inlet temperature was identified as the only significant factor (p value <0.05) affecting dry powder particle size. Higher inlet temperatures caused drug surface melting and hence aggregation of the dried nano-crystalline powders. Aspiration and flow rates were identified as significant factors affecting yield (p value <0.05). Higher yields were obtained at higher aspiration and lower flow rates. All formulations had less than 3% (w/w) moisture content. Formulations dried at higher inlet temperatures had lower moisture content compared with those dried at lower inlet temperatures.
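A minimal sketch of how main effects are estimated from a two-level full factorial of the three process variables (coded units, synthetic responses; not the study's data):

```python
import itertools
import numpy as np

# 2^3 full factorial in coded units: inlet temperature, flow rate, aspiration rate
design = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)

rng = np.random.default_rng(7)
# synthetic particle size response in which inlet temperature dominates
size = (300 + 40 * design[:, 0] + 3 * design[:, 1] - 2 * design[:, 2]
        + rng.normal(scale=5.0, size=8))

X = np.column_stack([np.ones(8), design])            # intercept + main effects
coef, *_ = np.linalg.lstsq(X, size, rcond=None)
for name, b in zip(["mean", "inlet T", "flow", "aspiration"], coef):
    print(f"{name:>10}: {b:+.1f}")
```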
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jolos, R. V.; Shirikova, N. Yu.; Voronov, V. V.
A schematic microscopic method is developed to calculate the M1 transition probabilities between the mixed-symmetry and the fully symmetric states in γ-soft nuclei. The method is based on the random-phase approximation-interacting boson model (RPA-IBM) boson mapping of the most collective isoscalar boson. All other boson modes with higher excitation energies, including the mixed-symmetry boson, are described in the framework of the RPA. As an example, the M1 transition probabilities are calculated for the 124-134Xe isotopes and compared with the experimental data. The results agree well with the data for the ratio B(M1; 1+_ms → 2+_2)/B(M1; 1+_ms → 0+_1). However, the calculated ratio B(M1; 2+_ms → 2+_1)/B(M1; 1+_ms → 0+_1) shows a significantly weaker dependence on the mass number than the experimental data.
Indication for double parton scatterings in W+ prompt J/ψ production at the LHC
NASA Astrophysics Data System (ADS)
Lansberg, Jean-Philippe; Shao, Hua-Sheng; Yamanaka, Nodoka
2018-06-01
We re-analyse the associated production of a prompt J / ψ and a W boson in pp collisions at the LHC following the results of the ATLAS Collaboration. We perform the first study of the Single-Parton-Scattering (SPS) contributions at the Next-to-Leading Order (NLO) in αs in the Colour-Evaporation Model (CEM), an approach based on the quark-hadron-duality. Our study provides clear indications for Double-Parton-Scattering (DPS) contributions, in particular at low transverse momenta, since our SPS CEM evaluation, which can be viewed as a conservative upper limit of the SPS yields, falls short compared to the ATLAS experimental data by 3.1 standard deviations. We also determine a finite allowed region for σeff, inversely proportional to the size of the DPS yields, corresponding to the otherwise opposed hypotheses, namely our NLO CEM evaluation and the LO direct Colour-Singlet (CS) Model contribution. In both cases, the resulting DPS yields are significantly larger than that initially assumed by ATLAS based on jet-related analyses but is consistent with their observed raw-yield azimuthal distribution and with their prompt J / ψ + J / ψ and Z+ prompt J / ψ data.
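The role of σeff quoted above follows the standard DPS "pocket formula", in which the DPS cross section for two distinguishable hard scatterings scales inversely with σeff (schematic form, as the abstract does not spell it out):

```latex
\sigma^{\mathrm{DPS}}_{W + J/\psi} \;=\; \frac{\sigma_{W}\,\sigma_{J/\psi}}{\sigma_{\mathrm{eff}}}
```

Smaller σeff thus means larger DPS yields, which is why a finite allowed region for σeff maps onto bounds on the DPS contribution.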
Dotta, G; Phalan, B; Silva, T W; Green, R; Balmford, A
2016-06-01
Globally, agriculture is the greatest source of threat to biodiversity, through both ongoing conversion of natural habitat and intensification of existing farmland. Land sparing and land sharing have been suggested as alternative approaches to reconcile this threat with the need for land to produce food. To examine which approach holds most promise for grassland species, we examined how bird population densities changed with farm yield (production per unit area) in the Campos of Brazil and Uruguay. We obtained information on biodiversity and crop yields from 24 sites that differed in agricultural yield. Density-yield functions were fitted for 121 bird species to describe the response of population densities to increasing farm yield, measured in terms of both food energy and profit. We categorized individual species according to how their population changed across the yield gradient as being positively or negatively affected by farming and according to whether the species' total population size was greater under land-sparing, land-sharing, or an intermediate strategy. Irrespective of the yield, most species were negatively affected by farming. Increasing yields reduced densities of approximately 80% of bird species. We estimated land sparing would result in larger populations than other sorts of strategies for 67% to 70% of negatively affected species, given current production levels, including three threatened species. This suggests that increasing yields in some areas while reducing grazing to low levels elsewhere may be the best option for bird conservation in these grasslands. Implementing such an approach would require conservation and production policies to be explicitly linked to support yield increases in farmed areas and concurrently guarantee that larger areas of lightly grazed natural grasslands are set aside for conservation. © 2015 Society for Conservation Biology.
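A toy illustration of turning a fitted density-yield function into the sparing-versus-sharing comparison, holding total production fixed; the power-law density decline and all numbers are hypothetical, not the fitted functions from this study:

```python
import numpy as np

def density(yield_level, d0=10.0, alpha=1.5, y_max=1.0):
    """Hypothetical declining density-yield function (individuals per km^2)."""
    return d0 * (1.0 - yield_level / y_max) ** alpha

area, target = 100.0, 50.0          # km^2 of land, production target (arbitrary units)

# land sharing: a uniform low yield meeting the target on all land
y_share = target / area
pop_share = area * density(y_share)

# land sparing: farm part of the land at maximum yield, spare the rest
farmed = target / 1.0               # area needed at y_max = 1.0
pop_spare = farmed * density(1.0) + (area - farmed) * density(0.0)

print(f"sharing: {pop_share:.0f} individuals, sparing: {pop_spare:.0f}")
```

With a concave decline (alpha > 1) the spared natural grassland outweighs the loss on the intensively farmed fraction, which is the pattern the study reports for most negatively affected species.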
NASA Astrophysics Data System (ADS)
Crawford, I.; Ruske, S.; Topping, D. O.; Gallagher, M. W.
2015-07-01
In this paper we present improved methods for discriminating and quantifying Primary Biological Aerosol Particles (PBAP) by applying hierarchical agglomerative cluster analysis to multi-parameter ultraviolet light-induced fluorescence (UV-LIF) spectrometer data. The methods employed in this study can be applied to data sets in excess of 1×10⁶ points on a desktop computer, allowing each fluorescent particle in a dataset to be explicitly clustered. This reduces the potential for misattribution found in the subsampling and comparative attribution methods used in previous approaches, improving our capacity to discriminate and quantify PBAP meta-classes. We evaluate the performance of several hierarchical agglomerative cluster analysis linkages and data normalisation methods using laboratory samples of known particle types and an ambient dataset. Fluorescent and non-fluorescent polystyrene latex spheres were sampled with a Wideband Integrated Bioaerosol Spectrometer (WIBS-4), where the optical size, asymmetry factor and fluorescence measurements were used as inputs to the analysis package. It was found that the Ward linkage with z-score or range normalisation performed best, correctly attributing 98.0 and 98.1 % of the data points, respectively. The best performing methods were applied to the BEACHON-RoMBAS ambient dataset, where the z-score and range normalisation methods yielded similar results, with each method producing clusters representative of fungal spores and bacterial aerosol, consistent with previous results. The z-score result was compared to clusters generated with previous approaches (WIBS AnalysiS Program, WASP), where we observe that the subsampling and comparative attribution method employed by WASP results in the overestimation of the fungal spore concentration by a factor of 1.5 and the underestimation of the bacterial aerosol concentration by a factor of 5. We suggest that this is likely due to errors arising from misattribution due to poor centroid definition and from failure to assign particles to a cluster, both consequences of the subsampling and comparative attribution method employed by WASP. The methods used here allow the entire fluorescent population of particles to be analysed, yielding an explicit cluster attribution for each particle, improving cluster centroid definition and our capacity to discriminate and quantify PBAP meta-classes compared to previous approaches.
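As a concrete illustration of the clustering pipeline described above, the following is a minimal sketch in Python using SciPy; the feature layout, cluster count and synthetic data are assumptions for the example, not the WIBS-4 format, and SciPy's linkage is quadratic in memory, so reaching the >1×10⁶-particle scale reported would require a memory-efficient implementation such as the fastcluster library.

    # Hedged sketch: Ward-linkage hierarchical clustering with z-score
    # normalisation, applied to per-particle size/asymmetry/fluorescence features.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.stats import zscore

    def cluster_uvlif(X, n_clusters):
        """X: (n_particles, n_features) array of optical size, asymmetry
        factor and fluorescence channels."""
        Xn = zscore(X, axis=0)              # z-score normalisation per feature
        Z = linkage(Xn, method="ward")      # Ward linkage, Euclidean distance
        return fcluster(Z, t=n_clusters, criterion="maxclust")

    # Synthetic stand-in for two particle types
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 0.3, (500, 5)), rng.normal(2.0, 0.3, (500, 5))])
    labels = cluster_uvlif(X, n_clusters=2)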
Nicotine pharmacokinetics and its application to intake from smoking.
Feyerabend, C; Ings, R M; Russel, M A
1985-01-01
Five subjects were given 25 micrograms/kg nicotine intravenously over 1 min, before and after a loading period involving the smoking of six cigarettes. Plasma nicotine concentrations declined in a biphasic manner, the half-lives of the initial and terminal phases averaging 9 min and 133 min, respectively. Terminal half-lives before and after the loading period were essentially the same, suggesting the absence of saturation kinetics at nicotine concentrations that build up during smoking. The plasma clearance of nicotine and the volume of distribution were very high, averaging 915 ml/min and 173 l, respectively. Two approaches were used to calculate the nicotine intake from smoking. The average dose of nicotine absorbed from one cigarette was 1.06 mg, which was 82% of the standard machine-smoked yield of 1.3 mg. To illustrate their potential use in 'nicotine titration' studies, these approaches were used to compare nicotine intake from smoking a high (2.4 mg) and a low (0.6 mg) nicotine cigarette. The dose of nicotine absorbed averaged 1.14 mg and 0.86 mg per cigarette respectively, being 48% and 143% of the machine-smoked yields. PMID:3986082
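The two calculations implied by this abstract, a biexponential fit to the post-dose decline and a clearance-based dose estimate, can be sketched as follows. The concentration points and the per-cigarette AUC below are invented for illustration; only the 915 ml/min clearance and the 1.06 mg dose come from the abstract.

    # Hedged sketch: fit a biexponential decline to plasma nicotine after an
    # IV dose, then estimate the dose absorbed per cigarette as clearance x AUC.
    import numpy as np
    from scipy.optimize import curve_fit

    def biexp(t, A, alpha, B, beta):
        return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

    t = np.array([2, 5, 10, 20, 40, 80, 160.0])    # min after IV bolus
    c = np.array([38, 27, 18, 11, 7.5, 4.8, 2.1])  # ng/ml, synthetic values
    (A, a1, B, b1), _ = curve_fit(biexp, t, c, p0=(30, 0.1, 8, 0.005))
    t_half_initial, t_half_terminal = np.log(2) / a1, np.log(2) / b1

    CL = 915.0          # ml/min plasma clearance, from the abstract
    auc_cig = 1160.0    # ng*min/ml incremental AUC per cigarette (assumed)
    dose_mg = CL * auc_cig * 1e-6   # ~1.06 mg absorbed, the abstract's figure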
Laser Synthesis of Supported Catalysts for Carbon Nanotubes
NASA Technical Reports Server (NTRS)
VanderWal, Randall L.; Ticich, Thomas M.; Sherry, Leif J.; Hall, Lee J.; Schubert, Kathy (Technical Monitor)
2003-01-01
Four methods of laser-assisted catalyst generation for carbon nanotube (CNT) synthesis have been tested. These include pulsed laser transfer (PLT), photolytic deposition (PLD), photothermal deposition (PTD) and laser ablation deposition (LABD). Results from each method are compared based on CNT yield, morphology and structure. Under the conditions tested, PLT was the easiest method to implement, required the least time and also yielded the best patterning. The photolytic and photothermal methods required organometallics, extended processing time and partial vacuums. The latter two requirements also held for the ablation deposition approach. In addition to control of the substrate position, controlled deposition duration was necessary to achieve an active catalyst layer. Although all methods were tested on both metal and quartz substrates, only the quartz substrates proved to be inactive towards the deposited catalyst particles.
Park, Jeong-Hoon; Hong, Ji-Yeon; Jang, Hyun Chul; Oh, Seung Geun; Kim, Sang-Hyoun; Yoon, Jeong-Jun; Kim, Yong Jin
2012-03-01
A facile continuous method for dilute-acid hydrolysis of the representative red seaweed species Gelidium amansii was developed, and its hydrolysate was subsequently evaluated for fermentability. In the hydrolysis step, the hydrolysates obtained from a batch reactor and a continuous reactor were systematically compared based on fermentable sugar yield and inhibitor formation. The continuous hydrolysis process offers several advantages. For example, the low melting point of the agar component in G. amansii facilitates improved raw material fluidity in the continuous reactor. In addition, the hydrolysate obtained from the continuous process delivered high sugar and low inhibitor concentrations, thereby leading to both a high yield and a high final ethanol titer in the fermentation process. Copyright © 2011 Elsevier Ltd. All rights reserved.
Computing the Baker-Campbell-Hausdorff series and the Zassenhaus product
NASA Astrophysics Data System (ADS)
Weyrauch, Michael; Scholz, Daniel
2009-09-01
The Baker-Campbell-Hausdorff (BCH) series and the Zassenhaus product are of fundamental importance for the theory of Lie groups and their applications in physics and physical chemistry. Standard methods for the explicit construction of the BCH and Zassenhaus terms yield polynomial representations, which must be translated into the usually required commutator representation. We prove that a new translation proposed recently yields a correct representation of the BCH and Zassenhaus terms. This representation entails fewer terms than the well-known Dynkin-Specht-Wever representation, which is of relevance for practical applications. Furthermore, various methods for the computation of the BCH and Zassenhaus terms are compared, and a new efficient approach for the calculation of the Zassenhaus terms is proposed. Mathematica implementations for the most efficient algorithms are provided together with comparisons of efficiency.
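For readers who have not met the series, the commutator representation in question begins as follows (a standard expansion quoted for context, not the authors' new representation):

    \log\left(e^{X}e^{Y}\right) = X + Y + \tfrac{1}{2}[X,Y]
        + \tfrac{1}{12}\left([X,[X,Y]] - [Y,[X,Y]]\right)
        - \tfrac{1}{24}[Y,[X,[X,Y]]] + \cdots

The Zassenhaus product runs in the opposite direction, e^{X+Y} = e^{X}e^{Y}e^{C_2}e^{C_3}\cdots, with C_2 = -\tfrac{1}{2}[X,Y] and C_3 = \tfrac{1}{3}[Y,[X,Y]] + \tfrac{1}{6}[X,[X,Y]].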
NASA Astrophysics Data System (ADS)
Yan, B.; Fang, N. F.; Zhang, P. C.; Shi, Z. H.
2013-03-01
Understanding how changes in individual land use types influence the dynamics of streamflow and sediment yield would greatly improve the predictability of the hydrological consequences of land use changes and could thus help stakeholders to make better decisions. Multivariate statistics are commonly used to assess how individual land use types control the dynamics of streamflow or sediment yield. However, one issue with the use of conventional statistical methods to address relationships between land use types and streamflow or sediment yield is multicollinearity. In this study, an integrated approach involving hydrological modelling and partial least squares regression (PLSR) was used to quantify the contributions of changes in individual land use types to changes in streamflow and sediment yield. In a case study, hydrological modelling was conducted using land use maps from four time periods (1978, 1987, 1999, and 2007) for the Upper Du watershed (8973 km²) in China using the Soil and Water Assessment Tool (SWAT). Changes in streamflow and sediment yield between the two simulations conducted using the 1978 and 2007 land use maps were related to land use changes using PLSR, which was used to quantify the effect of this influence at the sub-basin scale. The major land use changes affecting streamflow in the studied catchment areas were changes in the farmland, forest and urban areas between 1978 and 2007; the corresponding regression coefficients were 0.232, -0.147 and 1.256, respectively, and the Variable Influence on Projection (VIP) was greater than 1. The dominant first-order factors affecting the changes in sediment yield in our study were farmland (VIP 1.762, regression coefficient 14.343) and forest (VIP 1.517, regression coefficient -7.746). The PLSR methodology presented in this paper is beneficial and novel, as it partially eliminates the co-dependency of the variables and facilitates a more unbiased view of the contribution of the changes in individual land use types to changes in streamflow and sediment yield. This practicable and simple approach could be applied to a variety of other watersheds for which time-sequenced digital land use maps are available.
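The screening step used above, PLSR with Variable Influence on Projection scores, can be sketched in Python with scikit-learn; the three synthetic predictors stand in for sub-basin land use changes, and the VIP formula is the standard one, with VIP > 1 flagging an influential variable.

    # Hedged sketch: PLS regression plus VIP scores for predictor screening.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    def vip_scores(pls):
        W, T, Q = pls.x_weights_, pls.x_scores_, pls.y_loadings_
        p, a = W.shape
        ssy = np.array([(T[:, i] @ T[:, i]) * Q[0, i] ** 2 for i in range(a)])
        w2 = (W / np.linalg.norm(W, axis=0)) ** 2   # squared normalised weights
        return np.sqrt(p * (w2 @ ssy) / ssy.sum())

    rng = np.random.default_rng(1)
    X = rng.normal(size=(40, 3))   # e.g. changes in farmland, forest, urban area
    y = 0.2 * X[:, 0] - 0.15 * X[:, 1] + 1.25 * X[:, 2] + rng.normal(0, 0.1, 40)
    pls = PLSRegression(n_components=2).fit(X, y)
    print(pls.coef_.ravel(), vip_scores(pls))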
A permutation testing framework to compare groups of brain networks.
Simpson, Sean L; Lyday, Robert G; Hayasaka, Satoru; Marsh, Anthony P; Laurienti, Paul J
2013-01-01
Brain network analyses have moved to the forefront of neuroimaging research over the last decade. However, methods for statistically comparing groups of networks have lagged behind. These comparisons have great appeal for researchers interested in gaining further insight into complex brain function and how it changes across different mental states and disease conditions. Current comparison approaches generally either rely on a summary metric or on mass-univariate nodal or edge-based comparisons that ignore the inherent topological properties of the network, yielding little power and failing to make network level comparisons. Gleaning deeper insights into normal and abnormal changes in complex brain function demands methods that take advantage of the wealth of data present in an entire brain network. Here we propose a permutation testing framework that allows comparing groups of networks while incorporating topological features inherent in each individual network. We validate our approach using simulated data with known group differences. We then apply the method to functional brain networks derived from fMRI data.
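The kernel of any such framework is a label-permutation test, sketched below with the group statistic reduced to a simple mean difference of per-subject network summaries; the paper's contribution is precisely a richer, topology-aware statistic in place of this placeholder.

    # Hedged sketch: permutation test on a per-network summary statistic.
    import numpy as np

    def perm_test(stats_a, stats_b, n_perm=10000, seed=0):
        rng = np.random.default_rng(seed)
        pooled = np.concatenate([stats_a, stats_b])
        observed = stats_a.mean() - stats_b.mean()
        n_a, count = len(stats_a), 0
        for _ in range(n_perm):
            perm = rng.permutation(pooled)           # shuffle group labels
            diff = perm[:n_a].mean() - perm[n_a:].mean()
            count += abs(diff) >= abs(observed)
        return observed, (count + 1) / (n_perm + 1)  # two-sided p-value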
Global evidence of positive impacts of freshwater biodiversity on fishery yields.
Brooks, Emma Grace Elizabeth; Holland, Robert Alan; Darwall, William Robert Thomas; Eigenbrod, Felix; Tittensor, Derek
2016-05-01
An often-invoked benefit of high biodiversity is the provision of ecosystem services. However, evidence for this is largely based on data from small-scale experimental studies of relationships between biodiversity and ecosystem function that may have little relevance to real-world systems. Here, large-scale biodiversity datasets are used to test the relationship between the yield of inland capture fisheries and species richness across 100 countries, covering the inland waters of Africa, Europe and parts of Asia. A multimodel inference approach was used to assess inland fishery yields at the country level against species richness, waterside human population, area, elevation and various climatic variables, to determine the relative importance of species richness to fisheries yields compared with other major large-scale drivers. Secondly, the mean decadal variation in fishery yields at the country level for 1981-2010 was regressed against species richness to assess whether greater diversity reduces the variability in yields over time. Despite a widespread reliance on targeting just a few species of fish, freshwater fish species richness is highly correlated with yield (R² = 0.55) and remains an important and statistically significant predictor of yield once other macroecological drivers are controlled for. Freshwater richness also has a significant negative relationship with the variability of yield over time in Africa (R² = 0.16) but no effect in Europe. The management of inland waters should incorporate the protection of freshwater biodiversity, particularly in countries with the highest-yielding inland fisheries, as these also tend to have high freshwater biodiversity. As these results suggest a link between biodiversity and stable, high-yielding fisheries, an important win-win outcome may be possible for food security and conservation of freshwater ecosystems. However, the findings also highlight the urgent need for more data to fully understand and monitor the contribution of biodiversity to inland fisheries globally.
Patel, Sanjay K S; Lee, Jung-Kul; Kalia, Vipin C
2016-09-01
In this study, an integrative approach to produce biohydrogen (H2) and polyhydroxyalkanoates (PHA) from wastes of biological origin was investigated. A defined set of mixed cultures was used for hydrolysis, and the hydrolysates were used to produce H2. The effluent from the H2 production stage was used for PHA production. Under batch culture, a maximum of 62 l H2/kg of pure potato peels (total solid, TS 2 %, w/v) and 54 l H2/kg of mixed biowastes (MBW1) was recorded. Using effluent from the H2 production stage of the biowaste mixture (MBW1), Bacillus cereus EGU43 could produce 195 mg PHA/l and 15.6 % (w/w). Further, supplementation of the H2 production stage effluents with GM-2 medium (0.1×) or glucose (0.5 %) resulted in significant improvements of up to 11 and 41.7 % in PHA content, respectively. Improvements of 3.9- and 17-fold in PHA yields were recorded compared to the processes with and without integrative H2 production from MBW1, respectively. This integrative approach appears to be a suitable process for improving the yields of H2 and PHA from mixed biowastes.
van Koulil, S; van Lankveld, W; Kraaimaat, F W; van Helmond, T; Vedder, A; van Hoorn, H; Donders, A R T; Wirken, L; Cats, H; van Riel, P L C M; Evers, A W M
2011-12-01
Patients with fibromyalgia have diminished levels of physical fitness, which may lead to functional disability and exacerbate complaints. Multidisciplinary treatment comprising cognitive-behavioural therapy (CBT) and exercise training has been shown to be effective in improving physical fitness. However, due to the high drop-out rates and large variability in patients' functioning, it has been proposed that a tailored treatment approach might yield more promising treatment outcomes. High-risk fibromyalgia patients were randomly assigned to a waiting list control group (WLC) or a treatment condition (TC), with the treatment consisting of 16 twice-weekly sessions of CBT and exercise training tailored to the patient's cognitive-behavioural pattern. Physical fitness was assessed with two physical tests before and 3 months after treatment, and at corresponding intervals in the WLC. Treatment effects were evaluated using linear mixed models. The level of physical fitness improved significantly in the TC compared with the WLC. Attrition rates were low, effect sizes large, and reliable change indices indicated a clinically relevant improvement in the TC. A tailored multidisciplinary treatment approach for fibromyalgia consisting of CBT and exercise training is well tolerated, yields clinically relevant changes, and appears a promising approach to improve patients' physical fitness. ClinicalTrials.gov ID NCT00268606.
On-Demand Associative Cross-Language Information Retrieval
NASA Astrophysics Data System (ADS)
Geraldo, André Pinto; Moreira, Viviane P.; Gonçalves, Marcos A.
This paper proposes the use of algorithms for mining association rules as an approach to Cross-Language Information Retrieval. These algorithms have been widely used to analyse market basket data. The idea is to map the problem of finding associations between sales items onto the problem of finding term translations over a parallel corpus. The proposal was validated by means of experiments using queries in two distinct languages, Portuguese and Finnish, to retrieve documents in English. The results show that the performance of our proposed approach is comparable to the performance of the monolingual baseline and to query translation via machine translation, even though those systems employ more complex Natural Language Processing techniques. The combination of machine translation with our approach yielded the best results, even outperforming the monolingual baseline.
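A toy version of the market-basket mapping may clarify the idea: each aligned sentence pair acts as a transaction, and a rule source-term → target-term is kept when its confidence (co-occurrence count divided by the source-term count) clears a threshold. The corpus and threshold below are illustrative assumptions, not the paper's data.

    # Hedged sketch: mining term-translation "association rules" from a
    # toy Portuguese-English parallel corpus.
    from collections import Counter
    from itertools import product

    pairs = [("gato preto", "black cat"), ("gato branco", "white cat"),
             ("cão preto", "black dog")]

    cooc, src_count = Counter(), Counter()
    for pt, en in pairs:
        for s, t in product(set(pt.split()), set(en.split())):
            cooc[(s, t)] += 1
        for s in set(pt.split()):
            src_count[s] += 1

    min_conf = 0.6
    rules = sorted((s, t, n / src_count[s])
                   for (s, t), n in cooc.items() if n / src_count[s] >= min_conf)
    # e.g. ('gato', 'cat', 1.0) and ('preto', 'black', 1.0) survive; real systems
    # add support thresholds and reverse-confidence checks to prune spurious rules.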
Increasing Crop Diversity Mitigates Weather Variations and Improves Yield Stability
Gaudin, Amélie C. M.; Tolhurst, Tor N.; Ker, Alan P.; Janovicek, Ken; Tortora, Cristina; Martin, Ralph C.; Deen, William
2015-01-01
Cropping sequence diversification provides a systems approach to reduce yield variations and improve resilience to multiple environmental stresses. Yield advantages of more diverse crop rotations and their synergistic effects with reduced tillage are well documented, but few studies have quantified the impact of these management practices on yields and their stability when soil moisture is limiting or in excess. Using yield and weather data obtained from a 31-year long-term rotation and tillage trial in Ontario, we tested whether crop rotation diversity is associated with greater yield stability when abnormal weather conditions occur. We used parametric and non-parametric approaches to quantify the impact of rotation diversity (monocrop, 2 crops, or 3 crops without or with one or two legume cover crops) and tillage (conventional or reduced) on yield probabilities and the benefits of crop diversity under different soil moisture and temperature scenarios. Although the magnitude of rotation benefits varied with crops, weather patterns and tillage, yield stability significantly increased when corn and soybean were integrated into more diverse rotations. Introducing small grains into a short corn-soybean rotation was enough to provide substantial benefits for long-term soybean yields and their stability, while the effects on corn were mostly associated with the temporal niche provided by small grains for underseeded red clover or alfalfa. Crop diversification strategies increased the probability of harnessing favorable growing conditions while decreasing the risk of crop failure. In hot and dry years, diversification of corn-soybean rotations and reduced tillage increased yield by 7% and 22% for corn and soybean, respectively. Given the additional advantages associated with cropping system diversification, such a strategy provides a more comprehensive approach to lowering yield variability and improving the resilience of cropping systems to multiple environmental stresses. This could help to sustain future yield levels in challenging production environments. PMID:25658914
Chhikara, Sudesh; Abdullah, Hesham M; Akbari, Parisa; Schnell, Danny; Dhankher, Om Parkash
2018-05-01
Plant seed oil-based liquid transportation fuels (i.e., biodiesel and green diesel) have tremendous potential as environmentally, economically and technologically feasible alternatives to petroleum-derived fuels. Due to their nutritional and industrial importance, one of the major objectives is to increase the seed yield and oil production of oilseed crops via biotechnological approaches. Camelina sativa, an emerging oilseed crop, has been proposed as an ideal crop for biodiesel and bioproduct applications. A further increase in seed oil yield by increasing the flux of carbon from increased photosynthesis into triacylglycerol (TAG) synthesis will make this crop more profitable. To increase the oil yield, we engineered Camelina by co-expressing the Arabidopsis thaliana (L.) Heynh. diacylglycerol acyltransferase1 (DGAT1) and a yeast cytosolic glycerol-3-phosphate dehydrogenase (GPD1) genes under the control of seed-specific promoters. Plants co-expressing DGAT1 and GPD1 exhibited up to 13% higher seed oil content and up to a 52% increase in seed mass compared to wild-type plants. Further, DGAT1- and GPD1-co-expressing lines showed significantly higher seed and oil yields on a dry weight basis than the wild-type controls or plants expressing DGAT1 or GPD1 alone. The oil harvest index (g oil per g total dry matter) for DGAT1- and GPD1-co-expressing lines was almost twofold higher than for the wild type and the lines expressing DGAT1 or GPD1 alone. Therefore, combining the overexpression of the TAG biosynthetic genes DGAT1 and GPD1 appears to be a positive strategy to achieve a synergistic effect on the flux through the TAG synthesis pathway, and thereby further increase the oil yield. © 2017 The Authors. Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd.
Fortini, Lucas B.; Cropper, Wendell P.; Zarin, Daniel J.
2015-01-01
At the Amazon estuary, the oldest logging frontier in the Amazon, no studies have comprehensively explored the potential long-term population and yield consequences of multiple timber harvests over time. Matrix population modeling is one way to simulate long-term impacts of tree harvests, but this approach has often ignored common impacts of tree harvests including incidental damage, changes in post-harvest demography, shifts in the distribution of merchantable trees, and shifts in stand composition. We designed a matrix-based forest management model that incorporates these harvest-related impacts so resulting simulations reflect forest stand dynamics under repeated timber harvests as well as the realities of local smallholder timber management systems. Using a wide range of values for management criteria (e.g., length of cutting cycle, minimum cut diameter), we projected the long-term population dynamics and yields of hundreds of timber management regimes in the Amazon estuary, where small-scale, unmechanized logging is an important economic activity. These results were then compared to find optimal stand-level and species-specific sustainable timber management (STM) regimes using a set of timber yield and population growth indicators. Prospects for STM in Amazonian tidal floodplain forests are better than for many other tropical forests. However, generally high stock recovery rates between harvests are due to the comparatively high projected mean annualized yields from fast-growing species that effectively counterbalance the projected yield declines from other species. For Amazonian tidal floodplain forests, national management guidelines provide neither the highest yields nor the highest sustained population growth for species under management. Our research shows that management guidelines specific to a region’s ecological settings can be further refined to consider differences in species demographic responses to repeated harvests. In principle, such fine-tuned management guidelines could make management more attractive, thus bridging the currently prevalent gap between tropical timber management practice and regulation. PMID:26322896
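The modeling skeleton described above can be sketched in a few lines: a stage-structured projection matrix is advanced annually, and every cutting cycle a harvest event removes a fraction of merchantable stems and inflicts incidental damage on smaller stages. All rates below are illustrative, not the paper's estimates.

    # Hedged sketch: Lefkovitch matrix projection with periodic harvest.
    import numpy as np

    A = np.array([[0.70, 0.00, 2.00],   # seedlings: survival + adult fecundity
                  [0.15, 0.80, 0.00],   # juveniles
                  [0.00, 0.10, 0.95]])  # merchantable adults
    n = np.array([200.0, 50.0, 20.0])   # initial stems per ha by stage

    cycle, harvest, damage, yields = 15, 0.6, 0.1, []
    for year in range(1, 61):
        n = A @ n                       # one year of growth and recruitment
        if year % cycle == 0:           # harvest every `cycle` years
            cut = harvest * n[2]
            yields.append(cut)
            n[2] -= cut
            n[:2] *= 1 - damage         # incidental damage to smaller stems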
Green bio-oil extraction for oil crops
NASA Astrophysics Data System (ADS)
Zainab, H.; Nurfatirah, N.; Norfaezah, A.; Othman, H.
2016-06-01
The move towards a green bio-oil extraction technique is highlighted in this paper. The commonly practised organic solvent oil extraction technique could be replaced with a modified microwave extraction. Jatropha seeds (Jatropha curcas) were used to extract bio-oil. Clean samples were heated in an oven at 110 °C for 24 hours to remove moisture and ground to obtain a particle size smaller than 500 μm. Extraction was carried out at different extraction times (15 min, 30 min, 45 min, 60 min and 120 min) to determine oil yield. The bio-oil yield obtained from the microwave assisted extraction system at 90 minutes was 36%, while that from soxhlet extraction for 6 hours was 42%. Bio-oil extraction using the microwave assisted extraction (MAE) system could enhance the yield of bio-oil compared to soxhlet extraction. The MAE system is rapid and uses only water as solvent, a non-hazardous, environmentally friendly technique compared to the soxhlet extraction (SE) method using hexane as solvent. Thus, this is a green technique of bio-oil extraction using only water as extractant. Bio-oil extraction from the pyrolysis of empty fruit bunch (EFB), a biomass waste from the oil palm crop, was enhanced using a biocatalyst derived from seashell waste. The oil yield for non-catalytic extraction was 43.8%, while that with the addition of the seashell-based biocatalyst was 44.6%. The pH of the bio-oil increased from 3.5 to 4.3. The viscosity of the bio-oil obtained by catalytic means increased from 20.5 to 37.8 cP. A rapid and environmentally friendly extraction technique is preferable to enhance bio-oil yield. The microwave assisted approach is a green, rapid and environmentally friendly extraction technique for the production of bio-oil from oil-bearing crops.
Speech reconstruction using a deep partially supervised neural network.
McLoughlin, Ian; Li, Jingjie; Song, Yan; Sharifzadeh, Hamid R
2017-08-01
Statistical speech reconstruction for larynx-related dysphonia has achieved good performance using Gaussian mixture models and, more recently, restricted Boltzmann machine arrays; however, deep neural network (DNN)-based systems have been hampered by the limited amount of training data available from individual voice-loss patients. The authors propose a novel DNN structure that allows a partially supervised training approach on spectral features from smaller data sets, yielding very good results compared with the current state-of-the-art.
Large scale shell model study of nuclear spectroscopy in nuclei around 132Sn
NASA Astrophysics Data System (ADS)
Lo Iudice, N.; Bianco, D.; Andreozzi, F.; Porrino, A.; Knapp, F.
2012-10-01
The properties of low-lying 2+ states in chains of nuclei in the proximity of the magic number N=82 are investigated within a new shell model approach exploiting an iterative algorithm alternative to Lanczos. The calculation yields levels and transition strengths in overall good agreement with experiments. The comparative analysis of the E2 and M1 transitions supports, in many cases, the scheme provided by the interacting boson model.
Stereoselective synthesis of unsaturated α-amino acids.
Fanelli, Roberto; Jeanne-Julien, Louis; René, Adeline; Martinez, Jean; Cavelier, Florine
2015-06-01
Stereoselective synthesis of unsaturated α-amino acids was performed by asymmetric alkylation. Two methods were investigated, and their enantiomeric excesses were measured and compared. The first route consisted of an enantioselective approach induced by the Corey-Lygo catalyst under chiral phase-transfer conditions, while the second involved the hydroxypinanone chiral auxiliary, both employing Schiff bases as substrates. In all cases, the use of a prochiral Schiff base gave a higher enantiomeric excess and yield of the final desired amino acid.
A Cell Culture Approach to Optimized Human Corneal Endothelial Cell Function
Bartakova, Alena; Kuzmenko, Olga; Alvarez-Delfin, Karen; Kunzevitzky, Noelia J.; Goldberg, Jeffrey L.
2018-01-01
Purpose: Cell-based therapies to replace corneal endothelium depend on culture methods to optimize human corneal endothelial cell (HCEC) function and minimize endothelial-mesenchymal transition (EnMT). Here we explore the contribution of low-mitogenic media to the stabilization of phenotypes in vitro that mimic those of HCECs in vivo. Methods: HCECs were isolated from cadaveric donor corneas and expanded in vitro, comparing the continuous presence of exogenous growth factors (“proliferative media”) to media without those factors (“stabilizing media”). Identity, based on canonical morphology and expression of the surface marker CD56, and function, based on the formation of tight junction barriers measured by trans-endothelial electrical resistance (TEER) assays, were assessed. Results: Primary HCECs cultured in proliferative media underwent EnMT after three to four passages, becoming increasingly fibroblastic. Stabilizing the cells before each passage by switching them to a media low in mitogenic growth factors and serum preserved canonical morphology and yielded a higher number of cells. HCECs cultured in stabilizing media increased both expression of the identity marker CD56 and tight junction monolayer integrity compared to cells cultured without stabilization. Conclusions: HCECs isolated from donor corneas and expanded in vitro with a low-mitogenic media stabilizing step before each passage demonstrate more canonical structural and functional features and defer EnMT, increasing the number of passages and the total canonical cell yield. This approach may facilitate the development of HCEC-based cell therapies. PMID:29625488
Huberts, W; Donders, W P; Delhaas, T; van de Vosse, F N
2014-12-01
Patient-specific modeling requires model personalization, which can be achieved in an efficient manner by parameter fixing and parameter prioritization. An efficient variance-based method uses generalized polynomial chaos expansion (gPCE), but it has not been applied in the context of model personalization, nor has it ever been compared with standard variance-based methods for models with many parameters. In this work, we apply the gPCE method to a previously reported pulse wave propagation model and compare the conclusions for model personalization with those of a reference analysis performed with Saltelli's efficient Monte Carlo method. We furthermore differentiate two approaches for obtaining the expansion coefficients: one based on spectral projection (gPCE-P) and one based on least squares regression (gPCE-R). It was found that in general the gPCE yields similar conclusions to the reference analysis but at much lower cost, as long as the polynomial metamodel does not contain unnecessary high-order terms. Furthermore, the gPCE-R approach generally yielded better results than gPCE-P. The weak performance of gPCE-P can be attributed to the assessment of the expansion coefficients using the Smolyak algorithm, which might be hampered by the high number of model parameters and/or by possible non-smoothness in the output space. Copyright © 2014 John Wiley & Sons, Ltd.
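For context, the gPCE metamodel and the variance-based indices it delivers can be written compactly (standard definitions; the two routes to the coefficients are least squares regression for gPCE-R and spectral projection for gPCE-P):

    Y \approx \sum_{k=0}^{P} c_k\,\Psi_k(\boldsymbol{\xi}), \qquad
    \operatorname{Var}(Y) \approx \sum_{k=1}^{P} c_k^{2}\,\langle \Psi_k^{2}\rangle, \qquad
    S_i \approx \frac{\sum_{k\in\mathcal{A}_i} c_k^{2}\,\langle \Psi_k^{2}\rangle}{\operatorname{Var}(Y)}

where \mathcal{A}_i indexes basis polynomials depending only on parameter ξ_i; once the coefficients are fitted, the Sobol indices used for parameter fixing and prioritization follow at essentially no extra cost.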
Meyer, Andreas L S; Wiens, John J
2018-01-01
Estimates of diversification rates are invaluable for many macroevolutionary studies. Recently, an approach called BAMM (Bayesian Analysis of Macro-evolutionary Mixtures) has become widely used for estimating diversification rates and rate shifts. At the same time, several articles have concluded that estimates of net diversification rates from the method-of-moments (MS) estimators are inaccurate. Yet, no studies have compared the ability of these two methods to accurately estimate clade diversification rates. Here, we use simulations to compare their performance. We found that BAMM yielded relatively weak relationships between true and estimated diversification rates. This occurred because BAMM underestimated the number of rate shifts across each tree and assigned high rates to small clades with low rates. Errors in both speciation and extinction rates contributed to these errors, showing that using BAMM to estimate only speciation rates is also problematic. In contrast, the MS estimators (particularly using stem group ages) yielded stronger relationships between true and estimated diversification rates, by roughly twofold. Furthermore, the MS approach remained relatively accurate when diversification rates were heterogeneous within clades, despite the widespread assumption that it requires constant rates within clades. Overall, we caution that BAMM may be problematic for estimating diversification rates and rate shifts. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
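For reference, the stem-group method-of-moments estimator evaluated above is Magallón and Sanderson's formula, with n the extant richness of the clade, t its stem age and ε an assumed relative extinction fraction:

    \hat{r}_{\mathrm{stem}} = \frac{1}{t}\,\ln\bigl[n(1-\varepsilon) + \varepsilon\bigr]

which reduces to \hat{r} = \ln(n)/t when ε = 0. Its appeal is that each clade's rate follows from its age and richness alone, with no tree-wide rate-shift model to fit.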
Sacks, Jason D; Ito, Kazuhiko; Wilson, William E; Neas, Lucas M
2012-10-01
With the advent of multicity studies, uniform statistical approaches have been developed to examine air pollution-mortality associations across cities. To assess the sensitivity of the air pollution-mortality association to different model specifications in a single- and multipollutant context, the authors applied various regression models developed in previous multicity time-series studies of air pollution and mortality to data from Philadelphia, Pennsylvania (May 1992-September 1995). Single-pollutant analyses used daily cardiovascular mortality, fine particulate matter (particles with an aerodynamic diameter ≤2.5 µm; PM2.5), speciated PM2.5, and gaseous pollutant data, while multipollutant analyses used source factors identified through principal component analysis. In single-pollutant analyses, risk estimates were relatively consistent across models for most PM2.5 components and gaseous pollutants. However, risk estimates were inconsistent for ozone in all-year and warm-season analyses. Principal component analysis yielded factors with species associated with traffic, crustal material, residual oil, and coal. Risk estimates for these factors exhibited less sensitivity to alternative regression models than those from single-pollutant models. Factors associated with traffic and crustal material showed consistently positive associations in the warm season, while the coal combustion factor showed consistently positive associations in the cold season. Overall, mortality risk estimates examined using a source-oriented approach were more stable and precise than those from single-pollutant analyses.
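The source-oriented workflow has two steps, factor extraction and time-series regression, sketched below; the synthetic data, the four-component choice and the bare-bones confounder set (temperature plus a linear trend, in place of the smooth functions used in the multicity models) are all assumptions of the sketch.

    # Hedged sketch: PCA source factors followed by Poisson time-series regression.
    import numpy as np
    import statsmodels.api as sm
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(2)
    days, n_species = 1200, 10
    species = rng.lognormal(0.0, 0.5, (days, n_species))  # daily concentrations
    deaths = rng.poisson(30, days)                        # daily CVD deaths
    temp = 15 + 10 * np.sin(2 * np.pi * np.arange(days) / 365.25)

    scores = PCA(n_components=4).fit_transform(
        StandardScaler().fit_transform(species))          # source factor scores
    X = sm.add_constant(np.column_stack([scores, temp, np.arange(days) / days]))
    fit = sm.GLM(deaths, X, family=sm.families.Poisson()).fit()
    rr = np.exp(fit.params[1:5])   # relative risk per unit factor score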
Fusion of multi-source remote sensing data for agriculture monitoring tasks
NASA Astrophysics Data System (ADS)
Skakun, S.; Franch, B.; Vermote, E.; Roger, J. C.; Becker Reshef, I.; Justice, C. O.; Masek, J. G.; Murphy, E.
2016-12-01
Remote sensing data is an essential source of information for monitoring and quantifying crop state at global and regional scales. Crop mapping, state assessment, area estimation and yield forecasting are the main tasks being addressed within GEO-GLAM. The efficiency of agriculture monitoring can be improved when heterogeneous multi-source remote sensing datasets are integrated. Here, we present several case studies of utilizing MODIS, Landsat-8 and Sentinel-2 data along with meteorological data (growing degree days, GDD) for winter wheat yield forecasting, mapping and area estimation. Archived coarse spatial resolution data, such as MODIS, VIIRS and AVHRR, provide daily global observations that, coupled with statistical data on crop yield, enable the development of empirical models for timely yield forecasting at the national level. With the availability of high-temporal and high-spatial-resolution Landsat-8 and Sentinel-2A imagery, coarse resolution empirical yield models can be downscaled to provide yield estimates at regional and field scales. In particular, we present a case study of downscaling the MODIS CMG based generalized winter wheat yield forecasting model to high spatial resolution data sets, namely the harmonized Landsat-8 - Sentinel-2A surface reflectance product (HLS). Since the yield model requires corresponding in-season crop masks, we propose an automatic approach to extract winter crop maps from MODIS NDVI and MERRA-2 derived GDD using a Gaussian mixture model (GMM). Validation for the state of Kansas (US) and Ukraine showed that the approach can yield accuracies > 90% without using reference (ground truth) data sets. Another application of yearly derived winter crop maps is their use for stratification purposes within area frame sampling for crop area estimation. In particular, one can simulate the dependence of the error (coefficient of variation) on the number of samples and strata size. This approach was used for estimating the area of winter crops in Ukraine for 2013-2016. The GMM-GDD approach is further extended to HLS data to provide automatic winter crop mapping at 30 m resolution for the crop yield model and area estimation. In cases of persistent cloudiness, the addition of Sentinel-1A synthetic aperture radar (SAR) images is explored for automatic winter crop mapping.
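A minimal sketch of the GMM classification step is given below, assuming per-pixel features of peak NDVI and accumulated GDD at the time of the peak; the feature construction and the rule for naming the winter-crop component are simplifications of the automatic approach described.

    # Hedged sketch: unsupervised winter-crop mapping with a 2-component GMM.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(3)
    winter = np.column_stack([rng.normal(0.75, 0.05, 400),   # high early peak NDVI
                              rng.normal(350, 40, 400)])     # low GDD at peak
    other = np.column_stack([rng.normal(0.45, 0.08, 600),
                             rng.normal(900, 120, 600)])
    X = np.vstack([winter, other])    # columns: [peak NDVI, GDD at NDVI peak]

    gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
    winter_component = np.argmin(gmm.means_[:, 1])  # the low-GDD, early-peaking mode
    mask = gmm.predict(X) == winter_component       # per-pixel winter-crop mask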
System matrix computation vs storage on GPU: A comparative study in cone beam CT.
Matenine, Dmitri; Côté, Geoffroi; Mascolo-Fortin, Julia; Goussard, Yves; Després, Philippe
2018-02-01
Iterative reconstruction algorithms in computed tomography (CT) require a fast method for computing the intersection distances between the trajectories of photons and the object, also called ray tracing or system matrix computation. This work, focused on the thin-ray model, is aimed at comparing different system matrix handling strategies using graphical processing units (GPUs). In this work, the system matrix is modeled by thin rays intersecting a regular grid of box-shaped voxels, known to be an accurate representation of the forward projection operator in CT. However, an uncompressed system matrix exceeds the random access memory (RAM) capacities of typical computers by one order of magnitude or more. Considering the RAM limitations of GPU hardware, several system matrix handling methods were compared: full storage of a compressed system matrix, on-the-fly computation of its coefficients, and partial storage of the system matrix with partial on-the-fly computation. These methods were tested on geometries mimicking a cone beam CT (CBCT) acquisition of a human head. Execution times of three routines of interest were compared: forward projection, backprojection, and the ordered-subsets convex (OSC) iteration. A fully stored system matrix yielded the shortest backprojection and OSC iteration times, with a 1.52× acceleration for OSC when compared to the on-the-fly approach. Nevertheless, the maximum problem size was bound by the available GPU RAM and geometrical symmetries. On-the-fly coefficient computation did not require symmetries and was shown to be the fastest for forward projection. It also offered reasonable execution times of about 176.4 ms per view per OSC iteration for a detector of 512 × 448 pixels and a volume of 384³ voxels, using commodity GPU hardware. Partial system matrix storage showed performance similar to the on-the-fly approach, while still relying on symmetries. Overall, partial system matrix storage was shown to yield the lowest relative performance. On-the-fly ray tracing was shown to be the most flexible method, yielding reasonable execution times. A fully stored system matrix allowed for the lowest backprojection and OSC iteration times and may be of interest for certain performance-oriented applications. © 2017 American Association of Physicists in Medicine.
Influence of transport energization on the growth yield of Escherichia coli.
Muir, M; Williams, L; Ferenci, T
1985-09-01
The growth yields of Escherichia coli on glucose, lactose, galactose, maltose, maltotriose, and maltohexaose were estimated under anaerobic conditions in the absence of electron acceptors. The yields on these substrates exhibited significant differences when measured in carbon-limited chemostats at similar growth rates and compared in terms of grams (dry weight) of cells produced per mole of hexose utilized. Maltohexaose was the most efficiently utilized substrate, and galactose was the least efficiently utilized under these conditions. All these sugars were known to be metabolized to glucose 6-phosphate and produced the same pattern of fermentation products. The differences in growth yields were ascribed to differences in energy costs for transport and phosphorylation of these sugars. A formalized treatment of these factors in determining growth yields was established and used to obtain values for the cost of transport and hence the energy-coupling stoichiometries for the transport of substrates via proton symport and binding-protein-dependent mechanisms in vivo. By this approach, the proton-lactose stoichiometry was found to be 1.1 to 1.8 H+ per lactose, equivalent to approximately 0.5 ATP used per lactose transported. The cost of transporting maltose via a binding-protein-dependent mechanism was considerably higher, being over 1 to 1.2 ATP per maltose or maltodextrin transported. The formalized treatment also permitted estimation of the net ATP yield from the metabolism of these sugars; it was calculated that the growth yield data were consistent with the production of 2.8 to 3.2 ATP in the metabolism of glucose 6-phosphate to fermentation products.
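The costing logic of the formalized treatment can be illustrated with round numbers; all values below are chosen only to show the arithmetic (the abstract's own estimates are 2.8 to 3.2 ATP per hexose fermented and roughly 0.5 ATP per lactose transported).

    # Hedged arithmetic sketch: convert a growth-yield gap between two sugars
    # into an ATP cost of transport via a Y_ATP factor.
    Y_ATP = 10.5      # g dry weight per mol ATP, a typical anaerobic value
    y_cheap = 31.5    # g dw per mol hexose, cheaply transported sugar (assumed)
    y_costly = 26.2   # g dw per mol hexose, proton-symport sugar (assumed)

    transport_cost_atp = (y_cheap - y_costly) / Y_ATP   # ~0.5 ATP per hexose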
Nielsen, E E; Morgan, J A T; Maher, S L; Edson, J; Gauthier, M; Pepperell, J; Holmes, B J; Bennett, M B; Ovenden, J R
2017-05-01
Archived specimens are highly valuable sources of DNA for retrospective genetic/genomic analysis. However, often limited effort has been made to evaluate and optimize extraction methods, which may be crucial for downstream applications. Here, we assessed and optimized the usefulness of abundant archived skeletal material from sharks as a source of DNA for temporal genomic studies. Six different methods for DNA extraction, encompassing two different commercial kits and three different protocols, were applied to material, so-called bio-swarf, from contemporary and archived jaws and vertebrae of tiger sharks (Galeocerdo cuvier). Protocols were compared for DNA yield and quality using a qPCR approach. For jaw swarf, all methods provided relatively high DNA yield and quality, while large differences in yield between protocols were observed for vertebrae. Similar results were obtained from samples of white shark (Carcharodon carcharias). Application of the optimized methods to 38 museum and private angler trophy specimens dating back to 1912 yielded sufficient DNA for downstream genomic analysis for 68% of the samples. No clear relationships between age of samples, DNA quality and quantity were observed, likely reflecting different preparation and storage methods for the trophies. Trial sequencing of DNA capture genomic libraries using 20 000 baits revealed that a significant proportion of captured sequences were derived from tiger sharks. This study demonstrates that archived shark jaws and vertebrae are potential high-yield sources of DNA for genomic-scale analysis. It also highlights that even for similar tissue types, a careful evaluation of extraction protocols can vastly improve DNA yield. © 2016 John Wiley & Sons Ltd.
Zhang, C; Chen, X-H; Zhang, X; Gao, L; Gao, L; Kong, P-Y; Peng, X-G; Sun, A-H; Gong, Y; Zeng, D-F; Wang, Q-Y
2010-06-01
Unmanipulated haploidentical/mismatched related transplantation with combined granulocyte colony-stimulating factor-mobilised peripheral blood stem cells (G-PBSCs) and granulocyte colony-stimulating factor-mobilised bone marrow (G-BM) has been developed as an alternative transplantation strategy for patients with haematologic malignancies. However, little information is available about the factors predicting the outcome of peripheral blood stem cell (PBSC) collection and bone marrow (BM) harvest in this transplantation. The effects of donor characteristics and procedural factors on CD34+ cell yield were investigated. A total of 104 related healthy donors received granulocyte colony-stimulating factor (G-CSF) followed by PBSC collection and BM harvest. Male donors had significantly higher yields than female donors. In multiple regression analysis for peripheral blood collection, age and flow rate were negatively correlated with cell yield, whereas body mass index, pre-apheresis white blood cell (WBC) and circulating immature cell (CIC) counts were positively correlated with cell yield. For BM harvest, age was negatively correlated with cell yield, whereas pre-BM collection CIC counts were positively correlated with cell yield. All donors achieved the final product of ≥6 × 10⁶ CD34+ cells kg⁻¹ recipient body weight. This transplantation strategy has been shown to be a feasible approach with acceptable outcomes in stem cell collection for patients who received HLA-haploidentical/mismatched transplantation with combined G-PBSCs and G-BM. In donors with multiple high-risk characteristics for a poor apheresis CD34+ cell yield, BM was an alternative source.
Statistical Evaluations of Variations in Dairy Cows’ Milk Yields as a Precursor of Earthquakes
Yamauchi, Hiroyuki; Hayakawa, Masashi; Asano, Tomokazu; Ohtani, Nobuyo; Ohta, Mitsuaki
2017-01-01
Simple Summary: There are many reports of abnormal changes occurring in various natural systems prior to earthquakes. Unusual animal behavior is one of these abnormalities; however, there are few objective indicators and, to date, reliability has remained uncertain. We found that milk yields of dairy cows decreased prior to an earthquake in our previous case study. In this study, we examined the reliability of decreases in milk yields as a precursor for earthquakes using long-term observation data. In the results, milk yields decreased approximately three weeks before earthquakes. We conclude that dairy cow milk yields have applicability as an objectively observable unusual animal behavior prior to earthquakes, and that dairy cows respond to some physical or chemical precursors of earthquakes. Abstract: Previous studies have provided quantitative data regarding unusual animal behavior prior to earthquakes; however, few studies include long-term observational data. Our previous study revealed that the milk yields of dairy cows decreased prior to an extremely large earthquake. To clarify whether milk yields decrease prior to earthquakes, we examined the relationship between earthquakes of various magnitudes and daily milk yields. The observation period was one year. Cross-correlation analyses revealed a significant negative correlation between earthquake occurrence and milk yields approximately three weeks beforehand. Approximately a week and a half beforehand, a positive correlation was revealed, and the correlation gradually receded to zero as the day of the earthquake approached. Future studies that use data from a longer observation period are needed because this study only considered ten earthquakes and therefore does not have strong statistical power. Additionally, we compared the milk yields with subionospheric very low frequency/low frequency (VLF/LF) propagation data indicating ionospheric perturbations. The results showed that anomalies in the VLF/LF propagation data emerged prior to all of the earthquakes following decreases in milk yields; the milk yields decreased earlier than the propagation anomalies. We discuss how ultralow frequency magnetic fields are one stimulus that could reduce milk yields. This study suggests that dairy cow milk yields decrease prior to earthquakes, and that they might respond to stimuli emerging earlier than ionospheric perturbations. PMID:28282889
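The lagged cross-correlation analysis at the core of the study can be sketched as below; the series are synthetic, and in the real analysis a negative correlation at a lead of roughly three weeks corresponds to the reported precursory decrease.

    # Hedged sketch: correlate milk-yield anomalies with later earthquake days.
    import numpy as np

    rng = np.random.default_rng(4)
    days = 365
    milk = rng.normal(0, 1, days)    # detrended daily milk-yield anomaly
    quake = np.zeros(days)
    quake[rng.choice(days, 10, replace=False)] = 1   # earthquake occurrence

    def lagged_corr(x, y, lag):
        """Correlation of x(t - lag) with y(t), i.e. x leading y by `lag` days."""
        return np.corrcoef(x[:days - lag], y[lag:])[0, 1]

    corr = [lagged_corr(milk, quake, k) for k in range(1, 31)]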
Leibman, Mark; Shryock, Jereme J; Clements, Michael J; Hall, Michael A; Loida, Paul J; McClerren, Amanda L; McKiness, Zoe P; Phillips, Jonathan R; Rice, Elena A; Stark, Steven B
2014-09-01
Grain yield from maize hybrids continues to improve through advances in breeding and biotechnology. Despite genetic improvements to hybrid maize, grain yield from distinct maize hybrids is expected to vary across growing locations due to numerous environmental factors. In this study, we examine across-location variation in grain yield among maize hybrids in three case studies. The three case studies examine hybrid improvement through breeding, introduction of an insect protection trait or introduction of a transcription factor trait associated with increased yield. In all cases, grain yield from each hybrid population had a Gaussian distribution. Across-location distributions of grain yield from each hybrid partially overlapped. The hybrid with a higher mean grain yield typically outperformed its comparator at most, but not all, of the growing locations (a 'win rate'). These results suggest that a broad set of environmental factors similarly impacts grain yields from both conventional- and biotechnology-derived maize hybrids and that grain yields among two or more hybrids should be compared with consideration given to both mean yield performance and the frequency of locations at which each hybrid 'wins' against its comparators. From an economic standpoint, growers recognize the value of genetically improved maize hybrids that outperform comparators in the majority of locations. Grower adoption of improved maize hybrids drives increases in average U.S. maize grain yields and contributes significant value to the economy. © 2014 Society for Experimental Biology, Association of Applied Biologists and John Wiley & Sons Ltd.
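The two comparison quantities recommended above, mean advantage and location-level win rate, are straightforward to compute from paired by-location yields; the simulated location effects below are an illustrative assumption.

    # Hedged sketch: mean yield gain and "win rate" across shared locations.
    import numpy as np

    rng = np.random.default_rng(5)
    loc = rng.normal(0, 1.5, 200)                    # shared environment effect
    hybrid_a = 11.0 + loc + rng.normal(0, 0.8, 200)  # t/ha, improved hybrid
    hybrid_b = 10.6 + loc + rng.normal(0, 0.8, 200)  # t/ha, comparator

    mean_gain = (hybrid_a - hybrid_b).mean()         # average advantage
    win_rate = (hybrid_a > hybrid_b).mean()          # fraction of locations won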
Narasimhan, S; Chiel, H J; Bhunia, S
2011-04-01
Implantable microsystems for monitoring or manipulating brain activity typically require on-chip real-time processing of multichannel neural data using ultra low-power, miniaturized electronics. In this paper, we propose an integrated-circuit/architecture-level hardware design framework for neural signal processing that exploits the nature of the signal-processing algorithm. First, we consider different power reduction techniques and compare the energy efficiency between the ultra-low frequency subthreshold and conventional superthreshold design. We show that the superthreshold design operating at a much higher frequency can achieve comparable energy dissipation by taking advantage of extensive power gating. It also provides significantly higher robustness of operation and yield under large process variations. Next, we propose an architecture level preferential design approach for further energy reduction by isolating the critical computation blocks (with respect to the quality of the output signal) and assigning them higher delay margins compared to the noncritical ones. Possible delay failures under parameter variations are confined to the noncritical components, allowing graceful degradation in quality under voltage scaling. Simulation results using prerecorded neural data from the sea-slug (Aplysia californica) show that the application of the proposed design approach can lead to significant improvement in total energy, without compromising the output signal quality under process variations, compared to conventional design approaches.
Michel, Sebastian; Ametz, Christian; Gungor, Huseyin; Akgöl, Batuhan; Epure, Doru; Grausgruber, Heinrich; Löschenberger, Franziska; Buerstmayr, Hermann
2017-02-01
Early-generation genomic selection is superior to conventional phenotypic selection in line breeding and can be strongly improved by including additional information from preliminary yield trials. The selection of lines that enter resource-demanding multi-environment trials is a crucial decision in every line breeding program, as a large amount of resources is allocated to thoroughly testing these potential varietal candidates. We compared conventional phenotypic selection with various genomic selection approaches across multiple years, and assessed the merit of integrating phenotypic information from preliminary yield trials into the genomic selection framework. The prediction accuracy using only phenotypic data was rather low (r = 0.21) for grain yield but could be improved by modeling genetic relationships in unreplicated preliminary yield trials (r = 0.33). Genomic selection models were nevertheless found to be superior to conventional phenotypic selection for predicting grain yield performance of lines across years (r = 0.39). We subsequently simplified the problem of predicting untested lines in untested years to predicting tested lines in untested years by combining breeding values from preliminary yield trials and predictions from genomic selection models via a heritability index. This genomic assisted selection led to a 20% increase in prediction accuracy, which could be further enhanced by an appropriate marker selection for both grain yield (r = 0.48) and protein content (r = 0.63). The easy-to-implement and robust genomic assisted selection thus gave a higher prediction accuracy than either conventional phenotypic or genomic selection alone. The proposed method takes the complex inheritance of both low- and high-heritability traits into account and appears capable of supporting breeders in their selection decisions to develop enhanced varieties more efficiently.
Gu, Junfei; Yin, Xinyou; Zhang, Chengwei; Wang, Huaqi; Struik, Paul C.
2014-01-01
Background and Aims: Genetic markers can be used in combination with ecophysiological crop models to predict the performance of genotypes. Crop models can estimate the contribution of individual markers to crop performance in given environments. The objectives of this study were to explore the use of crop models to design markers and virtual ideotypes for improving yields of rice (Oryza sativa) under drought stress. Methods: Using the model GECROS, crop yield was dissected into seven easily measured parameters. Loci for these parameters were identified for a rice population of 94 introgression lines (ILs) derived from two parents differing in drought tolerance. Marker-based values of ILs for each of these parameters were estimated from additive allele effects of the loci, and were fed to the model in order to simulate yields of the ILs grown under well-watered and drought conditions and to design virtual ideotypes for those conditions. Key Results: To account for genotypic yield differences, it was necessary to parameterize the model for differences in an additional trait, ‘total crop nitrogen uptake’ (Nmax), among the ILs. Genetic variation in Nmax had the most significant effect on yield; five other parameters also significantly influenced yield, but seed weight and leaf photosynthesis did not. Using the marker-based parameter values, GECROS also simulated yield variation among 251 recombinant inbred lines of the same parents. The model-based dissection approach detected more markers than the analysis using only yield per se. Model-based sensitivity analysis ranked all markers for their importance in determining yield differences among the ILs. Virtual ideotypes based on markers identified by modelling had 10–36 % more yield than those based on markers for yield per se. Conclusions: This study outlines a genotype-to-phenotype approach that exploits the potential value of marker-based crop modelling in developing new plant types with high yields. The approach can provide more markers for selection programmes for specific environments whilst also allowing for prioritization. Crop modelling is thus a powerful tool for marker design for improved rice yields and for ideotyping under contrasting conditions. PMID:24984712
Santa, Cátia; Anjo, Sandra I; Manadas, Bruno
2016-07-01
Proteomic approaches are extremely valuable in many fields of research, where mass spectrometry methods have gained increasing interest, especially because of the ability to perform quantitative analysis. Nonetheless, sample preparation prior to mass spectrometry analysis is of the utmost importance. In this work, two protein precipitation approaches, widely used for cleaning and concentrating protein samples, were tested and compared on very dilute samples solubilized in a strong buffer (containing SDS). The amount of protein recovered after acetone and TCA/acetone precipitation was assessed, and the protein identification and relative quantification yields by SWATH-MS were compared with the results from the same sample without precipitation. From this study, it was possible to conclude that, for dilute samples in denaturing buffers, cold acetone precipitation is more favourable than TCA/acetone in terms of reproducibility of protein recovery and the number of identified and quantified proteins. Furthermore, the reproducibility of the relative quantification of proteins is even higher in samples precipitated with acetone than in the original sample. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lester, Brian; Scherzinger, William
2017-01-19
Here, a new method for the solution of the non-linear equations forming the core of constitutive model integration is proposed. Specifically, the trust-region method that has been developed in the numerical optimization community is successfully modified for use in implicit integration of elastic-plastic models. Although attention here is restricted to these rate-independent formulations, the proposed approach holds substantial promise for adoption with models incorporating complex physics, multiple inelastic mechanisms, and/or multiphysics. As a first step, the non-quadratic Hosford yield surface is used as a representative case to investigate computationally challenging constitutive models. The theory and implementation are presented, discussed, and compared to other common integration schemes. Multiple boundary value problems are studied and used to verify the proposed algorithm and demonstrate the capabilities of this approach over more common methodologies. Robustness and speed are then investigated and compared to existing algorithms. Through these efforts, it is shown that the utilization of a trust-region approach leads to superior performance versus a traditional closest-point projection Newton-Raphson method and comparable speed and robustness to a line search augmented scheme.
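As a rough illustration of the idea (not the authors' algorithm), the sketch below projects a plane-stress trial state back to a non-quadratic Hosford yield surface by handing the closest-point residual to SciPy's trust-region least-squares solver. The elastic stiffness is folded into the plastic multiplier for brevity, and all parameter values are placeholders.

```python
import numpy as np
from scipy.optimize import least_squares

A, SIGMA_Y = 8.0, 250.0                          # Hosford exponent, yield stress

def hosford(s1, s2):
    """Plane-stress Hosford equivalent stress in principal axes."""
    return (0.5 * (abs(s1 - s2) ** A + abs(s1) ** A + abs(s2) ** A)) ** (1.0 / A)

def residual(x, trial):
    s1, s2, dlam = x
    eps = 1e-6                                   # numerical flow direction
    n1 = (hosford(s1 + eps, s2) - hosford(s1 - eps, s2)) / (2 * eps)
    n2 = (hosford(s1, s2 + eps) - hosford(s1, s2 - eps)) / (2 * eps)
    return [s1 - trial[0] + dlam * n1,           # stress update along the flow
            s2 - trial[1] + dlam * n2,
            hosford(s1, s2) - SIGMA_Y]           # consistency condition

trial = np.array([400.0, 100.0])                 # trial stress outside the surface
sol = least_squares(residual, x0=[400.0, 100.0, 0.0], args=(trial,), method="trf")
print(sol.x, hosford(sol.x[0], sol.x[1]))        # returned to the yield surface
```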
Skakun, Sergii; Vermote, Eric; Roger, Jean-Claude; Franch, Belen
2018-01-01
Timely and accurate information on crop yield is critical to many applications within agriculture monitoring. Thanks to its coverage and temporal resolution, coarse spatial resolution satellite imagery has always been a source of valuable information for yield forecasting and assessment at national and regional scales. With the availability of free images acquired by the Landsat-8 and Sentinel-2 remote sensing satellites, it becomes possible to obtain an image every 3–5 days and, therefore, to develop next generation agriculture products at higher spatial resolution (30 m). This paper explores the combined use of Landsat-8 and Sentinel-2A for winter crop mapping and winter wheat assessment at regional scale. For the former, we adapt a previously developed approach for the Moderate Resolution Imaging Spectroradiometer (MODIS) at 250 m resolution that allows automatic mapping of winter crops, taking into account knowledge of the crop calendar and without ground truth data. For the latter, we use a generalized winter wheat yield model that is based on NDVI-peak estimation and MODIS data, and further downscaled to be applicable at 30 m resolution. We show that integration of Landsat-8 and Sentinel-2A has a positive impact both for winter crop mapping and winter wheat yield assessment. In particular, the error of winter wheat yield estimates can be reduced by up to 1.8 times compared to single-satellite usage. PMID:29888751
Xu, Xiangming; Passey, Thomas; Wei, Feng; Saville, Robert; Harrison, Richard J.
2015-01-01
A phenomenon of yield decline due to weak plant growth in strawberry was recently observed in non-chemo-fumigated soils, which was not associated with the soil fungal pathogen Verticillium dahliae, the main target of fumigation. Amplicon-based metagenomics was used to profile soil microbiota in order to identify microbial organisms that may have caused the yield decline. A total of 36 soil samples were obtained in 2013 and 2014 from four sites for metagenomic studies; two of the four sites had a yield-decline problem, the other two did not. More than 2000 fungal or bacterial operational taxonomy units (OTUs) were found in these samples. Relative abundance of individual OTUs was statistically compared for differences between samples from sites with or without yield decline. A total of 721 individual comparisons were statistically significant – involving 366 unique bacterial and 44 unique fungal OTUs. Based on further selection criteria, we focused on 34 bacterial and 17 fungal OTUs and found that yield decline resulted probably from one or more of the following four factors: (1) low abundance of Bacillus and Pseudomonas populations, which are well known for their ability of suppressing pathogen development and/or promoting plant growth; (2) lack of the nematophagous fungus (Paecilomyces species); (3) a high level of two non-specific fungal root rot pathogens; and (4) wet soil conditions. This study demonstrated the usefulness of an amplicon-based metagenomics approach to profile soil microbiota and to detect differential abundance in microbes. PMID:26504572
Relatedness-based Multi-Entity Summarization
Gunaratna, Kalpa; Yazdavar, Amir Hossein; Thirunarayan, Krishnaprasad; Sheth, Amit; Cheng, Gong
2017-01-01
Representing world knowledge in a machine processable format is important as entities and their descriptions have fueled tremendous growth in knowledge-rich information processing platforms, services, and systems. Prominent applications of knowledge graphs include search engines (e.g., Google Search and Microsoft Bing), email clients (e.g., Gmail), and intelligent personal assistants (e.g., Google Now, Amazon Echo, and Apple’s Siri). In this paper, we present an approach that can summarize facts about a collection of entities by analyzing their relatedness in preference to summarizing each entity in isolation. Specifically, we generate informative entity summaries by selecting: (i) inter-entity facts that are similar and (ii) intra-entity facts that are important and diverse. We employ a constrained knapsack problem solving approach to efficiently compute entity summaries. We perform both qualitative and quantitative experiments and demonstrate that our approach yields promising results compared to two other stand-alone state-of-the-art entity summarization approaches. PMID:29051696
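Since the summarizer above frames fact selection as a constrained knapsack, a toy version is easy to state. The sketch below collapses the paper's importance/diversity/relatedness scoring into a single per-fact value and solves the resulting 0/1 knapsack by dynamic programming; all facts and scores are invented for illustration.

```python
# A minimal sketch of selecting entity facts under a summary-size budget
# with a 0/1 knapsack; per-fact values stand in for the paper's combined
# importance, diversity, and inter-entity relatedness scores.
def knapsack_summary(values, costs, budget):
    n = len(values)
    best = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]
            if costs[i - 1] <= b:
                cand = best[i - 1][b - costs[i - 1]] + values[i - 1]
                if cand > best[i][b]:
                    best[i][b] = cand
    # backtrack to recover the chosen fact indices
    chosen, b = [], budget
    for i in range(n, 0, -1):
        if best[i][b] != best[i - 1][b]:
            chosen.append(i - 1)
            b -= costs[i - 1]
    return sorted(chosen)

facts = ["bornIn: Dayton", "field: AI", "knownFor: knowledge graphs"]
print(knapsack_summary(values=[0.9, 0.7, 0.8], costs=[1, 1, 1], budget=2))
```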
Benefits of a one health approach: An example using Rift Valley fever.
Rostal, Melinda K; Ross, Noam; Machalaba, Catherine; Cordel, Claudia; Paweska, Janusz T; Karesh, William B
2018-06-01
One Health has been promoted by international institutions as a framework to improve public health outcomes. Despite strong overall interest in One Health, country-, local- and project-level implementation remains limited, likely due to the lack of pragmatic and tested operational methods for implementation and metrics for evaluation. Here we use Rift Valley fever virus as an example to demonstrate the value of using a One Health approach for both scientific and resource advantages. We demonstrate that coordinated, a priori investigations between One Health sectors can yield higher statistical power to elucidate important public health relationships as compared to siloed investigations and post-hoc analyses. Likewise, we demonstrate that, across a project or multi-ministry health study, a One Health approach can improve resource efficiency, with resultant cost savings (35% in the presented case). The results of these analyses demonstrate that One Health approaches can be directly and tangibly applied to health investigations.
Climate Change and Projected Impacts in Agriculture: an Example on Mediterranean Crops
NASA Astrophysics Data System (ADS)
Ferrise, R.; Moriondo, M.; Bindi, M.
2009-04-01
Recently, the availability of multi-model ensemble prediction methods has permitted the assignment of likelihoods to future climate projections. This has allowed moving from the scenario-based approach to the risk-based approach in assessing the effects of climate change, thus providing more useful information for decision-makers who, as reported by Schneider (2001), need probability estimates to assess the seriousness of the projected impacts. The probabilistic approach to evaluating crop response to climate change mainly consists in applying an impact model (such as a crop growth model) to a very large number of climate projections, so as to provide a probabilistic distribution of the variable selected to evaluate the impact. By comparing the outputs of the multi-simulation with a critical threshold (such as a minimum yield below which it is not admissible to fall), it is possible to evaluate the risk related to future climate conditions. Unfortunately, such an approach is time-consuming due to the large number of model runs needed. An alternative method relies on setting up impact response surfaces (RS) with respect to key climatic variables, onto which a probabilistic representation of projected changes in the same climatic variables may be overlaid (Fronzek et al. 2008). This approach was exploited within the ENSEMBLES EU Project, aiming at assessing climate change impact on typical Mediterranean crops. This work presents the results of the project, with particular attention to assessing the risk of durum wheat (T. turgidum L. subsp. durum (Desf.) Husn) and grapevine (Vitis vinifera L.) yields falling below fixed thresholds, using probabilistic information about future climate. Methodology The simple mechanistic crop growth models SIRIUS Quality (Jamieson et al., 1998) and VITE-model (Bindi et al., 1997a,b) were selected to simulate durum wheat and grapevine yields, respectively, in present and future scenarios. SIRIUS Quality is a wheat simulation model that calculates biomass production from photosynthetically active radiation and grain growth from simple partitioning rules. VITE-model uses a simplified mechanistic approach based on accumulated degree days, radiation use efficiency and a fruit biomass index to simulate the main processes regulating grapevine development, growth and yield. The selected crop growth models were adopted to create yield RSs of both crops over the suitable cultivated area in the Mediterranean Basin. Yield RSs were calculated by performing a scenario sensitivity analysis, altering the baseline climate with respect to temperature and precipitation changes. The baseline climate consisted of 30 years (1975-2005) of daily minimum and maximum temperatures, rainfall and global radiation. Meteorological data were extracted from the MARS JRC Archive and refer to a grid with a spatial resolution of 50 km x 50 km covering the whole European area. The sensitivity analysis was performed for precipitation changes (from -40% to +20%) and temperature changes (from 0°C to +8°C), uniformly applied across the whole year. To take into account the effect of rising CO2, the yield RSs for future periods were produced considering CO2 air concentration levels according to the A1B SRES emission scenario. For each rainfall and temperature combination the average yield over the 30-year period was calculated.
The probabilistic distribution of future yields was estimated by applying a bilinear interpolation method to overlay, onto the RSs, the data from the Hadley Centre perturbed-physics experiment for future scenarios (joint distribution of annual temperature and rainfall changes). Critical thresholds of impact were determined by calculating, for each grid cell, the distribution of the 30-year average yield according to the joint distribution data for the present period (1990-2010) and selecting the values corresponding to the 20th percentile of the cumulative distribution. Finally, future yields were compared with the yield threshold to assess the risk of yield shortfall, defined in each time period as the percentage of projected yields that did not exceed the selected threshold. Results Maps of durum wheat and grapevine low-productivity risk were generated for the next century over the Mediterranean Basin. For durum wheat, with the exception of Portugal and southern Spain, the risk of low crop productivity shows an overall reduction over the next 30 years, because the fertilizing effect of the CO2 increase counterbalances the negative impact of rising temperature and reduced rainfall. Thereafter, these negative effects become greater and the risk progressively increases, starting from lower latitudes. Maximum risk was estimated for 2060, when strong reductions in yield were projected across the whole study area. The smaller reductions in risk estimated for the end of the century may be explained by the greater uncertainty in climate projections. Southern Portugal, southern Spain and the Peloponnese emerged as the most vulnerable areas, showing increases in risk probability of up to 50%, while risk in Galicia, Slovenia, Croatia and central-southern France always remained lower than at present. As regards grapevine, over most of the study area yields seem to benefit from future climate change. In central-western Europe and at lower latitudes the projected yields never fall below the risk threshold, indicating a prevailing effect of CO2 fertilisation. On the other hand, central-northern Italy and the north of Greece emerge as the most vulnerable areas. In these regions the likelihood of reduced yields quickly rises and remains very high (>50%) until the end of the century, denoting a greater negative effect of temperature and rainfall. Conclusions From these results it may be argued that the impact of future climate change on crop yields results from the contrasting effects of changes in temperature and precipitation, CO2 increase and uncertainty in climate projections. The intensity of these effects is highly site- and crop-dependent and may vary with time, differently affecting the assessment of risk. As a consequence, the patterns of risk of low crop productivity will change depending on which of these effects prevails. References Bindi M. et al., 1997a "A simple model for simulation of growth and development in grapevine (Vitis vinifera L.). I. Model description". Vitis 36:67-71 Bindi M. et al., 1997b "A simple model for simulation of growth and development in grapevine (Vitis vinifera L.). II. Model validation". Vitis 36:73-76 Fronzek S. et al., 2008 "Applying probabilistic projections of climate change with impact models: a case study for sub-arctic palsa mires in Fennoscandia". Climatic Change (submitted) Jamieson et al., 1998 "Sirius: a mechanistic model of wheat response to environmental variation". Eur. J. Agron. 8:161-179.
Schneider S. 2001 "What is ‘dangerous' climate change?". Nature 411:17-19
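The response-surface overlay described above amounts to bilinear interpolation of a yield grid at an ensemble of projected (ΔT, ΔP) pairs, followed by counting the fraction of projections below a threshold. A minimal sketch with a synthetic surface and ensemble follows; the 20th-percentile threshold rule mirrors the text, everything else is a placeholder.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

dT = np.linspace(0.0, 8.0, 9)                  # temperature change (deg C)
dP = np.linspace(-40.0, 20.0, 7)               # precipitation change (%)
# toy 30-year-mean yield response surface on the (dT, dP) grid, t/ha
yield_rs = 4.0 - 0.2 * dT[:, None] + 0.01 * dP[None, :]

rs = RegularGridInterpolator((dT, dP), yield_rs)   # bilinear by default
rng = np.random.default_rng(0)
ensemble = np.column_stack([rng.uniform(1.0, 4.0, 500),     # projected dT
                            rng.uniform(-30.0, 5.0, 500)])  # projected dP
future_yield = rs(ensemble)

present = np.column_stack([rng.uniform(0.0, 0.5, 500),
                           rng.uniform(-5.0, 5.0, 500)])
threshold = np.percentile(rs(present), 20)     # 20th-percentile criterion
risk = np.mean(future_yield < threshold)       # probability of yield shortfall
print(f"risk of shortfall: {risk:.0%}")
```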
Hirai, Kelsi K.; Groisser, Benjamin N.; Copen, William A.; Singhal, Aneesh B.; Schaechter, Judith D.
2015-01-01
Background Long-term motor outcome of acute stroke patients with severe motor impairment is difficult to predict. While measure of corticospinal tract (CST) injury based on diffusion tensor imaging (DTI) in subacute stroke patients strongly predicts motor outcome, its predictive value in acute stroke patients is unclear. Using a new DTI-based, density-weighted CST template approach, we demonstrated recently that CST injury measured in acute stroke patients with moderately-severe to severe motor impairment of the upper limb strongly predicts motor outcome of the limb at 6 months. New Method The current study compared the prognostic strength of CST injury measured in 10 acute stroke patients with moderately-severe to severe motor impairment of the upper limb by the new density-weighted CST template approach versus several variants of commonly used DTI-based approaches. Results and Comparison with Existing Methods Use of the density-weighted CST template approach yielded measurements of acute CST injury that correlated most strongly, in absolute magnitude, with 6-month upper limb strength (rs = 0.93), grip (rs = 0.94) and dexterity (rs = 0.89) compared to all other 11 approaches. Formal statistical comparison of correlation coefficients revealed that acute CST injury measured by the density-weighted CST template approach correlated significantly more strongly with 6-month upper limb strength, grip and dexterity than 9, 10 and 6 of the 11 alternative measurements, respectively. Conclusions Measurements of CST injury in acute stroke patients with substantial motor impairment by the density-weighted CST template approach may have clinical utility for anticipating healthcare needs and improving clinical trial design. PMID:26386285
Exploring Mouse Protein Function via Multiple Approaches.
Huang, Guohua; Chu, Chen; Huang, Tao; Kong, Xiangyin; Zhang, Yunhua; Zhang, Ning; Cai, Yu-Dong
2016-01-01
Although the number of available protein sequences is growing exponentially, functional protein annotations lag far behind. Therefore, accurate identification of protein functions remains one of the major challenges in molecular biology. In this study, we presented a novel approach to predict mouse protein functions. The approach was a sequential combination of a similarity-based approach, an interaction-based approach and a pseudo amino acid composition-based approach. The method achieved an accuracy of about 0.8450 for the 1st-order predictions in the leave-one-out and ten-fold cross-validations. For the results yielded by the leave-one-out cross-validation, although the similarity-based approach alone achieved an accuracy of 0.8756, it was unable to predict the functions of proteins with no homologues. Comparatively, the pseudo amino acid composition-based approach alone reached an accuracy of 0.6786. Although the accuracy was lower than that of the previous approach, it could predict the functions of almost all proteins, even proteins with no homologues. Therefore, the combined method balanced the advantages and disadvantages of both approaches to achieve efficient performance. Furthermore, the results yielded by the ten-fold cross-validation indicate that the combined method is still effective and stable when no close homologs are available. However, the accuracy of the predicted functions can only be determined according to known protein functions based on current knowledge. Many protein functions remain unknown. By exploring the functions of proteins for which the 1st-order predicted functions are wrong but the 2nd-order predicted functions are correct, the 1st-order wrongly predicted functions were shown to be closely associated with the genes encoding the proteins. The so-called wrongly predicted functions could also potentially be correct upon future experimental verification. Therefore, the accuracy of the presented method may be much higher in reality.
Comparison of methods for developing the dynamics of rigid-body systems
NASA Technical Reports Server (NTRS)
Ju, M. S.; Mansour, J. M.
1989-01-01
Several approaches for developing the equations of motion for a three-degree-of-freedom PUMA robot were compared on the basis of computational efficiency (i.e., the number of additions, subtractions, multiplications, and divisions). Of particular interest was the investigation of the use of computer algebra as a tool for developing the equations of motion. Three approaches were implemented algebraically: Lagrange's method, Kane's method, and Wittenburg's method. Each formulation was developed in absolute and relative coordinates. These six cases were compared to each other and to a recursive numerical formulation. The results showed that all of the formulations implemented algebraically required fewer calculations than the recursive numerical algorithm. The algebraic formulations required fewer calculations in absolute coordinates than in relative coordinates. Each of the algebraic formulations could be simplified, using patterns from Kane's method, to yield the same number of calculations in a given coordinate system.
A test of inflated zeros for Poisson regression models.
He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan
2017-01-01
Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, casting serious doubts on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require a zero-inflated Poisson model to perform the test. Simulation studies show that, compared with the Vuong test, our approach is not only better at controlling the type I error rate but also yields more power.
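One simple alternative in the same spirit (illustrative only, not the authors' test statistic) is a parametric bootstrap that asks whether the observed zero count is plausible under a fitted Poisson model, again without fitting a zero-inflated model:

```python
import numpy as np

# A minimal sketch: compare the observed number of zeros against its
# distribution under a Poisson model fitted by maximum likelihood, via
# parametric bootstrap; all data here are simulated.
rng = np.random.default_rng(0)
y = rng.poisson(2.0, size=300)
y[rng.random(300) < 0.15] = 0            # inject excess zeros

lam = y.mean()                           # MLE of the Poisson mean
obs_zeros = np.sum(y == 0)
boot_zeros = np.array([np.sum(rng.poisson(lam, y.size) == 0)
                       for _ in range(2000)])
p_value = np.mean(boot_zeros >= obs_zeros)   # one-sided: too many zeros?
print(obs_zeros, p_value)
```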
Quantifying phase synchronization using instances of Hilbert phase slips
NASA Astrophysics Data System (ADS)
Govindan, R. B.
2018-07-01
We propose to quantify phase synchronization between two signals, x(t) and y(t), by calculating the variance in the Hilbert phase of y(t) at instances of phase slips exhibited by x(t). The proposed approach is tested on numerically simulated coupled chaotic Roessler systems and second-order autoregressive processes. A standard phase synchronization approach, which involves unwrapping the Hilbert phases φ1(t) and φ2(t) of the two signals and analyzing the variance in |n·φ1(t) − m·φ2(t)| mod 2π (n and m are integers), was used for comparison. Furthermore, we compare the performance of the proposed and original approaches using uterine electromyogram signals and show that both approaches yield consistent results. The synchronization indexes obtained from the proposed approach and the standard approach agree reasonably well in all of the systems studied in this work. Our results indicate that the proposed approach, unlike the traditional approach, does not require the non-invertible transformations (unwrapping of the phases and calculation of mod 2π) and can be used reliably to quantify phase synchrony between two signals.
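In code, the proposed index reduces to sampling y's Hilbert phase at x's slip instants and measuring its circular spread. Here is a minimal sketch on synthetic phase-locked sines, where a slip is taken as the unwrapped phase of x crossing a multiple of 2π (an assumption, since the abstract does not spell out the detector):

```python
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0, 100, 20000)
x = np.sin(2 * np.pi * 1.0 * t)
y = np.sin(2 * np.pi * 1.0 * t + 0.3)    # phase-locked test signal

phi_x = np.unwrap(np.angle(hilbert(x)))
phi_y = np.angle(hilbert(y))

# instants where the unwrapped phase of x completes a cycle ("slips")
slips = np.where(np.diff(np.floor(phi_x / (2 * np.pi))) > 0)[0]
phases = phi_y[slips]
sync_index = np.abs(np.mean(np.exp(1j * phases)))   # 1 = perfect synchrony
print(sync_index)
```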
Tao, Fulu; Rötter, Reimund P; Palosuo, Taru; Gregorio Hernández Díaz-Ambrona, Carlos; Mínguez, M Inés; Semenov, Mikhail A; Kersebaum, Kurt Christian; Nendel, Claas; Specka, Xenia; Hoffmann, Holger; Ewert, Frank; Dambreville, Anaelle; Martre, Pierre; Rodríguez, Lucía; Ruiz-Ramos, Margarita; Gaiser, Thomas; Höhn, Jukka G; Salo, Tapio; Ferrise, Roberto; Bindi, Marco; Cammarano, Davide; Schulman, Alan H
2018-03-01
Climate change impact assessments are plagued with uncertainties from many sources, such as climate projections or the inadequacies in structure and parameters of the impact model. Previous studies tried to account for the uncertainty from one or two of these. Here, we developed a triple-ensemble probabilistic assessment using seven crop models, multiple sets of model parameters and eight contrasting climate projections together to comprehensively account for uncertainties from these three important sources. We demonstrated the approach in assessing climate change impact on barley growth and yield at Jokioinen, Finland in the Boreal climatic zone and Lleida, Spain in the Mediterranean climatic zone, for the 2050s. We further quantified and compared the contribution of crop model structure, crop model parameters and climate projections to the total variance of ensemble output using Analysis of Variance (ANOVA). Based on the triple-ensemble probabilistic assessment, the median of simulated yield change was -4% and +16%, and the probability of decreasing yield was 63% and 31% in the 2050s, at Jokioinen and Lleida, respectively, relative to 1981-2010. The contribution of crop model structure to the total variance of ensemble output was larger than that from downscaled climate projections and model parameters. The relative contribution of crop model parameters and downscaled climate projections to the total variance of ensemble output varied greatly among the seven crop models and between the two sites. The contribution of downscaled climate projections was on average larger than that of crop model parameters. This information on the uncertainty from different sources can be quite useful for model users to decide where to put the most effort when preparing or choosing models or parameters for impact analyses. We concluded that the triple-ensemble probabilistic approach that accounts for the uncertainties from multiple important sources provides more comprehensive information for quantifying uncertainties in climate change impact assessments as compared to conventional approaches that are deterministic or only account for the uncertainties from one or two of these sources. © 2017 John Wiley & Sons Ltd.
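The ANOVA-based attribution used above can be sketched as a main-effects variance decomposition over a full-factorial ensemble. The fragment below uses synthetic yield changes and reports each factor's share of the total sum of squares; the sizes follow the study's 7 crop models and 8 climate projections, while the 5 parameter sets are an assumption.

```python
import numpy as np
import pandas as pd

# A minimal sketch of partitioning ensemble variance among factors; the
# study's exact ANOVA design is not reproduced, and yields are synthetic.
rng = np.random.default_rng(1)
models = [f"crop_model_{i}" for i in range(7)]
climates = [f"gcm_{j}" for j in range(8)]
params = [f"par_set_{k}" for k in range(5)]

rows = [(m, c, p, rng.normal()) for m in models for c in climates for p in params]
df = pd.DataFrame(rows, columns=["model", "climate", "params", "yield_change"])

grand = df["yield_change"].mean()
total_ss = ((df["yield_change"] - grand) ** 2).sum()
for factor in ["model", "climate", "params"]:
    group_means = df.groupby(factor)["yield_change"].transform("mean")
    ss = ((group_means - grand) ** 2).sum()
    print(factor, f"{ss / total_ss:.1%} of total variance")
```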
Kinetic and potential sputtering of an anorthite-like glassy thin film
NASA Astrophysics Data System (ADS)
Hijazi, H.; Bannister, M. E.; Meyer, H. M.; Rouleau, C. M.; Meyer, F. W.
2017-07-01
In this paper, we present measurements of He+ and He+2 ion-induced sputtering of an anorthite-like thin film at a fixed solar wind-relevant impact energy of 0.5 keV/amu using a quartz crystal microbalance approach (QCM) for determination of total absolute sputtering yields. He+2 ions are the most abundant multicharged ions in the solar wind, and increased sputtering by these ions in comparison to equivelocity He+ ions is expected to have the biggest effect on the overall sputtering efficiency of solar wind impact on the Moon. Our measurements indicate an almost 70% increase of the sputtering yield for doubly charged incident He ions compared to that for same velocity He+ impact (14.6 amu/ion for He+2 vs. 8.7 amu/ion for He+). Using a selective sputtering model, the new QCM results presented here, together with previously published results for Ar+q ions and SRIM results for the relevant kinetic-sputtering yields, the effect due to multicharged-solar-wind-ion impact on local near-surface modification of lunar anorthite-like soil is explored. It is shown that the multicharged-solar-wind component leads to a more pronounced and significant differentiation of depleted and enriched surface elements as well as a shortening of the timescale over which such surface-compositional modifications might occur in astrophysical settings. In addition, to validate previous and future determinations of multicharged-ion-induced sputtering enhancement for those cases where the QCM approach cannot be used, relative quadrupole mass spectrometry (QMS)-based measurements are presented for the same anorthite-like thin film as were investigated by QCM, and their suitability and limitations for charge state-enhanced yield measurements are discussed.
Zhu, Xiang-cheng; Zhang, Zhen-ping; Zhang, Jun; Deng, Ai-xing; Zhang, Wei-jian
2016-02-01
The traditional rice growing practice has to change to save resources and protect the environment, and it is necessary to develop new technology in rice cultivation. Therefore, a two-year field experiment with Japonica rice (Liaoxing 1) was conducted in Northeast China in 2012 and 2013 to investigate the integrated effects of dense planting with less basal nitrogen (N) and unchanged top-dressing N (IR) on rice yield, N use efficiency (NUE) and greenhouse gas emissions. Compared with the traditional practice (CK), we increased the rice seedling density by 33.3% and reduced the basal N rate by 20%. The results showed that the average N agronomy efficiency and partial factor productivity were improved by 49.6% (P<0.05) and 20.4% (P<0.05), respectively, while the area- and yield-scaled greenhouse gas emissions were reduced by 9.9% and 12.7% (P<0.05), respectively. Although the IR cropping mode decreased panicle number and biomass production, it significantly enhanced rice seed setting rate and harvest index, resulting in an unchanged or even higher yield. NH4+-N and NO3--N concentrations in the rice rhizosphere soil were reduced, resulting in an increase in N recovery efficiency. Generally, proper dense planting with less basal N application could be a good approach for the trade-off between rice yield, NUE and greenhouse gas emissions.
NASA Astrophysics Data System (ADS)
Moore, Frances C.; Baldos, Uris Lantz C.; Hertel, Thomas
2017-06-01
A large number of studies have been published examining the implications of climate change for agricultural productivity that, broadly speaking, can be divided into process-based modeling and statistical approaches. Despite a general perception that results from these methods differ substantially, there have been few direct comparisons. Here we use a database of yield impact studies compiled for the IPCC Fifth Assessment Report (Porter et al. 2014) to systematically compare results from process-based and empirical studies. Controlling for differences in representation of CO2 fertilization between the two methods, we find little evidence for differences in the yield response to warming. The magnitude of CO2 fertilization is instead a much larger source of uncertainty. Based on this set of impact results, we find a very limited potential for on-farm adaptation to reduce yield impacts. We use the Global Trade Analysis Project (GTAP) global economic model to estimate welfare consequences of yield changes and find negligible welfare changes for warming of 1 °C-2 °C if CO2 fertilization is included and large negative effects on welfare without CO2. Uncertainty bounds on welfare changes are highly asymmetric, showing substantial probability of large declines in welfare for warming of 2 °C-3 °C even including the CO2 fertilization effect.
Can sub-Saharan Africa feed itself?
van Ittersum, Martin K.; van Bussel, Lenny G. J.; Wolf, Joost; Grassini, Patricio; van Wart, Justin; Guilpart, Nicolas; Claessens, Lieven; de Groot, Hugo; Wiebe, Keith; Yang, Haishun; Boogaard, Hendrik; van Oort, Pepijn A. J.; van Loon, Marloes P.; Saito, Kazuki; Adimo, Ochieng; Adjei-Nsiah, Samuel; Agali, Alhassane; Bala, Abdullahi; Chikowo, Regis; Kaizzi, Kayuki; Kouressy, Mamoutou; Makoi, Joachim H. J. R.; Ouattara, Korodjouma; Tesfaye, Kindie; Cassman, Kenneth G.
2016-01-01
Although global food demand is expected to increase 60% by 2050 compared with 2005/2007, the rise will be much greater in sub-Saharan Africa (SSA). Indeed, SSA is the region at greatest food security risk because by 2050 its population will increase 2.5-fold and demand for cereals approximately triple, whereas current levels of cereal consumption already depend on substantial imports. At issue is whether SSA can meet this vast increase in cereal demand without greater reliance on cereal imports or major expansion of agricultural area and associated biodiversity loss and greenhouse gas emissions. Recent studies indicate that the global increase in food demand by 2050 can be met through closing the gap between current farm yield and yield potential on existing cropland. Here, however, we estimate it will not be feasible to meet future SSA cereal demand on existing production area by yield gap closure alone. Our agronomically robust yield gap analysis for 10 countries in SSA using location-specific data and a spatial upscaling approach reveals that, in addition to yield gap closure, other more complex and uncertain components of intensification are also needed, i.e., increasing cropping intensity (the number of crops grown per 12 mo on the same field) and sustainable expansion of irrigated production area. If intensification is not successful and massive cropland land expansion is to be avoided, SSA will depend much more on imports of cereals than it does today. PMID:27956604
Feasibility of Image-Guided Transthoracic Core Needle Biopsy in the BATTLE Lung Trial
Tam, Alda L.; Kim, Edward S.; Lee, J. Jack; Ensor, Joe E.; Hicks, Marshall E.; Tang, Ximing; Blumenschein, George R.; Alden, Christine M.; Erasmus, Jeremy J.; Tsao, Anne; Lippman, Scott M.; Hong, Waun K.; Wistuba, Ignacio I.; Gupta, Sanjay
2013-01-01
Purpose As therapy for non-small cell lung cancer (NSCLC) patients becomes more personalized, additional tissue in the form of core needle biopsies (CNBs) for biomarker analysis is increasingly required for determining appropriate treatment and for enrollment into clinical trials. We report our experience with small-caliber percutaneous transthoracic (PT) CNBs for the evaluation of multiple molecular biomarkers in BATTLE (Biomarker-integrated Approaches of Targeted Therapy for Lung Cancer Elimination), a personalized, targeted therapy NSCLC clinical trial. Methods The medical records of patients who underwent PTCNB for consideration of enrollment in BATTLE, were reviewed for diagnostic yield of 11 predetermined molecular markers, and procedural complications. Univariate and multivariate analyses of factors related to patient and lesion characteristics were performed to determine possible influences on diagnostic yield. Results One hundred and seventy PTCNBs were performed using 20-gauge biopsy needles in 151 NSCLC patients screened for the trial. 82.9% of the biopsy specimens were found to have adequate tumor tissue for analysis of the required biomarkers. On multivariate analysis, metastatic lesions were 5.4 times more likely to yield diagnostic tissue as compared to primary tumors (p = 0.0079). Pneumothorax and chest tube insertion rates were 15.3% and 9.4%, respectively. Conclusions Image-guided 20-gauge PTCNB is safe and provides adequate tissue for analysis of multiple biomarkers in the majority of patients being considered for enrollment into a personalized, targeted therapy NSCLC clinical trial. Metastatic lesions are more likely to yield diagnostic tissue as compared to primary tumors. PMID:23442309
Willrodt, Christian; Hoschek, Anna; Bühler, Bruno; Schmid, Andreas; Julsing, Mattijs K
2016-06-01
The microbial production of isoprenoids has recently developed into a prime example of successful bottom-up synthetic biology or top-down systems biology strategies. Respective fermentation processes typically rely on growing recombinant microorganisms. However, the fermentative production of isoprenoids has to compete with cellular maintenance and growth for carbon and energy. Non-growing but metabolically active E. coli cells were evaluated in this study as an alternative biocatalyst configuration to reduce energy and carbon loss towards biomass formation. The use of non-growing cells in an optimized fermentation medium resulted in more than fivefold increased specific limonene yields on cell dry weight and glucose, as compared to the traditional growing-cell approach. Initially, the stability of the resting-cell activity was limited. This instability was overcome by optimizing the minimal fermentation medium, enabling high and stable limonene production rates for up to 8 h and a high specific yield of ≥50 mg limonene per gram cell dry weight. Omitting MgSO4 from the fermentation medium proved very promising for preventing growth while allowing high productivities. Applying an MgSO4 limitation also improved limonene formation by growing cells during non-exponential growth, involving a reduced biomass yield on glucose and a fourfold increase in specific limonene yields on biomass as compared to non-limited cultures. The control of microbial growth via the medium composition was identified as a key but as yet underrated strategy for efficient isoprenoid production. Biotechnol. Bioeng. 2016;113: 1305-1314. © 2015 Wiley Periodicals, Inc.
Abelha, T F; Phillips, T W; Bannock, J H; Nightingale, A M; Dreiss, C A; Kemal, E; Urbano, L; deMello, J C; Green, M; Dailey, L A
2017-02-02
This study compares the performance of a microfluidic technique and a conventional bulk method to manufacture conjugated polymer nanoparticles (CPNs) embedded within a biodegradable poly(ethylene glycol) methyl ether-block-poly(lactide-co-glycolide) (PEG5K-PLGA55K) matrix. The influence of PEG5K-PLGA55K and of the conjugated polymers cyano-substituted poly(p-phenylene vinylene) (CN-PPV) and poly(9,9-dioctylfluorene-2,1,3-benzothiadiazole) (F8BT) on the physicochemical properties of the CPNs was also evaluated. Both techniques enabled CPN production with high end-product yields (∼70-95%). However, while the bulk technique (solvent displacement) under optimal conditions generated small nanoparticles (∼70-100 nm) with similar optical properties (quantum yields ∼35%), the microfluidic approach produced larger CPNs (140-260 nm) with significantly superior quantum yields (49-55%) and tailored emission spectra. CPNs containing CN-PPV showed smaller size distributions and tuneable emission spectra compared to F8BT systems prepared under the same conditions. The presence of PEG5K-PLGA55K did not affect the size or optical properties of the CPNs and provided a neutral net electric charge, as is often required for biomedical applications. The microfluidic flow-based device was successfully used for the continuous preparation of CPNs over a 24-hour period. On the basis of the results presented here, it can be concluded that the microfluidic device used in this study can be used to optimize the production of bright CPNs with tailored properties and good reproducibility.
Brain Stem Cavernous Malformations: Operative Nuances of a Less-Invasive Resection Technique.
Singh, Harminder; Elarjani, Turki; da Silva, Harley Brito; Shetty, Rakshith; Kim, Louis; Sekhar, Laligam N
2017-12-08
Different operative techniques are reported for the resection of brainstem cavernous malformations (BSCMs). The senior author has previously reported on a less-invasive technique of entering the brain stem with piecemeal removal of BSCMs, especially the deep-seated ones. Here we present a larger series of these lesions, emphasizing the approach to the brain stem and case selection, and discuss the nuances of the less-invasive operative technique through case illustrations and intraoperative videos. A retrospective review of 46 consecutive cases of BSCMs, with their clinical and radiographic data, was performed. Nine cases were selected to illustrate 7 different operative approaches and to discuss the surgical nuances of the less-invasive technique unique to each. Postoperative morbidity, defined as an increase in modified Rankin Scale, was observed in 5 patients (10.9%). A residual BSCM was present in 2 patients (4.3%); both underwent reoperation to remove the remainder. At a follow-up of 31.1 ± 27.8 mo, 3 patients experienced recurrence (6.5%). Overall, 65% of our patients improved, 20% stayed the same, and 11% worsened postsurgery. Two patients died, yielding a mortality of 4.3%. Using the less-invasive resection technique for piecemeal BSCM removal in appropriately selected patients has yielded outcomes comparable to or better than those of existing large series. In our experience, lateral, anterolateral, and posterolateral approaches are favorable over direct midline (dorsal or ventral) approaches. A thorough understanding of brain-stem safe-entry zones, in conjunction with appropriate approach selection, is key to a good outcome in challenging cases. Copyright © 2017 by the Congress of Neurological Surgeons
2010-01-01
Background There are growing concerns regarding inequities in health, with poverty being an important determinant of health as well as a product of health status. Within the People's Republic of China (P.R. China), disparities in socio-economic position are apparent, with the rural-urban gap of particular concern. Our aim was to compare direct and proxy methods of estimating household wealth in a rural and a peri-urban setting of Hunan province, P.R. China. Methods We collected data on ownership of household durable assets, housing characteristics, and utility and sanitation variables in two village-wide surveys in Hunan province. We employed principal components analysis (PCA) and principal axis factoring (PAF) to generate household asset-based proxy wealth indices. Households were grouped into quartiles, from 'most wealthy' to 'most poor'. We compared the estimated household wealth for each approach. Asset-based proxy wealth indices were compared to those based on self-reported average annual income and savings at the household level. Results Spearman's rank correlation analysis revealed that PCA and PAF yielded similar results, indicating that either approach may be used for estimating household wealth. In both settings investigated, the two indices were significantly associated with self-reported average annual income and combined income and savings, but not with savings alone. However, low correlation coefficients between the proxy and direct measures of wealth indicated that they are not interchangeable. We found wide disparities in ownership of household durable assets, and utility and sanitation variables, within and between settings. Conclusion PCA and PAF yielded almost identical results and generated robust proxy wealth indices and categories. Pooled data from the rural and peri-urban settings highlighted structural differences in wealth, most likely a result of localized urbanization and modernization. Further research is needed to improve measurements of wealth in low-income and transitional country contexts. PMID:20813070
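An asset-based proxy wealth index of the kind compared above is typically the first principal component of standardized asset indicators, cut into quartiles. A minimal sketch with random binary assets follows; the household count and asset columns are invented.

```python
import numpy as np
from sklearn.decomposition import PCA

# A minimal sketch of a PCA-based wealth index: standardize binary asset
# indicators, take the first principal component as the index, and split
# households into quartiles.
rng = np.random.default_rng(2)
assets = rng.integers(0, 2, size=(200, 6)).astype(float)  # 200 households
z = (assets - assets.mean(0)) / assets.std(0)

index = PCA(n_components=1).fit_transform(z).ravel()
quartile = np.digitize(index, np.quantile(index, [0.25, 0.5, 0.75]))
# quartile 0 = "most poor" ... 3 = "most wealthy" (PC1 sign may need flipping)
```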
Cham, Heining; West, Stephen G.; Ma, Yue; Aiken, Leona S.
2012-01-01
A Monte Carlo simulation was conducted to investigate the robustness of four latent variable interaction modeling approaches (Constrained Product Indicator [CPI], Generalized Appended Product Indicator [GAPI], Unconstrained Product Indicator [UPI], and Latent Moderated Structural Equations [LMS]) under high degrees of non-normality of the observed exogenous variables. Results showed that the CPI and LMS approaches yielded biased estimates of the interaction effect when the exogenous variables were highly non-normal. When the violation of normality was not severe (normal; symmetric with excess kurtosis < 1), the LMS approach yielded the most efficient estimates of the latent interaction effect with the highest statistical power. In highly non-normal conditions, the GAPI and UPI approaches with ML estimation yielded unbiased latent interaction effect estimates, with acceptable actual Type-I error rates for both the Wald and likelihood ratio tests of interaction effect at N ≥ 500. An empirical example illustrated the use of the four approaches in testing a latent variable interaction between academic self-efficacy and positive family role models in the prediction of academic performance. PMID:23457417
A comparison of approaches for estimating relative impacts of nonnative fishes
Lapointe, N.W.R.; Pendleton, R. M.; Angermeier, Paul
2012-01-01
Lack of standard methods for quantifying impact has hindered risk assessments of high-impact invaders. To understand methodological strengths and weaknesses, we compared five approaches (in parentheses) for quantifying impact of nonnative fishes: reviewing documented impacts in a large-scale database (review); surveying fish biologists regarding three categories of impact (socioeconomic, ecological, abundance); and estimating frequency of occurrence from existing collection records (collection). In addition, we compared game and nongame biologists’ ratings of game and nongame species. Although mean species ratings were generally correlated among approaches, we documented important discrepancies. The review approach required little effort but often inaccurately estimated impact in our study region (Mid-Atlantic United States). Game fishes received lower ratings from the socioeconomic approach, which yielded the greatest consistency among respondents. The ecological approach exhibited lower respondent bias but was sensitive to pre-existing perceptions of high-impact invaders. The abundance approach provided the least-biased assessment of region-specific impact but did not account for differences in per-capita effects among species. The collection approach required the most effort and did not provide reliable estimates of impact. Multiple approaches to assessing a species’ impact are instructive, but impact ratings must be interpreted in the context of methodological strengths and weaknesses and key management issues. A combination of our ecological and abundance approaches may be most appropriate for assessing ecological impact, whereas our socioeconomic approach is more useful for understanding social dimensions. These approaches are readily transferrable to other regions and taxa; if refined, they can help standardize the assessment of impacts of nonnative species.
NASA Astrophysics Data System (ADS)
Teng, W. L.; Shannon, H. D.
2011-12-01
The USDA World Agricultural Outlook Board (WAOB) is responsible for monitoring weather and climate impacts on domestic and foreign crop development. One of WAOB's primary goals is to determine the net cumulative effect of weather and climate anomalies on final crop yields. To this end, a broad array of information is consulted, including maps, charts, and time series of recent weather, climate, and crop observations; numerical output from weather and crop models; and reports from the press, USDA attachés, and foreign governments. The resulting agricultural weather assessments are published in the Weekly Weather and Crop Bulletin, to keep farmers, policy makers, and commercial agricultural interests informed of weather and climate impacts on agriculture. Because both the amount and timing of precipitation significantly impact crop yields, WAOB often uses precipitation time series to identify growing seasons with similar weather patterns and help estimate crop yields for the current growing season, based on observed yields in analog years. Although, historically, these analog years are identified through visual inspection, the qualitative nature of this methodology sometimes precludes the definitive identification of the best analog year. One goal of this study is to introduce a more rigorous, statistical approach for identifying analog years. This approach is based on a modified coefficient of determination, termed the analog index (AI). The derivation of AI will be described. Another goal of this study is to compare the performance of AI for time series derived from surface-based observations vs. satellite-based measurements (NASA TRMM and other data). Five study areas and six growing seasons of data were analyzed (2003-2007 as potential analog years and 2008 as the target year). Results thus far show that, for all five areas, crop yield estimates derived from satellite-based precipitation data are closer to measured yields than are estimates derived from surface-based precipitation measurements. Work is continuing to include satellite-based surface soil moisture data and model-assimilated root zone soil moisture. This study is part of a larger effort to improve WAOB estimates by integrating NASA remote sensing observations and research results into WAOB's decision-making environment.
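Because the analog index is described only as a modified coefficient of determination, the sketch below substitutes a plain R² between cumulative precipitation curves to rank candidate analog years against the target season; the specific modification used by WAOB is not reproduced, and all precipitation series are synthetic.

```python
import numpy as np

def r2(target, candidate):
    """Ordinary R^2 of a candidate series against the target series."""
    ss_res = np.sum((target - candidate) ** 2)
    ss_tot = np.sum((target - target.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(3)
# cumulative dekadal precipitation for the target season and candidate years
target = np.cumsum(rng.gamma(2.0, 3.0, size=26))
analogs = {yr: np.cumsum(rng.gamma(2.0, 3.0, size=26))
           for yr in range(2003, 2008)}
ranked = sorted(analogs, key=lambda yr: r2(target, analogs[yr]), reverse=True)
print("best analog year:", ranked[0])
```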
Emerging from the bottleneck: Benefits of the comparative approach to modern neuroscience
Brenowitz, Eliot A.; Zakon, Harold H.
2015-01-01
Neuroscience historically exploited a wide diversity of animal taxa. Recently, however, research focused increasingly on a few model species. This trend accelerated with the genetic revolution, as genomic sequences and genetic tools became available for a few species, which formed a bottleneck. This coalescence on a small set of model species comes with several costs often not considered, especially in the current drive to use mice explicitly as models for human diseases. Comparative studies of strategically chosen non-model species can complement model species research and yield more rigorous studies. As genetic sequences and tools become available for many more species, we are poised to emerge from the bottleneck and once again exploit the rich biological diversity offered by comparative studies.
Switching probability of all-perpendicular spin valve nanopillars
NASA Astrophysics Data System (ADS)
Tzoufras, M.
2018-05-01
In all-perpendicular spin valve nanopillars the probability density of the free-layer magnetization is independent of the azimuthal angle and its evolution equation simplifies considerably compared to the general, nonaxisymmetric geometry. Expansion of the time-dependent probability density to Legendre polynomials enables analytical integration of the evolution equation and yields a compact expression for the practically relevant switching probability. This approach is valid when the free layer behaves as a single-domain magnetic particle and it can be readily applied to fitting experimental data.
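Because the density is axisymmetric, the switching probability reduces to the probability mass of the magnetization in the reversed hemisphere, which follows directly from the Legendre coefficients. A minimal sketch; the coefficients below are illustrative placeholders, not solutions of the evolution equation:

```python
import numpy as np
from numpy.polynomial import legendre as L

def switching_probability(coeffs):
    """Probability mass of the free-layer magnetization in the reversed
    hemisphere (x = cos(theta) < 0), given Legendre coefficients c_l of
    the axisymmetric density p(x) = sum_l c_l P_l(x)."""
    antideriv = L.legint(coeffs)   # term-by-term antiderivative of the series
    return L.legval(0.0, antideriv) - L.legval(-1.0, antideriv)

# Hypothetical coefficients: a density peaked near x = +1 that has
# partially relaxed toward switching (normalized so that c_0 = 1/2).
coeffs = np.array([0.5, 0.6, 0.25, 0.05])
print(f"P_switch = {switching_probability(coeffs):.3f}")
```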
Feedback Implementation of Zermelo's Optimal Control by Sugeno Approximation
NASA Technical Reports Server (NTRS)
Clifton, C.; Homaifar, A.; Bikdash, M.
1997-01-01
This paper proposes an approach to implement optimal control laws of nonlinear systems in real time. Our methodology does not require solving two-point boundary value problems online and may not require it off-line either. The optimal control law is learned using the original Sugeno controller (OSC) from a family of optimal trajectories. We compare the trajectories generated by the OSC and the trajectories yielded by the optimal feedback control law when applied to Zermelo's ship steering problem.
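For context, Zermelo's problem concerns steering a ship at constant speed through a position-dependent current. The sketch below simulates the dynamics under an arbitrary feedback heading law; the linear current profile and the stand-in law are assumptions, and a trained Sugeno controller would take the law's place:

```python
import numpy as np

def simulate_zermelo(heading_law, v=1.0, current=lambda y: 0.5 * y,
                     x0=0.0, y0=0.0, dt=0.01, t_end=10.0):
    """Integrate Zermelo's ship dynamics
        dx/dt = v*cos(theta) + u(y),   dy/dt = v*sin(theta)
    under a feedback heading law theta = heading_law(x, y).
    The current profile u(y) is an illustrative assumption."""
    x, y, traj = x0, y0, []
    for _ in range(int(t_end / dt)):
        theta = heading_law(x, y)
        x += dt * (v * np.cos(theta) + current(y))
        y += dt * v * np.sin(theta)
        traj.append((x, y))
    return np.array(traj)

# A stand-in feedback law aiming at a target point; a learned Sugeno
# controller would replace this function in the paper's scheme.
traj = simulate_zermelo(lambda x, y: np.arctan2(1.0 - y, 5.0 - x))
print("final position:", traj[-1])
```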
Lattice Strain Due to an Atomic Vacancy
Li, Shidong; Sellers, Michael S.; Basaran, Cemal; Schultz, Andrew J.; Kofke, David A.
2009-01-01
Volumetric strain can be divided into two parts: strain due to bond distance change and strain due to vacancy sources and sinks. In this paper, efforts are focused on studying the atomic lattice strain due to a vacancy in an FCC metal lattice with molecular dynamics simulation (MDS). The result has been compared with that from a continuum mechanics method. It is shown that using a continuum mechanics approach yields constitutive results similar to the ones obtained based purely on molecular dynamics considerations.
Miniature integrated-optical wavelength analyzer chip
NASA Astrophysics Data System (ADS)
Kunz, R. E.; Dübendorfer, J.
1995-11-01
A novel integrated-optical chip suitable for realizing compact miniature wavelength analyzers with high linear dispersion is presented. The chip performs the complete task of converting the spectrum of an input beam into a corresponding spatial irradiance distribution without the need for an imaging function. We demonstrate the feasibility of this approach experimentally by monitoring the changes in the mode spectrum of a laser diode on varying its case temperature. Comparing the results with simultaneous measurements by a commercial spectrometer yielded an rms wavelength deviation of 0.01 nm.
One-pot synthesis of 4,8-dibromobenzo[1,2-c;4,5-c']bis[1,2,5]thiadiazole.
Tam, Teck Lip; Li, Hairong; Wei, Fengxia; Tan, Ke Jie; Kloc, Christian; Lam, Yeng Ming; Mhaisalkar, Subodh G; Grimsdale, Andrew C
2010-08-06
A one-step synthesis of 4,8-dibromobenzo[1,2-c;4,5-c']bis[1,2,5]thiadiazole with use of 1,2,4,5-tetraaminobenzene tetrahydrobromide and thionyl bromide in good yield is reported. This unit can then be used in the synthesis of low bandgap materials via palladium-catalyzed coupling reactions. The approach offers a quick and easy way to prepare low bandgap materials as compared to the current literature methods.
NASA Astrophysics Data System (ADS)
Fieuzal, R.; Marais Sicre, C.; Baup, F.
2017-05-01
The yield forecasting of corn constitutes a key issue in agricultural management, particularly in the context of demographic pressure and climate change. This study presents two methods to estimate yields using artificial neural networks: a diagnostic approach based on all the satellite data acquired throughout the agricultural season, and a real-time approach, where estimates are updated after each image is acquired in the microwave and optical domains (Formosat-2, Spot-4/5, TerraSAR-X, and Radarsat-2) throughout the crop cycle. The results are based on the Multispectral Crop Monitoring experimental campaign conducted by the CESBIO (Centre d'Études de la BIOsphère) laboratory in 2010 over an agricultural region in southwestern France. Among the tested sensor configurations (multi-frequency, multi-polarization or multi-source data), the best yield estimation performance (using the diagnostic approach) is obtained with reflectance acquired in the red wavelength region, with a coefficient of determination of 0.77 and an RMSE of 6.6 q ha-1. In the real-time approach, the combination of red reflectance and CHH backscattering coefficients provides the best compromise between the accuracy and earliness of the yield estimate (more than 3 months before the harvest), with an R2 of 0.69 and an RMSE of 7.0 q ha-1 during the development of the central stem. The two best yield estimates are similar in most cases (for more than 80% of the monitored fields), and the differences are related to discrepancies in the crop growth cycle and/or the consequences of pests.
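As an illustration of the real-time configuration, a small regression network can map red reflectance and CHH backscatter to yield. A hedged sketch with synthetic stand-in data; the functional relationship, value ranges, and network size are illustrative only:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic per-field predictors: seasonal red reflectance and C-band HH
# backscatter (dB); the linear ground truth below is an assumption.
rng = np.random.default_rng(1)
n_fields = 200
red = rng.uniform(0.02, 0.15, n_fields)
chh = rng.uniform(-14.0, -6.0, n_fields)
yield_qha = 90.0 - 250.0 * red + 1.5 * chh + rng.normal(0.0, 5.0, n_fields)

X = np.column_stack([red, chh])
X_tr, X_te, y_tr, y_te = train_test_split(X, yield_qha, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                     random_state=0).fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
print(f"RMSE = {rmse:.1f} q/ha")
```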
a Two-Step Classification Approach to Distinguishing Similar Objects in Mobile LIDAR Point Clouds
NASA Astrophysics Data System (ADS)
He, H.; Khoshelham, K.; Fraser, C.
2017-09-01
Nowadays, lidar is widely used in cultural heritage documentation, urban modeling, and driverless car technology for its fast and accurate 3D scanning ability. However, full exploitation of the potential of point cloud data for efficient and automatic object recognition remains elusive. Recently, feature-based methods have become very popular in object recognition on account of their good performance in capturing object details. Compared with global features describing the whole shape of the object, local features recording the fractional details are more discriminative and are applicable for object classes with considerable similarity. In this paper, we propose a two-step classification approach based on point feature histograms and the bag-of-features method for automatic recognition of similar objects in mobile lidar point clouds. Lamp posts, street lights and traffic signs are grouped as one category in the first-step classification because of their similarity to one another compared with trees and vehicles. A finer classification of lamp posts, street lights and traffic signs, based on the result of the first-step classification, is implemented in the second step. The proposed two-step classification approach is shown to yield a considerable improvement over the conventional one-step classification approach.
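A hedged sketch of the two-step idea: local descriptors are quantized into a bag-of-features histogram, a first classifier separates pole-like objects from trees and vehicles, and a second classifier splits the pole-like group. All data, descriptor dimensions, and labels below are synthetic placeholders:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def bag_of_features(local_descriptors, codebook):
    """Normalized histogram of codeword assignments for one object's local
    features (e.g., point feature histograms of a segmented lidar object)."""
    words = codebook.predict(local_descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters)
    return hist / hist.sum()

# Illustrative objects: 50 local descriptors each (33-dim, FPFH-like),
# coarse labels (0: pole-like, 1: other) and fine labels for pole-like.
rng = np.random.default_rng(0)
objects = [rng.normal(loc=i % 4, size=(50, 33)) for i in range(40)]
coarse = np.array([0 if i % 4 < 3 else 1 for i in range(40)])
fine = np.array([i % 4 for i in range(40)])   # 0/1/2: lamp/light/sign

codebook = KMeans(n_clusters=16, n_init=10, random_state=0)
codebook.fit(np.vstack(objects))
X = np.array([bag_of_features(d, codebook) for d in objects])

step1 = SVC().fit(X, coarse)            # step 1: pole-like vs other
pole = coarse == 0
step2 = SVC().fit(X[pole], fine[pole])  # step 2: finer split of pole-like
print("step-1 acc:", step1.score(X, coarse),
      "step-2 acc:", step2.score(X[pole], fine[pole]))
```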
A link prediction approach to cancer drug sensitivity prediction.
Turki, Turki; Wei, Zhi
2017-10-03
Predicting the response of cancer patients to a drug based on genomic information is an important problem in modern clinical oncology. This problem is difficult in part because many available drug sensitivity prediction algorithms consider neither higher-quality cancer cell lines nor the adoption of new feature representations, both of which lead to more accurate prediction of drug responses. By predicting accurate drug responses to cancer, oncologists gain a more complete understanding of the effective treatments for each patient, which is a core goal in precision medicine. In this paper, we model cancer drug sensitivity as a link prediction problem, which is shown to be an effective technique. We evaluate our proposed link prediction algorithms and compare them with an existing drug sensitivity prediction approach based on clinical trial data. The experimental results based on the clinical trial data show the stability of our link prediction algorithms, which yield the highest area under the ROC curve (AUC) and are statistically significant. We propose a link prediction approach to obtain a new feature representation. Compared with an existing approach, incorporating the new feature representation into the link prediction algorithms significantly improves performance.
Kanojia, Gaurav; Willems, Geert-Jan; Frijlink, Henderik W; Kersten, Gideon F A; Soema, Peter C; Amorij, Jean-Pierre
2016-09-25
Spray dried vaccine formulations might be an alternative to traditional lyophilized vaccines. Compared to lyophilization, spray drying is a fast and cheap process extensively used for drying biologicals. The current study provides an approach that utilizes Design of Experiments (DoE) for the spray drying process to stabilize whole inactivated influenza virus (WIV) vaccine. The approach included systematically screening and optimizing the spray drying process variables, determining the desired process parameters and predicting product quality parameters. The process parameters inlet air temperature, nozzle gas flow rate and feed flow rate, and their effect on WIV vaccine powder characteristics such as particle size, residual moisture content (RMC) and powder yield, were investigated. Vaccine powders with a broad range of physical characteristics (RMC 1.2-4.9%, particle size 2.4-8.5 μm and powder yield 42-82%) were obtained. WIV showed no significant loss in antigenicity as revealed by a hemagglutination test. Furthermore, descriptive models generated by the DoE software could be used to determine and select (set) spray drying process parameters. This was used to generate a dried WIV powder with predefined (predicted) characteristics. Moreover, the spray dried vaccine powders retained their antigenic stability even after storage for 3 months at 60°C. The approach used here enabled the generation of a thermostable, antigenic WIV vaccine powder with desired physical characteristics that could potentially be used for pulmonary administration.
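For illustration, the descriptive models produced by DoE software are typically main-effects-plus-interaction regressions over coded factor levels. A minimal sketch with invented numbers (not the study's data), showing how such a model predicts powder yield at a candidate process setting:

```python
import numpy as np

# Coded factor levels (-1..+1) for inlet air temperature, nozzle gas flow
# rate and feed flow rate, with measured powder yield (%) - all illustrative.
X_raw = np.array([[-1, -1, -1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1],
                  [1, 1, -1], [1, -1, 1], [-1, 1, 1], [1, 1, 1],
                  [0, 0, 0], [0, 0, 0]])
y = np.array([46, 70, 55, 42, 78, 62, 50, 82, 66, 64])

def design(X):
    """Main effects plus two-way interactions, with an intercept column."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3])

beta, *_ = np.linalg.lstsq(design(X_raw), y, rcond=None)
new_point = np.array([[0.5, 1.0, -0.5]])   # candidate process setting
print("predicted yield: %.1f%%" % (design(new_point) @ beta)[0])
```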
Agarwal, R; Khan, A; Aggarwal, A N; Gupta, D
2009-12-01
The combination of inhaled corticosteroids (ICS) and long-acting beta2 agonists (LABA) has been used as a single inhaler both for maintenance and reliever therapy in asthma, the SMART approach. The administration of additional CS with each reliever inhalation in response to symptoms is expected to provide better control of airway inflammation. The aim of this meta-analysis was to evaluate the efficacy and safety of the SMART approach versus other approaches to the management of asthma in preventing asthma exacerbations. We searched the MEDLINE and EMBASE databases for studies that have reported exacerbations in the SMART group versus the control group. We calculated the odds ratio (OR) and 95% confidence intervals (CI) to assess the exacerbations in the two groups and pooled the results using a random-effects model. Our search yielded eight studies. The use of the SMART approach compared to a fixed-dose ICS-LABA combination significantly decreased the odds of a severe exacerbation (OR 0.65; 95% CI, 0.53-0.80) and of a severe exacerbation requiring hospitalization/ER treatment (OR 0.69; 95% CI, 0.58-0.83). The use of the SMART approach compared to fixed-dose ICS also significantly decreased the odds of a severe exacerbation (OR 0.52; 95% CI, 0.45-0.61) and of a severe exacerbation requiring medical intervention (OR 0.52; 95% CI, 0.42-0.65). The occurrence of adverse events was similar in the two groups. There was some evidence of statistical heterogeneity. The SMART approach using formoterol-budesonide is superior in preventing exacerbations when compared to traditional therapy with fixed-dose ICS or an ICS-LABA combination, without any increase in adverse events.
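For readers unfamiliar with the pooling step, a random-effects meta-analysis combines per-study log odds ratios weighted by within- plus between-study variance. A DerSimonian-Laird sketch with hypothetical study values (not those of the eight included trials):

```python
import numpy as np

def pool_random_effects(or_values, ci_low, ci_high):
    """DerSimonian-Laird random-effects pooling of odds ratios given
    per-study ORs and 95% CIs (illustrative implementation)."""
    y = np.log(or_values)
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    w = 1.0 / se**2
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)            # weights with between-study variance
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se_pooled = np.sqrt(1.0 / np.sum(w_re))
    return np.exp([pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled])

# Hypothetical per-study ORs for severe exacerbation, SMART vs control.
or_, lo, hi = pool_random_effects(np.array([0.61, 0.70, 0.55]),
                                  np.array([0.45, 0.52, 0.38]),
                                  np.array([0.83, 0.94, 0.80]))
print(f"pooled OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```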
The MICRO-BOSS scheduling system: Current status and future efforts
NASA Technical Reports Server (NTRS)
Sadeh, Norman M.
1992-01-01
In this paper, a micro-opportunistic approach to factory scheduling was described that closely monitors the evolution of bottlenecks during the construction of the schedule and continuously redirects search towards the bottleneck that appears to be most critical. This approach differs from earlier opportunistic approaches, as it does not require scheduling large resource subproblems or large job subproblems before revising the current scheduling strategy. This micro-opportunistic approach was implemented in the context of the MICRO-BOSS factory scheduling system. A study comparing MICRO-BOSS against a macro-opportunistic scheduler suggests that the additional flexibility of the micro-opportunistic approach to scheduling generally yields important reductions in both tardiness and inventory. Current research efforts include: adaptation of MICRO-BOSS to deal with sequence-dependent setups and development of micro-opportunistic reactive scheduling techniques that will enable the system to patch the schedule in the presence of contingencies such as machine breakdowns, raw materials arriving late, job cancellations, etc.
Putting the psychology back into psychological models: mechanistic versus rational approaches.
Sakamoto, Yasuaki; Jones, Matt; Love, Bradley C
2008-09-01
Two basic approaches to explaining the nature of the mind are the rational and the mechanistic approaches. Rational analyses attempt to characterize the environment and the behavioral outcomes that humans seek to optimize, whereas mechanistic models attempt to simulate human behavior using processes and representations analogous to those used by humans. We compared these approaches with regard to their accounts of how humans learn the variability of categories. The mechanistic model departs in subtle ways from rational principles. In particular, the mechanistic model incrementally updates its estimates of category means and variances through error-driven learning, based on discrepancies between new category members and the current representation of each category. The model yields a prediction, which we verify, regarding the effects of order manipulations that the rational approach does not anticipate. Although both rational and mechanistic models can successfully postdict known findings, we suggest that psychological advances are driven primarily by consideration of process and representation and that rational accounts trail these breakthroughs.
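The incremental update described here can be made concrete in a few lines. A sketch, with the learning rate and update form as assumptions rather than the authors' exact model, showing how presentation order changes the final variance estimate:

```python
import numpy as np

def update_category(mean, var, x, lr=0.1):
    """Error-driven incremental update of a category's mean and variance,
    in the spirit of the mechanistic model described above (learning rate
    and update form are illustrative assumptions)."""
    mean_err = x - mean
    mean += lr * mean_err               # move mean toward the new member
    var += lr * (mean_err**2 - var)     # move variance toward squared error
    return mean, var

# Order manipulation: the same items presented in two different orders.
items = np.array([0.2, -0.4, 1.5, -2.0])
for order, xs in [("low-to-high spread", items),
                  ("high-to-low spread", items[::-1])]:
    m, v = 0.0, 1.0
    for x in xs:
        m, v = update_category(m, v, x)
    print(f"{order}: final variance estimate {v:.2f}")
```

The two orders end at different variance estimates, which is the kind of order effect a purely rational (order-invariant) account would not predict.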
Advanced Imaging Methods for Long-Baseline Optical Interferometry
NASA Astrophysics Data System (ADS)
Le Besnerais, G.; Lacour, S.; Mugnier, L. M.; Thiebaut, E.; Perrin, G.; Meimon, S.
2008-11-01
We address the data processing methods needed for imaging with a long baseline optical interferometer. We first describe parametric reconstruction approaches and adopt a general formulation of nonparametric image reconstruction as the solution of a constrained optimization problem. Within this framework, we present two recent reconstruction methods, Mira and Wisard, representative of the two generic approaches for dealing with the missing phase information. Mira is based on an implicit approach and a direct optimization of a Bayesian criterion while Wisard adopts a self-calibration approach and an alternate minimization scheme inspired from radio-astronomy. Both methods can handle various regularization criteria. We review commonly used regularization terms and introduce an original quadratic regularization called "soft support constraint" that favors the object compactness. It yields images of quality comparable to nonquadratic regularizations on the synthetic data we have processed. We then perform image reconstructions, both parametric and nonparametric, on astronomical data from the IOTA interferometer, and discuss the respective roles of parametric and nonparametric approaches for optical interferometric imaging.
Gupta, Manoj Kumar; Vadde, Ramakrishna; Donde, Ravindra; Gouda, Gayatri; Kumar, Jitendra; Nayak, Subhashree; Jena, Mayabini; Behera, Lambodar
2018-05-02
Brown plant hopper (BPH) is one of the major destructive insect pests of rice, causing severe yield loss. Thirty-two BPH resistance genes have been identified in cultivated and wild species of rice. Although the molecular mechanism of rice plant resistance against BPH has been studied through map-based cloning, due to the non-existence of NMR/crystal structures of the Bph14 protein, recognition of the leucine-rich repeat (LRR) domain and its interaction with different ligands are poorly understood. Thus, in the present study, an in silico approach was adopted to predict the three-dimensional structure of the LRR domain of Bph14 using comparative modelling, followed by an interaction study with jasmonic and salicylic acids. The LRR domain and the LRR-jasmonic and salicylic acid complexes were individually subjected to dynamic simulation using GROMACS for energy minimisation and refinement of the structure. The final binding energy of jasmonic and salicylic acid with the LRR domain was calculated using MM/PBSA. Free-energy landscape analysis revealed that the overall stability of the LRR domain of Bph14 is not much affected after forming complexes with jasmonic and salicylic acid. MM/PBSA analysis revealed that the binding affinity of the LRR domain towards salicylic acid is higher than that towards jasmonic acid. The interaction study of the LRR domain with salicylic acid and jasmonic acid reveals that THR987 of the LRR forms hydrogen bonds in both complexes. Thus, THR987 plays an active role in the Bph14-phytochemical interaction for inducing resistance in rice plants against BPH. In future, the Bph14 gene and phytochemicals could be used in BPH management and the development of novel resistant varieties for increasing rice yield.
Mechanics of additively manufactured porous biomaterials based on the rhombicuboctahedron unit cell.
Hedayati, R; Sadighi, M; Mohammadi-Aghdam, M; Zadpoor, A A
2016-01-01
Thanks to recent developments in additive manufacturing techniques, it is now possible to fabricate porous biomaterials with arbitrarily complex micro-architectures. Micro-architectures of such biomaterials determine their physical and biological properties, meaning that one could potentially improve the performance of such biomaterials through rational design of micro-architecture. The relationship between the micro-architecture of porous biomaterials and their physical and biological properties has therefore received increasing attention recently. In this paper, we studied the mechanical properties of porous biomaterials made from a relatively unexplored unit cell, namely rhombicuboctahedron. We derived analytical relationships that relate the micro-architecture of such porous biomaterials, i.e. the dimensions of the rhombicuboctahedron unit cell, to their elastic modulus, Poisson's ratio, and yield stress. Finite element models were also developed to validate the analytical solutions. Analytical and numerical results were compared with experimental data from one of our recent studies. It was found that analytical solutions and numerical results show a very good agreement particularly for smaller values of apparent density. The elastic moduli predicted by analytical and numerical models were in very good agreement with experimental observations too. While in excellent agreement with each other, analytical and numerical models somewhat over-predicted the yield stress of the porous structures as compared to experimental data. As the ratio of the vertical struts to the inclined struts, α, approaches zero and infinity, the rhombicuboctahedron unit cell respectively approaches the octahedron (or truncated cube) and cube unit cells. For those limits, the analytical solutions presented here were found to approach the analytic solutions obtained for the octahedron, truncated cube, and cube unit cells, meaning that the presented solutions are generalizations of the analytical solutions obtained for several other types of porous biomaterials.
Moyle, Peter M; Dai, Wei; Zhang, Yingkai; Batzloff, Michael R; Good, Michael F; Toth, Istvan
2014-05-21
Subunit vaccines offer a means to produce safer, more defined vaccines compared to traditional whole microorganism approaches. Subunit antigens, however, exhibit weak immunity, which is normally overcome through coadministration with adjuvants. Enhanced vaccine properties (e.g., improved potency) can be obtained by linking antigen and adjuvant, as observed for synthetic peptide antigens and Toll-like receptor 2 (TLR2) ligands. As few protective peptide antigens have been reported, compared to protein antigens, we sought to extend the utility of this approach to recombinant proteins, while ensuring that conjugation reactions yielded a single, molecularly defined product. Herein we describe the development and optimization of techniques that enable the efficient, site-specific attachment of three synthetic TLR2 ligands (lipid core peptide (LCP), Pam2Cys, and Pam3Cys) onto engineered protein antigens, permitting the selection of optimal TLR2 agonists during the vaccine development process. Using this approach, broadly protective (J14) and population targeted (seven M protein N-terminal antigens) multiantigenic vaccines against group A streptococcus (GAS; Streptococcus pyogenes) were produced and observed to self-assemble in PBS to yield nanoparticules (69, 101, and 123 nm, respectively). All nanoparticle formulations exhibited self-adjuvanting properties, with rapid, persistent, antigen-specific IgG antibody responses elicited toward each antigen in subcutaneously immunized C57BL/6J mice. These antibodies were demonstrated to strongly bind to the cell surface of five GAS serotypes that are not represented by vaccine M protein N-terminal antigens, are among the top 20 circulating strains in developed countries, and are associated with clinical disease, suggesting that these vaccines may elicit broadly protective immune responses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Souza, Jonas A. de, E-mail: jdesouza@medicine.bsd.uchicago.edu; Santana, Iuri A.; Castro, Gilberto de
The purpose of this review was to describe cost-effectiveness and cost analysis studies across treatment modalities for squamous cell carcinoma of the head and neck (SCCHN), while placing their results in context of the current clinical practice. We performed a literature search in PubMed for English-language studies addressing economic analyses of treatment modalities for SCCHN published from January 2000 to March 2013. We also performed an additional search for related studies published by the National Institute for Health and Clinical Excellence in the United Kingdom. Identified articles were classified into 3 clinical approaches (organ preservation, radiation therapy modalities, and chemotherapy regimens) and into 2 types of economic studies (cost analysis and cost-effectiveness/cost-utility studies). All cost estimates were normalized to US dollars, year 2013 values. Our search yielded 23 articles: 13 related to organ preservation approaches, 5 to radiation therapy modalities, and 5 to chemotherapy regimens. In general, studies analyzed different questions and modalities, making it difficult to reach a conclusion. Even when restricted to comparisons of modalities within the same clinical approach, studies often yielded conflicting findings. The heterogeneity across economic studies of SCCHN should be carefully understood in light of the modeling assumptions and limitations of each study and placed in context with relevant settings of clinical practices and study perspectives. Furthermore, the scarcity of comparative effectiveness and quality-of-life data poses unique challenges for conducting economic analyses for a resource-intensive disease, such as SCCHN, that requires multimodal care. Future research is needed to better understand how to compare the costs and cost-effectiveness of different modalities for SCCHN.
NASA Astrophysics Data System (ADS)
Pietropolli Charmet, Andrea; Stoppa, Paolo; Tasinato, Nicola; Giorgianni, Santi
2017-05-01
This work presents a benchmark study on the calculation of the sextic centrifugal distortion constants employing cubic force fields computed by means of density functional theory (DFT). For a set of semi-rigid halogenated organic compounds several functionals (B2PLYP, B3LYP, B3PW91, M06, M06-2X, O3LYP, X3LYP, ωB97XD, CAM-B3LYP, LC-ωPBE, PBE0, B97-1 and B97-D) were used for computing the sextic centrifugal distortion constants. The effects related to the size of basis sets and the performances of hybrid approaches, where the harmonic data obtained at higher level of electronic correlation are coupled with cubic force constants yielded by DFT functionals, are presented and discussed. The predicted values were compared to both the available data published in the literature and those obtained by calculations carried out at increasing level of electronic correlation: Hartree-Fock Self Consistent Field (HF-SCF), second order Møller-Plesset perturbation theory (MP2), and coupled-cluster single and double (CCSD) level of theory. Different hybrid approaches, having the cubic force field computed at DFT level of theory coupled to harmonic data computed at increasing level of electronic correlation (up to CCSD level of theory augmented by a perturbational estimate of the effects of connected triple excitations, CCSD(T)) were considered. The obtained results demonstrate that they can represent reliable and computationally affordable methods to predict sextic centrifugal terms with an accuracy almost comparable to that yielded by the more expensive anharmonic force fields fully computed at MP2 and CCSD levels of theory. In view of their reduced computational cost, these hybrid approaches pave the route to the study of more complex systems.
NASA Astrophysics Data System (ADS)
Jaensch, Stefan; Merk, Malte; Emmert, Thomas; Polifke, Wolfgang
2018-05-01
The Large Eddy Simulation/System Identification (LES/SI) approach is a general and efficient numerical method for deducing a Flame Transfer Function (FTF) from the LES of turbulent reacting flow. The method may be summarised as follows: a simulated flame is forced with a broadband excitation signal. The resulting fluctuations of the reference velocity and of the global heat release rate are post-processed via SI techniques in order to estimate a low-order model of the flame dynamics. The FTF is readily deduced from the low-order model. The SI method most frequently applied in aero- and thermo-acoustics has been Wiener-Hopf Inversion (WHI). This method is known to yield biased estimates in situations with feedback, thus it was assumed that non-reflective boundary conditions are required to generate accurate results with the LES/SI approach. Recent research has shown that the FTF is part of the so-called Intrinsic ThermoAcoustic (ITA) feedback loop. Hence, identifying an FTF from a compressible LES is always a closed-loop problem, and consequently one should expect that the WHI would yield biased results. However, several studies proved that WHI results compare favourably with validation data. To resolve this apparent contradiction, a variety of identification methods are compared against each other, including models designed for closed-loop identification. In agreement with theory, we show that the estimate given by WHI does not converge to the actual FTF. Fortunately, the error made is small if excitation amplitudes can be set such that the signal-to-noise ratio is large, but not large enough to trigger nonlinear flame dynamics. Furthermore, we conclude that non-reflective boundary conditions are not essentially necessary to apply the LES/SI approach.
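For orientation, WHI amounts to estimating a finite impulse response (and hence the FTF, via its Fourier transform) from input-output correlations; solving the normal equations by least squares over lagged inputs is one standard realization. A sketch on synthetic data, where the FIR "flame" and the noise level standing in for combustion noise are invented:

```python
import numpy as np
from scipy.linalg import toeplitz

def wiener_hopf_fir(u, q, n_taps=30):
    """Estimate an FIR model h (from reference velocity u to global
    heat-release rate q) by solving the Wiener-Hopf normal equations
    via least squares over lagged inputs."""
    U = toeplitz(u, np.zeros(n_taps))      # regressor matrix of lagged inputs
    h, *_ = np.linalg.lstsq(U, q, rcond=None)
    return h

# Synthetic broadband forcing through a known FIR response, plus noise.
rng = np.random.default_rng(0)
u = rng.normal(size=4000)
h_true = 0.8 * np.exp(-np.arange(30) / 6.0) * np.sin(np.arange(30) / 3.0)
q = np.convolve(u, h_true)[: len(u)] + 0.1 * rng.normal(size=len(u))

h_est = wiener_hopf_fir(u, q)
ftf = np.fft.rfft(h_est)                   # FTF follows from the FIR model
print("max impulse-response error:", np.abs(h_est - h_true).max())
```

In this open-loop toy setting the estimate converges; the paper's point is that with acoustic (ITA) feedback the same estimator is biased, with the bias small at a high signal-to-noise ratio.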
Kulkarni, Ankur H; Ghosh, Prasenjit; Seetharaman, Ashwin; Kondaiah, Paturu; Gundiah, Namrata
2018-05-09
Traction forces exerted by adherent cells are quantified using displacements of embedded markers on polyacrylamide substrates due to cell contractility. Fourier Transform Traction Cytometry (FTTC) is widely used to calculate tractions but has inherent limitations due to errors in the displacement fields; these are mitigated through a regularization parameter (γ) in the Reg-FTTC method. An alternate finite element (FE) approach computes tractions on a domain using known boundary conditions. Robust verification and recovery studies are lacking but essential in assessing the accuracy and noise sensitivity of the traction solutions from the different methods. We implemented the L2 regularization method and defined the maximum-curvature point in the plot of traction versus γ as the optimal regularization parameter (γ*) in the Reg-FTTC approach. Traction reconstructions using γ* yield accurate values of low and maximum tractions (Tmax) in the presence of up to 5% noise. Reg-FTTC is hence a clear improvement over the FTTC method but is inadequate to reconstruct low stresses such as those at nascent focal adhesions. FE, implemented using a node-by-node comparison, showed an intermediate reconstruction compared to Reg-FTTC. We performed experiments using mouse embryonic fibroblasts (MEFs) and compared results between these approaches. Tractions from FTTC and FE showed differences of ∼92% and 22% as compared to Reg-FTTC. Selection of an optimum value of γ for each cell reduced variability in the computed tractions as compared to using a single value of γ for all the MEF cells in this study.
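A sketch of the γ* criterion under stated assumptions: Tikhonov (L2) regularized solutions are computed over a grid of γ, and the maximum-curvature point of the solution-norm curve is selected. The toy forward operator below stands in for the elastic Green's function relating tractions to marker displacements:

```python
import numpy as np

def reg_solution_norms(G, u, gammas):
    """Norms of Tikhonov-regularized traction solutions over a gamma grid,
    computed via the SVD of the forward operator (illustrative)."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    b = U.T @ u
    return np.array([np.linalg.norm(Vt.T @ (s * b / (s**2 + g**2)))
                     for g in gammas])

def optimal_gamma(G, u, gammas):
    """Pick gamma at the maximum-curvature point of the log-log curve of
    solution norm versus gamma, as in the Reg-FTTC criterion above."""
    norms = reg_solution_norms(G, u, gammas)
    x, y = np.log(gammas), np.log(norms)
    dy = np.gradient(y, x)
    d2y = np.gradient(dy, x)
    curvature = np.abs(d2y) / (1 + dy**2) ** 1.5
    return gammas[np.argmax(curvature)]

# Toy forward problem: displacement = G @ traction, with 5% noise.
rng = np.random.default_rng(0)
G = rng.normal(size=(120, 80)) / np.sqrt(120)
t_true = np.zeros(80); t_true[10:20] = 1.0
u = G @ t_true + 0.05 * rng.normal(size=120)
print("gamma* =", optimal_gamma(G, u, np.logspace(-4, 1, 60)))
```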
Valluru, Ravi; Reynolds, Matthew P; Salse, Jerome
2014-07-01
Transferring knowledge bases between related species may assist in enlarging the yield potential of crop plants. Being cereals, rice and wheat share a high level of gene conservation; however, they differ at metabolic levels as a part of environmental adaptation, resulting in different yield capacities. This review focuses on the current understanding of the genetic and molecular regulation of yield-associated traits in both crop species, highlights the similarities and differences, and presents the putative knowledge gaps. We focus on the traits associated with phenology, photosynthesis, assimilate partitioning and lodging resistance: the most important drivers of yield potential. Currently, there are large knowledge gaps in the genetic and molecular control of such major biological processes, which can be filled through a translational biology approach transferring genomic and genetic information between rice and wheat.
Predicting yields for autotrophic and cometabolic processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, G.
1995-12-31
The goal of bioprocess engineering is to state how the optimum design and control strategy for a bioprocess follow from the metabolism of the particular microorganism. A necessary step toward this goal is to show how the parameters used in quantitative descriptions of a process (e.g., yield and maintenance coefficients) are related to those describing the metabolism [e.g., Y_ATP, (P/O)]. The "yield equation" approach to this problem involves dividing metabolism into the separate pathways for catabolism, anabolism, respiration, and product formation and balancing the production and consumption of reducing equivalents and ATP. The general approach, demonstrated previously for heterotrophic cell growth and products of fermentation, is illustrated by three new examples: the cell yield for chemoautotrophic iron-oxidizing bacteria, the cometabolic degradation of chloroform by methanotrophic bacteria, and the theoretical yield of succinic acid from glucose.
A Bio-Catalytic Approach to Aliphatic Ketones
Xiong, Mingyong; Deng, Jin; Woodruff, Adam P.; Zhu, Minshan; Zhou, Jun; Park, Sun Wook; Li, Hui; Fu, Yao; Zhang, Kechun
2012-01-01
Depleting oil reserves and growing environmental concerns have necessitated the development of sustainable processes to fuels and chemicals. Here we have developed a general metabolic platform in E. coli to biosynthesize carboxylic acids. By engineering selectivity of 2-ketoacid decarboxylases and screening for promiscuous aldehyde dehydrogenases, synthetic pathways were constructed to produce both C5 and C6 acids. In particular, the production of isovaleric acid reached 32 g/L (0.22 g/g glucose yield), which is 58% of the theoretical yield. Furthermore, we have developed solid base catalysts to efficiently ketonize the bio-derived carboxylic acids such as isovaleric acid and isocaproic acid into high volume industrial ketones: methyl isobutyl ketone (MIBK, yield 84%), diisobutyl ketone (DIBK, yield 66%) and methyl isoamyl ketone (MIAK, yield 81%). This hybrid “Bio-Catalytic conversion” approach provides a general strategy to manufacture aliphatic ketones, and represents an alternate route to expanding the repertoire of renewable chemicals.
Crystal plasticity assisted prediction on the yield locus evolution and forming limit curves
NASA Astrophysics Data System (ADS)
Lian, Junhe; Liu, Wenqi; Shen, Fuhui; Münstermann, Sebastian
2017-10-01
The aim of this study is to predict the plastic anisotropy evolution and the associated forming limit curves of bcc steels purely from their microstructural features by establishing an integrated multiscale modelling approach. Crystal plasticity models are employed to describe the micro deformation mechanism and correlate the microstructure with the mechanical behaviour on the micro and mesoscale. A virtual laboratory is performed considering the statistical information of the microstructure, which serves as the input for the phenomenological plasticity model on the macroscale. On both scales, the evolving features induced by microstructure evolution, such as anisotropic hardening and r-value and yield locus evolution, are seamlessly integrated. The plasticity behaviour predicted by the numerical simulations is compared with experiments. These evolutionary features of the material deformation behaviour are eventually considered for the prediction of formability.
CMOS-compatible batch processing of monolayer MoS2 MOSFETs
NASA Astrophysics Data System (ADS)
Xiong, Kuanchen; Kim, Hyun; Marstell, Roderick J.; Göritz, Alexander; Wipf, Christian; Li, Lei; Park, Ji-Hoon; Luo, Xi; Wietstruck, Matthias; Madjar, Asher; Strandwitz, Nicholas C.; Kaynak, Mehmet; Lee, Young Hee; Hwang, James C. M.
2018-04-01
Thousands of high-performance 2D metal-oxide-semiconductor field effect transistors (MOSFETs) were fabricated on wafer-scale chemical vapor deposited MoS2 with fully-CMOS-compatible processes such as photolithography and aluminum metallurgy. The yield was greater than 50% in terms of effective gate control with less-than-10 V threshold voltage, even for MOSFETs having deep-submicron gate length. The large number of fabricated MOSFETs allowed statistics to be gathered and the main yield limiter to be attributed to the weak adhesion between the transferred MoS2 and the substrate. With cut-off frequencies approaching the gigahertz range, the performances of the MOSFETs were comparable to that of state-of-the-art MoS2 MOSFETs, whether the MoS2 was grown by a thin-film process or exfoliated from a bulk crystal.
Crop biomass and evapotranspiration estimation using SPOT and Formosat-2 Data
NASA Astrophysics Data System (ADS)
Veloso, Amanda; Demarez, Valérie; Ceschia, Eric; Claverie, Martin
2013-04-01
The use of crop models allows simulating plant development, growth and yield under different environmental and management conditions. When combined with high spatial and temporal resolution remote sensing data, these models provide new perspectives for crop monitoring at regional scale. We propose here an approach to estimate time courses of dry aboveground biomass, yield and evapotranspiration (ETR) for summer (maize, sunflower) and winter crops (wheat) by assimilating Green Area Index (GAI) data, obtained from satellite observations, into a simple crop model. Only high spatial resolution and gap-free satellite time series can provide enough information for efficient crop monitoring applications. The potential of remote sensing data is often limited by cloud cover and/or gaps in observation. Data from different sensor systems need then to be combined. For this work, we employed a unique set of Formosat-2 and SPOT images (164 images) and in-situ measurements, acquired from 2006 to 2010 in southwest France. Among the several land surface biophysical variables accessible from satellite observations, the GAI is the one that has a key role in soil-plant-atmosphere interactions and in biomass accumulation process. Many methods have been developed to relate GAI to optical remote sensing signal. Here, seasonal dynamics of remotely sensed GAI were estimated by applying a method based on the inversion of a radiative transfer model using artificial neural networks. The modelling approach is based on the Simple Algorithm for Yield and Evapotranspiration estimate (SAFYE) model, which couples the FAO-56 model with an agro-meteorological model, based on Monteith's light-use efficiency theory. The SAFYE model is a daily time step crop model that simulates time series of GAI, dry aboveground biomass, grain yield and ETR. Crop and soil model parameters were determined using both in-situ measurements and values found in the literature. Phenological parameters were calibrated by the assimilation of the remotely sensed GAI time series. The calibration process led to accurate spatial estimates of GAI, ETR as well as of biomass and yield over the study area (24 km x 24 km window). The results highlight the interest of using a combined approach (crop model coupled with high spatial and temporal resolution remote sensing data) for the estimation of agronomical variables. At local scale, the model reproduced correctly the biomass production and ETR for summer crops (with relative RMSE of 29% and 35%, respectively). At regional scale, estimated yield and water requirement for irrigation were compared to regional statistics of yield and irrigation inventories provided by the local water agency. Results showed good agreements for inter-annual dynamics of yield estimates. Differences between water requirement for irrigation and actual supply were lower than 10% and inter-annual variability was well represented as well. The work, initially focused on summer crops, is being adapted to winter crops.
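The biomass core of this approach follows Monteith's light-use-efficiency logic: daily absorbed radiation, derived from the remotely sensed GAI, is converted to dry mass by an efficiency term. A minimal sketch with illustrative coefficients (not SAFYE's calibrated values) and a synthetic GAI season:

```python
import numpy as np

def safy_like_biomass(gai, par, elue=1.8, climatic_eff=0.48):
    """Daily biomass accumulation in the spirit of Monteith's light-use
    efficiency: dDAM = ELUE * APAR, with APAR derived from GAI.
    All coefficients are illustrative assumptions."""
    k = 0.5                                        # light-extinction coefficient
    apar = climatic_eff * par * (1 - np.exp(-k * gai))
    return np.cumsum(elue * apar)                  # cumulative dry mass, g m-2

# Synthetic season: GAI rising then senescing, constant daily radiation.
days = np.arange(150)
gai = 3.0 * np.exp(-((days - 80) / 35.0) ** 2)     # as if inverted from imagery
par = np.full(days.size, 10.0)                     # MJ m-2 day-1
dam = safy_like_biomass(gai, par)
print(f"end-of-season aboveground biomass: {dam[-1]:.0f} g/m2")
```

In the full SAFYE scheme this biomass module is coupled with the FAO-56 water balance for ETR, and the phenological parameters are calibrated by assimilating the satellite GAI series.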
A sampling and classification item selection approach with content balancing.
Chen, Pei-Hua
2015-03-01
Existing automated test assembly methods typically employ constrained combinatorial optimization. Constructing forms sequentially based on an optimization approach usually results in unparallel forms and requires heuristic modifications. Methods based on a random search approach have the major advantage of producing parallel forms sequentially without further adjustment. This study incorporated a flexible content-balancing element into the statistical perspective item selection method of the cell-only method (Chen et al. in Educational and Psychological Measurement, 72(6), 933-953, 2012). The new method was compared with a sequential interitem distance weighted deviation model (IID WDM) (Swanson & Stocking in Applied Psychological Measurement, 17(2), 151-166, 1993), a simultaneous IID WDM, and a big-shadow-test mixed integer programming (BST MIP) method to construct multiple parallel forms based on matching a reference form item-by-item. The results showed that the cell-only method with content balancing and the sequential and simultaneous versions of IID WDM yielded results comparable to those obtained using the BST MIP method. The cell-only method with content balancing is computationally less intensive than the sequential and simultaneous versions of IID WDM.
NASA Astrophysics Data System (ADS)
Elbakary, Mohamed I.; Iftekharuddin, Khan M.; Papelis, Yiannis; Newman, Brett
2017-05-01
Air Traffic Management (ATM) concepts are commonly tested in simulation to obtain preliminary results and validate the concepts before adoption. Recently, researchers have found that simulation alone is not sufficient because of the complexity associated with ATM concepts. In other words, full-scale tests must eventually take place to provide compelling performance evidence before adopting full implementation. Testing with full-scale aircraft is a high-cost approach that yields high-confidence results, whereas simulation provides a low-risk, low-cost approach with reduced confidence in the results. One possible approach to increase the confidence of the results while simultaneously reducing the risk and the cost is to use unmanned sub-scale aircraft in testing new concepts for ATM. This paper presents simulation results of using unmanned sub-scale aircraft to implement ATM concepts, compared to full-scale aircraft. The simulation results show that the performance of the sub-scale aircraft is quite comparable to that of the full-scale aircraft, which validates the use of sub-scale aircraft in testing new ATM concepts.
Yield Response of Spring Maize to Inter-Row Subsoiling and Soil Water Deficit in Northern China.
Liu, Zhandong; Qin, Anzhen; Zhao, Ben; Ata-Ul-Karim, Syed Tahir; Xiao, Junfu; Sun, Jingsheng; Ning, Dongfeng; Liu, Zugui; Nan, Jiqin; Duan, Aiwang
2016-01-01
Long-term tillage has been shown to induce water stress episodes during the crop growth period due to low water retention capacity. It is unclear whether integrated water-conservation tillage systems, such as spring deep inter-row subsoiling with annual or biennial repetitions, can be developed to alleviate this issue while improving crop productivity. Experiments were carried out in a spring maize cropping system on Calcaric-fluvic Cambisols at Jiaozuo experiment station, northern China, from 2009 to 2014. The effects of three subsoiling depths (i.e., 30 cm, 40 cm, and 50 cm) in combination with annual and biennial repetitions were determined in two single years (i.e., 2012 and 2014) against the conventional tillage (CK). The objectives were to investigate yield response to subsoiling depths and soil water deficit (SWD), and to identify the most effective subsoiling treatment using a systematic assessment. Annual subsoiling to 50 cm (AS-50) increased soil water storage (SWS, mm) by an average of 8% in the 0-20 cm soil depth, 19% in the 20-80 cm depth, and 10% in the 80-120 cm depth, followed by AS-40 and BS-50, whereas AS-30 and BS-30 showed much smaller effects in increasing SWS across the 0-120 cm soil profile, compared to the CK. AS-50 significantly reduced SWD (mm) by an average of 123% during sowing to jointing, 318% during jointing to filling, and 221% during filling to maturity, compared to the CK, followed by AS-40 and BS-50. The integrated effect of increasing SWS and reducing SWD helped AS-50 boost grain yield by an average of 31% and biomass yield by 30%, compared to the CK. A power function for subsoiling depth and a negative linear function for SWD were used to fit the measured yields, showing that the deepest subsoiling depth (50 cm) with the lowest SWD contributed to the highest yield. Systematic assessment showed that AS-50 received the highest evaluation index (0.69 out of 1.0) among all treatments. Deep inter-row subsoiling with annual repetition significantly boosts yield by alleviating SWD in the critical growth period and increasing SWS in the 20-80 cm soil depth. The results allow us to conclude that AS-50 can be adopted as an effective approach to increase crop productivity, alleviate water stress, and improve soil water availability for spring maize in northern China.
The estimation of absorbed dose rates for non-human biota : an extended inter-comparison.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batlle, J. V. I.; Beaugelin-Seiller, K.; Beresford, N. A.
An exercise to compare 10 approaches for the calculation of unweighted whole-body absorbed dose rates was conducted for 74 radionuclides and five of the ICRP's Reference Animals and Plants, or RAPs (duck, frog, flatfish egg, rat and elongated earthworm), selected for this exercise to cover a range of body sizes, dimensions and exposure scenarios. Results were analysed using a non-parametric method requiring no specific hypotheses about the statistical distribution of data. The obtained unweighted absorbed dose rates for internal exposure compare well between the different approaches, with 70% of the results falling within a range of variation of ±20%. The variation is greater for external exposure, although 90% of the estimates are within an order of magnitude of one another. There are some discernible patterns where specific models over- or under-predicted. These are explained based on the methodological differences, including the number of daughter products included in the calculation of dose rate for a parent nuclide; source-target geometry; databases for discrete energy and yield of radionuclides; rounding errors in integration algorithms; and intrinsic differences in calculation methods. For certain radionuclides, these factors combine to generate systematic variations between approaches. Overall, the technique chosen to interpret the data enabled methodological differences in dosimetry calculations to be quantified and compared, allowing the identification of common issues between different approaches and providing greater assurance on the fundamental dose conversion coefficient approaches used in available models for assessing radiological effects to biota.
Antar, A; Otrock, Z K; Kharfan-Dabaja, M A; Ghaddara, H A; Kreidieh, N; Mahfouz, R; Bazarbachi, A
2015-06-01
The optimal stem cell mobilization regimen for patients with multiple myeloma (MM) remains undefined. We retrospectively compared our experience in hematopoietic cell mobilization in 83 MM patients using fractionated high-dose CY and G-CSF with G-CSF plus preemptive plerixafor. All patients in the CY group (n=56) received fractionated high-dose CY (5 g/m² divided into five doses of 1 g/m² every 3 h) with G-CSF. All patients in the plerixafor group (n=27) received G-CSF and plerixafor preemptively based on an established algorithm. Compared with plerixafor, CY use was associated with a higher total CD34+ cell yield (7.5 × 10⁶ vs 15.5 × 10⁶ cells/kg, P=0.005). All patients in both groups yielded ≥4 × 10⁶ CD34+ cells/kg. Conversely, CY use was associated with a high frequency of febrile neutropenia, blood and platelet transfusion needs, and hospitalizations. The average total cost of mobilization in Lebanon was slightly higher in the plerixafor group ($7886 vs $7536; P=0.16). Our data indicate robust stem cell mobilization in MM patients with either fractionated high-dose CY and G-CSF or G-CSF alone with preemptive plerixafor. The chemo-mobilization approach was associated with a twofold stem cell yield and slightly lower cost, but significantly increased toxicity.
Estimation of Rice Crop Yields Using Random Forests in Taiwan
NASA Astrophysics Data System (ADS)
Chen, C. F.; Lin, H. S.; Nguyen, S. T.; Chen, C. R.
2017-12-01
Rice is globally one of the most important food crops, directly feeding more people than any other crop. Rice is not only the most important commodity, but also plays a critical role in the economy of Taiwan because it provides employment and income for large rural populations. The rice harvested area and production are thus monitored yearly due to the government's initiatives. Agronomic planners need such information for more precise assessment of food production to tackle issues of national food security and policymaking. This study aimed to develop a machine-learning approach using physical parameters to estimate rice crop yields in Taiwan. We processed the data for the 2014 cropping seasons, following three main steps: (1) data pre-processing to construct input layers, including soil types and weather parameters (e.g., maxima and minima of air temperature, precipitation, and solar radiation) obtained from meteorological stations across the country; (2) crop yield estimation using random forests, owing to their merits: they can process thousands of variables, estimate missing data, maintain accuracy when a large proportion of the data is missing, overcome most over-fitting problems, and run fast and efficiently when handling large datasets; and (3) error verification. To execute the model, we separated the datasets into two groups of pixels: group-1 (70% of pixels) for training the model and group-2 (30% of pixels) for testing the model. Once the model is trained to produce a small and stable out-of-bag error (i.e., the mean squared error between predicted and actual values), it can be used for estimating rice yields of cropping seasons. The results obtained from the random forests-based regression were compared with the actual yield statistics; the values of root mean square error (RMSE) and mean absolute error (MAE) achieved for the first rice crop were 6.2% and 2.7%, respectively, while those for the second rice crop were 5.3% and 2.9%, respectively. Although there are several uncertainties attributed to the data quality of the input layers, our study demonstrates the promising application of random forests for estimating rice crop yields at the national level in Taiwan. This approach could be transferable to other regions of the world for improving large-scale estimation of rice crop yields.
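A hedged sketch of steps (2)-(3): a random forest regressor trained on a 70/30 split of soil and weather predictors, reporting the out-of-bag fit plus RMSE and MAE on the held-out pixels. All data below are synthetic placeholders; the predictor names, ranges, and ground-truth relationship are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for the input layers: soil class plus seasonal
# weather aggregates per pixel.
rng = np.random.default_rng(42)
n = 3000
soil = rng.integers(0, 5, n)
tmax, tmin = rng.uniform(28, 35, n), rng.uniform(18, 24, n)
rain, solar = rng.uniform(300, 1500, n), rng.uniform(300, 600, n)
yield_t_ha = (4.0 + 0.002 * rain + 0.004 * solar - 0.08 * (tmax - 30)
              + 0.1 * soil + rng.normal(0, 0.3, n))

X = np.column_stack([soil, tmax, tmin, rain, solar])
X_tr, X_te, y_tr, y_te = train_test_split(X, yield_t_ha, test_size=0.3,
                                          random_state=0)   # 70/30 split
rf = RandomForestRegressor(n_estimators=300, oob_score=True,
                           random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
rmse = np.sqrt(np.mean((pred - y_te) ** 2))
mae = np.mean(np.abs(pred - y_te))
print(f"OOB R2 {rf.oob_score_:.2f}, RMSE {rmse:.2f} t/ha, MAE {mae:.2f} t/ha")
```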
Comparing estimates of climate change impacts from process-based and statistical crop models
NASA Astrophysics Data System (ADS)
Lobell, David B.; Asseng, Senthold
2017-01-01
The potential impacts of climate change on crop productivity are of widespread interest to those concerned with addressing climate change and improving global food security. Two common approaches to assess these impacts are process-based simulation models, which attempt to represent key dynamic processes affecting crop yields, and statistical models, which estimate functional relationships between historical observations of weather and yields. Examples of both approaches are increasingly found in the scientific literature, although often published in different disciplinary journals. Here we compare published sensitivities to changes in temperature, precipitation, carbon dioxide (CO2), and ozone from each approach for the subset of crops, locations, and climate scenarios for which both have been applied. Despite a common perception that statistical models are more pessimistic, we find no systematic differences between the predicted sensitivities to warming from process-based and statistical models up to +2 °C, with limited evidence at higher levels of warming. For precipitation, there are many reasons why estimates could be expected to differ, but few estimates exist to develop robust comparisons, and precipitation changes are rarely the dominant factor for predicting impacts given the prominent role of temperature, CO2, and ozone changes. A common difference between process-based and statistical studies is that the former tend to include the effects of CO2 increases that accompany warming, whereas statistical models typically do not. Major needs moving forward include incorporating CO2 effects into statistical studies, improving both approaches’ treatment of ozone, and increasing the use of both methods within the same study. At the same time, those who fund or use crop model projections should understand that in the short-term, both approaches when done well are likely to provide similar estimates of warming impacts, with statistical models generally requiring fewer resources to produce robust estimates, especially when applied to crops beyond the major grains.
Barrett, Bruce; Brown, Roger; Mundt, Marlon
2008-02-01
Evaluative health-related quality-of-life instruments used in clinical trials should be able to detect small but important changes in health status. Several approaches to minimal important difference (MID) and responsiveness have been developed. To compare anchor-based and distributional approaches to important difference and responsiveness for the Wisconsin Upper Respiratory Symptom Survey (WURSS), an illness-specific quality of life outcomes instrument. Participants with community-acquired colds self-reported daily using the WURSS-44. Distribution-based methods calculated standardized effect size (ES) and standard error of measurement (SEM). Anchor-based methods compared daily interval changes to global ratings of change, using: (1) standard MID methods based on correspondence to ratings of "a little better" or "somewhat better," and (2) two-level multivariate regression models. About 150 adults were monitored throughout their colds (1,681 sick days): 88% were white, 69% were women, and 50% had completed college. The mean age was 35.5 years (SD = 14.7). WURSS scores increased 2.2 points from the first to second day, and then dropped by an average of 8.2 points per day from days 2 to 7. The SEM averaged 9.1 during these 7 days. Standard methods yielded a between-day MID of 22 points. Regression models of MID projected 11.3-point daily changes. Dividing these estimates of small-but-important-difference by pooled SDs yielded coefficients of .425 for standard MID, .218 for the regression model, .177 for SEM, and .157 for ES. These imply per-group sample sizes of 870 using ES, 616 for SEM, 302 for the regression model, and 89 for standard MID, assuming alpha = .05, beta = .20 (80% power), and two-tailed testing. Distribution and anchor-based approaches provide somewhat different estimates of small but important difference, which in turn can have substantial impact on trial design.
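For context, per-group sample sizes follow from the standardized difference d via the usual normal-approximation formula. The sketch below reproduces the calculation in generic form; it yields values close to, though not identical to, those reported above, which likely reflect additional correction terms in the original analysis:

```python
import numpy as np
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    """Per-group sample size for a two-tailed two-sample comparison of a
    standardized difference d (normal approximation):
        n = 2 * (z_{1-alpha/2} + z_{1-beta})**2 / d**2
    """
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(2 * z**2 / d**2))

# Standardized small-but-important differences reported above.
for label, d in [("standard MID", 0.425), ("regression model", 0.218),
                 ("SEM", 0.177), ("effect size", 0.157)]:
    print(f"{label:17s} d={d:.3f} -> n/group ~ {n_per_group(d)}")
```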
Kaur, Harparkash; Allan, Elizabeth Louise; Mamadu, Ibrahim; Hall, Zoe; Ibe, Ogochukwu; El Sherbiny, Mohamed; van Wyk, Albert; Yeung, Shunmay; Swamidoss, Isabel; Green, Michael D.; Dwivedi, Prabha; Culzoni, Maria Julia; Clarke, Siân; Schellenberg, David; Fernández, Facundo M.; Onwujekwe, Obinna
2015-01-01
Background: Artemisinin-based combination therapies are recommended by the World Health Organisation (WHO) as first-line treatment for Plasmodium falciparum malaria, yet medication must be of good quality for efficacious treatment. A recent meta-analysis reported that 35% (796/2,296) of antimalarial drug samples from 21 Sub-Saharan African countries, purchased from outlets predominantly using convenience sampling, failed chemical content analysis. We used three sampling strategies to purchase artemisinin-containing antimalarials (ACAs) in Enugu metropolis, Nigeria, and compared the resulting quality estimates. Methods: ACAs were purchased using three sampling approaches - convenience, mystery clients and overt - within a defined area and sampling frame in Enugu metropolis. The active pharmaceutical ingredients (APIs) were assessed using high-performance liquid chromatography and confirmed by mass spectrometry at three independent laboratories. Results were expressed as a percentage of the APIs stated on the packaging and used to categorise each sample as acceptable quality, substandard, degraded, or falsified. Results: Content analysis of 3,024 samples purchased from 421 outlets using convenience (n=200), mystery (n=1,919) and overt (n=905) approaches showed that, overall, 90.8% of ACAs were of acceptable quality, 6.8% substandard, 1.3% degraded and 1.2% falsified. Convenience sampling yielded a significantly higher prevalence of poor-quality ACAs than the mystery and overt sampling strategies, which yielded results comparable to each other. Artesunate (n=135; 4 falsified) and dihydroartemisinin (n=14) monotherapy tablets, not recommended by WHO, were also identified. Conclusion: Randomised sampling identified fewer falsified ACAs than previously reported by convenience approaches. Our findings emphasise the need for specific consideration to be given to the sampling frame and sampling approach if representative information on drug quality is to be obtained.
NASA Technical Reports Server (NTRS)
Pierzga, M. J.; Wood, J. R.
1984-01-01
An experimental investigation of the three-dimensional flow field through a low-aspect-ratio, transonic, axial-flow fan rotor has been conducted using an advanced laser anemometer (LA) system. Laser velocimeter measurements of the rotor flow field at the design operating speed and over a range of through-flow conditions are compared to analytical solutions. The numerical technique used herein yields the solution to the full, three-dimensional, unsteady Euler equations using an explicit time-marching, finite-volume approach. The numerical analysis, when coupled with a simplified boundary layer calculation, generally yields good agreement with the experimental data. The test rotor has an aspect ratio of 1.56, a design total pressure ratio of 1.629 and a tip relative Mach number of 1.38. The high spatial resolution of the LA data matrix (9 radial by 30 axial by 50 blade-to-blade) permits details of the transonic flow field such as shock location, turning distribution and blade loading levels to be investigated and compared to analytical results.
High temperature pre-digestion of corn stover biomass for improved product yields
Brunecky, Roman; Hobdey, Sarah E.; Taylor, Larry E.; ...
2014-12-03
Introduction: The efficient conversion of lignocellulosic feedstocks remains a key step in the commercialization of biofuels. One of the barriers to cost-effective conversion of lignocellulosic biomass to sugars remains the enzymatic saccharification process step. Here, we describe a novel hybrid processing approach comprising enzymatic pre-digestion with newly characterized hyperthermophilic enzyme cocktails followed by conventional saccharification with commercial enzyme preparations. Dilute-acid-pretreated corn stover was subjected to this new procedure to test its efficacy. Thermotolerant enzymes from Acidothermus cellulolyticus and Caldicellulosiruptor bescii were used to pre-digest pretreated biomass at elevated temperatures prior to saccharification by the commercial cellulase formulation. Results: We report that pre-digestion of biomass with these enzymes at elevated temperatures prior to addition of the commercial cellulase formulation increased conversion rates and yields when compared to the commercial cellulase formulation alone under low-solids conditions. In conclusion, our results demonstrating improvements in rates and yields of conversion point the way forward for hybrid biomass conversion schemes utilizing catalytic amounts of hyperthermophilic enzymes.
Optimization for Peptide Sample Preparation for Urine Peptidomics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sigdel, Tara K.; Nicora, Carrie D.; Hsieh, Szu-Chuan
2014-02-25
Analysis of native or endogenous peptides in biofluids can provide valuable insights into disease mechanisms. Furthermore, the detected peptides may also have utility as potential biomarkers for non-invasive monitoring of human diseases. The non-invasive nature of urine collection and the abundance of peptides in urine make analysis by high-throughput 'peptidomics' methods an attractive approach for investigating the pathogenesis of renal disease. However, urine peptidomics methodologies can be problematic with regard to difficulties associated with sample preparation. The urine matrix can introduce significant background interference into the analytical measurements, hampering both the identification of peptides and the depth of the peptidomics read when utilizing LC-MS-based peptidome analysis. We report on a novel adaptation of the standard solid phase extraction (SPE) method to a modified SPE (mSPE) approach for improved peptide yield and analysis sensitivity with LC-MS-based peptidomics, compared in terms of time, cost, clogging of the LC-MS column, peptide yield, peptide quality, and number of peptides identified by each method. Expense and time requirements were comparable for both SPE and mSPE, but more interfering contaminants from the urine matrix were evident in the SPE preparations (e.g., clogging of the LC-MS columns, yellowish background coloration of prepared samples due to retained urobilin, lower peptide yields) when compared to the mSPE method. When we compared data from technical replicates of 4 runs, the mSPE method provided significantly improved efficiencies for the preparation of samples from urine (e.g., mSPE peptide identification 82% versus 18% with SPE; p = 8.92E-05). Additionally, peptide identifications obtained with the mSPE method highlighted the biology of differential activation of urine peptidases during acute renal transplant rejection, with distinct laddering of specific peptides that was obscured for most proteins when utilizing the conventional SPE method. In conclusion, the mSPE method was found to be superior to the conventional, standard SPE method for urine peptide sample preparation when applying LC-MS peptidomics analysis, due to the optimized sample clean-up that provided improved experimental inference from the confidently identified peptides.
Adapted random sampling patterns for accelerated MRI.
Knoll, Florian; Clason, Christian; Diwoky, Clemens; Stollberger, Rudolf
2011-02-01
Variable-density random sampling patterns have recently become increasingly popular for accelerated imaging strategies, as they lead to incoherent aliasing artifacts. However, the design of these sampling patterns is still an open problem. Current strategies use model assumptions, like polynomials of different order, to generate a probability density function that is then used to generate the sampling pattern. This approach relies on the optimization of design parameters, which is very time consuming and therefore impractical for daily clinical use. This work presents a new approach that generates sampling patterns by making use of power spectra of existing reference data sets, and hence requires neither parameter tuning nor an a priori mathematical model of the density of sampling points. The approach is validated with downsampling experiments, as well as with accelerated in vivo measurements. The proposed approach is compared with established sampling patterns, and its generalization potential is tested by using a range of reference images. Quantitative evaluation of the downsampling experiments uses RMS differences to the original, fully sampled data set. Our results demonstrate that the image quality of the presented method is comparable to that of an established model-based strategy when the model parameter is optimized, and superior to that strategy with non-optimized model parameters. However, no random sampling pattern showed superior performance when compared to conventional Cartesian subsampling for the considered reconstruction strategy.
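The core of the method, generating a variable-density mask from the power spectrum of a reference data set, can be sketched in a few lines. The snippet below is a minimal NumPy illustration of that idea, not the authors' implementation; the phantom reference image and the acceleration factor are placeholders.

```python
import numpy as np

def adapted_sampling_mask(reference, acceleration=4, rng=None):
    """Variable-density undersampling mask whose point density follows the
    power spectrum of a fully sampled reference image (a sketch of the
    paper's idea, not the authors' implementation)."""
    rng = np.random.default_rng(rng)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(reference))) ** 2
    density = spectrum / spectrum.sum()                  # sums to 1
    # expected fraction kept = 1/acceleration; cap per-point probability at 1
    prob = np.clip(density * reference.size / acceleration, 0.0, 1.0)
    return rng.random(reference.shape) < prob

ref = np.outer(np.hanning(128), np.hanning(128))         # toy reference image
mask = adapted_sampling_mask(ref, acceleration=4, rng=0)
print(mask.mean())   # roughly 1/4 of k-space retained (less after capping)
```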
Development of a CMOS-compatible PCR chip: comparison of design and system strategies
NASA Astrophysics Data System (ADS)
Erill, Ivan; Campoy, Susana; Rus, José; Fonseca, Luis; Ivorra, Antoni; Navarro, Zenón; Plaza, José A.; Aguiló, Jordi; Barbé, Jordi
2004-11-01
In the last decade, research on chips for DNA amplification through the polymerase chain reaction (PCR) has been relatively abundant, but it has taken very diverse approaches, leaving little common ground for a straightforward comparison of results. Here we report the development of a line of PCR chips that is fully compatible with complementary metal-oxide-semiconductor (CMOS) technology and its revealing use as a general platform to test and compare a wide range of experimental parameters involved in PCR-chip design and operation. Peltier-heated and polysilicon thin-film-driven PCR chips have been produced and directly compared in terms of efficiency, speed and power consumption, showing that thin-film systems run faster and more efficiently than Peltier-based ones, but yield inferior PCR products. Serpentine-like chamber designs have also been compared with standard rectangular designs and with the rhomboidal chamber shape reported here, showing that serpentine-like chambers have no detrimental effect on PCR efficiency when using non-flow-through schemes, and that chamber design has a strong impact on sample insertion/extraction yields. With accurate temperature control (±0.2 °C) we have optimized reaction kinetics to yield sound PCR amplifications of 25 µl mixtures in 20 min with 24.4 s cycle times, confirming that a titrated amount of bovine serum albumin (BSA, 2.5 µg/µl) is essential to counteract polymerase adsorption at the chip walls. The use of a CMOS-compatible technological process paves the way for easy adaptation to foundry requirements and for scalable integration of electro-optic detection and control circuitry.
Marchese Robinson, Richard L; Palczewska, Anna; Palczewski, Jan; Kidley, Nathan
2017-08-28
The ability to interpret the predictions made by quantitative structure-activity relationships (QSARs) offers a number of advantages. While QSARs built using nonlinear modeling approaches, such as the popular Random Forest algorithm, might sometimes be more predictive than those built using linear modeling approaches, their predictions have been perceived as difficult to interpret. However, a growing number of approaches have been proposed for interpreting nonlinear QSAR models in general and Random Forest in particular. In the current work, we compare the performance of Random Forest to those of two widely used linear modeling approaches: linear Support Vector Machines (SVMs) (or Support Vector Regression (SVR)) and partial least-squares (PLS). We compare their performance in terms of their predictivity as well as the chemical interpretability of the predictions using novel scoring schemes for assessing heat map images of substructural contributions. We critically assess different approaches for interpreting Random Forest models as well as for obtaining predictions from the forest. We assess the models on a large number of widely employed public-domain benchmark data sets corresponding to regression and binary classification problems of relevance to hit identification and toxicology. We conclude that Random Forest typically yields comparable or possibly better predictive performance than the linear modeling approaches and that its predictions may also be interpreted in a chemically and biologically meaningful way. In contrast to earlier work looking at interpretation of nonlinear QSAR models, we directly compare two methodologically distinct approaches for interpreting Random Forest models. The approaches for interpreting Random Forest assessed in our article were implemented using open-source programs that we have made available to the community. These programs are the rfFC package (https://r-forge.r-project.org/R/?group_id=1725) for the R statistical programming language and the Python program HeatMapWrapper (https://doi.org/10.5281/zenodo.495163) for heat map generation.
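For readers who want to reproduce the predictivity comparison in spirit, the sketch below benchmarks Random Forest against linear SVR and PLS on synthetic regression data and derives a simple interpretation via permutation importance. Note that the paper's own interpretation tools are the rfFC R package and the HeatMapWrapper Python program; this scikit-learn-only version is a generic stand-in, and the data set and model settings are illustrative.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score
from sklearn.inspection import permutation_importance

X, y = make_regression(n_samples=300, n_features=20, noise=0.5, random_state=0)

models = {
    "Random Forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "linear SVR": SVR(kernel="linear"),
    "PLS": PLSRegression(n_components=5),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {r2:.2f}")

# one generic route to interpretation: permutation importance of the forest
rf = models["Random Forest"].fit(X, y)
imp = permutation_importance(rf, X, y, n_repeats=10, random_state=0)
print("most influential feature index:", np.argmax(imp.importances_mean))
```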
Inelastic response of metal matrix composites under biaxial loading
NASA Technical Reports Server (NTRS)
Mirzadeh, F.; Pindera, Marek-Jerzy; Herakovich, Carl T.
1990-01-01
Elements of the analytical/experimental program to characterize the response of silicon carbide/titanium (SCS-6/Ti-15-3) composite tubes under biaxial loading are outlined. The analytical program comprises prediction of initial yielding and subsequent inelastic response of unidirectional and angle-ply silicon carbide/titanium tubes using a combined micromechanics approach and laminate analysis. The micromechanics approach is based on the method-of-cells model and can generate the effective thermomechanical response of metal matrix composites in the linear and inelastic regions in the presence of temperature- and time-dependent properties of the individual constituents and imperfect bonding. The influence of these factors on the initial yield surfaces and inelastic response of (0) and (±45)s SCS-6/Ti-15-3 laminates loaded by different combinations of stresses is examined, and the generated analytical predictions will be compared with the experimental results. The experimental program comprises generation of initial yield surfaces and subsequent stress-strain curves, and determination of failure loads of the SCS-6/Ti-15-3 tubes under selected loading conditions. The results of the analytical investigation are employed to define the actual loading paths for the experimental program. A brief overview of the experimental methodology is given. This includes the test capabilities of the Composite Mechanics Laboratory at the University of Virginia, the SCS-6/Ti-15-3 composite tubes secured from McDonnell Douglas Corporation, a test fixture specifically developed for combined axial-torsional loading, and the MTS combined axial-torsion loader that will be employed in the actual testing.
NASA Astrophysics Data System (ADS)
Banavath, Jayanna N.; Chakradhar, Thammineni; Pandit, Varakumar; Konduru, Sravani; Guduru, Krishna K.; Akila, Chandra S.; Podha, Sudhakar; Puli, Chandra O. R.
2018-03-01
Peanut is an important oilseed and food legume cultivated as a rain-fed crop in the semi-arid tropics. Drought and high salinity are the major abiotic stresses limiting peanut productivity in this region, so development of drought- and salt-tolerant peanut varieties with improved yield potential through a biotechnological approach is highly desirable for improving peanut productivity in marginal geographies. As abiotic stress tolerance and yield represent complex traits, engineering of regulatory genes to produce abiotic stress-resilient transgenic crops appears to be a viable approach. In the present study, we developed transgenic peanut plants expressing an Arabidopsis homeodomain-leucine zipper transcription factor (AtHDG11) under the stress-inducible rd29A promoter. Stress-inducible expression of AtHDG11 in three independent homozygous transgenic peanut lines resulted in improved drought and salt tolerance through up-regulation of known stress-responsive genes (LEA, HSP70, Cu/Zn SOD, APX, P5CS, NCED1, RRS5, ERF1, NAC4, MIPS, Aquaporin, TIP, ELIP) in the stress gene network, antioxidative enzymes and free proline, along with improved water-use-efficiency traits such as a longer root system, reduced stomatal density, higher chlorophyll content, increased specific leaf area, improved photosynthetic rates and increased intrinsic instantaneous WUE. Transgenic peanut plants displayed higher yield than non-transgenic plants under both drought and salt stress conditions. Overall, our study demonstrates the potential of stress-inducible expression of AtHDG11 to improve drought and salt tolerance in peanut.
Bacterial impregnation of mineral fertilizers improves yield and nutrient use efficiency of wheat.
Ahmad, Shakeel; Imran, Muhammad; Hussain, Sabir; Mahmood, Sajid; Hussain, Azhar; Hasnain, Muhammad
2017-08-01
The fertilizer use efficiency (FUE) of agricultural crops is generally low, which results in poor crop yields and low economic benefits to farmers. Among the various approaches used to enhance FUE, impregnation of mineral fertilizers with plant growth-promoting bacteria (PGPB) is attracting worldwide attention. The present study aimed to improve the growth, yield and nutrient use efficiency of wheat with bacterially impregnated mineral fertilizers. Results of the pot study revealed that impregnation of diammonium phosphate (DAP) and urea with PGPB was helpful in enhancing the growth, yield, photosynthetic rate, nitrogen use efficiency (NUE) and phosphorus use efficiency (PUE) of wheat. However, the plants treated with F8-type DAP and urea, prepared by coating a slurry of PGPB (Bacillus sp. strain KAP6) and compost onto DAP and urea granules at the rate of 2.0 g per 100 g of fertilizer, produced better results than the other fertilizer treatments. In this treatment, growth parameters including plant height, root length, straw yield and root biomass significantly (P ≤ 0.05) increased from 58.8 to 70.0 cm, 41.2 to 50.0 cm, 19.6 to 24.2 g per pot and 1.8 to 2.2 g per pot, respectively. The same treatment improved the grain yield of wheat by 20% compared to unimpregnated DAP and urea (F0). Likewise, the maximum increases in photosynthetic rate, grain NP content, grain NP uptake, NUE and PUE of wheat were also recorded with the F8 treatment. The results suggest that the application of bacterially impregnated DAP and urea is highly effective for improving the growth, yield and FUE of wheat. © 2017 Society of Chemical Industry.
Tawfik-Shukor, Ali R; Klazinga, Niek S; Arah, Onyebuchi A
2007-01-01
Background Given the proliferation and growing complexity of performance measurement initiatives in many health systems, the Netherlands and Ontario, Canada expressed interest in cross-national comparisons in an effort to promote knowledge transfer and best practice. To support this cross-national learning, a study was undertaken to compare health system performance approaches in the Netherlands with those in Ontario, Canada. Methods We explored the performance assessment framework and system of each constituency, the embeddedness of performance data in management and policy processes, and the interrelationships between the frameworks. Methods included analysing governmental strategic planning and policy documents, literature and internet searches, comparative descriptive tables, and schematics. Data collection and analysis took place in Ontario and the Netherlands. A workshop to validate and discuss the findings was conducted in Toronto, adding important insights to the study. Results Both Ontario and the Netherlands conceive health system performance within supportive frameworks, but they differ in their assessment approaches. Ontario's Scorecard links performance measurement with strategy, aimed at health system integration. The Dutch Health Care Performance Report (Zorgbalans) does not explicitly link performance with strategy, and focuses on the technical quality of healthcare by measuring dimensions of quality, access, and cost against healthcare needs. A backbone 'five diamond' framework maps both frameworks and articulates the interrelations and overlap between their goals, themes, dimensions and indicators. The workshop yielded further contextual insights and validated the comparative values of each constituency's performance assessment system. Conclusion To compare health system performance approaches between the Netherlands and Ontario, Canada, several important conceptual and contextual issues must be addressed before attempting any future content comparisons and benchmarking. Addressing such issues would lend interpretational credibility to international comparative assessments of the two health systems. PMID:17319947
Maheshwari, D K; Dubey, R C; Aeron, Abhinav; Kumar, Bhavesh; Kumar, Sandeep; Tewari, Sakshi; Arora, Naveen Kumar
2012-10-01
Azotobacter chroococcum TRA2, an isolate from the wheat rhizosphere, displayed plant growth-promoting attributes including indole acetic acid, HCN and siderophore production, solubilization of inorganic phosphate, and fixation of atmospheric nitrogen. In addition, it showed a strong antagonistic effect against Macrophomina phaseolina and Fusarium oxysporum, causing degradation and digestion of cell wall components that resulted in hyphal perforations, empty cell (halo) formation, shrinking and lysis of fungal mycelia, along with significant degeneration of conidia. A fertilizer-adapted variant of A. chroococcum TRA2 was studied using Tn5-induced streptomycin-resistant transconjugants of the wild-type, tetracycline-resistant TRA2 (designated TRA2(tetra+strep+)) over different durations. The strain was a significantly competent rhizosphere colonizer, its population increasing by 15.29% in the rhizosphere of Sesamum indicum. Seed bacterization with strain TRA2 resulted in a significant increase in vegetative growth parameters and yield of sesame over non-bacterized seeds. Application of TRA2 with a half dose of fertilizers produced sesame yields almost equal to those obtained with the full-dose treatment. Moreover, oil yield increased by 24.20% and protein yield by 35.92% in the treatment receiving a half dose of fertilizer along with TRA2-bacterized seeds, compared to the untreated control.
Gu, Junfei; Yin, Xinyou; Zhang, Chengwei; Wang, Huaqi; Struik, Paul C
2014-09-01
Genetic markers can be used in combination with ecophysiological crop models to predict the performance of genotypes, and crop models can estimate the contribution of individual markers to crop performance in given environments. The objectives of this study were to explore the use of crop models to design markers and virtual ideotypes for improving yields of rice (Oryza sativa) under drought stress. Using the model GECROS, crop yield was dissected into seven easily measured parameters. Loci for these parameters were identified in a rice population of 94 introgression lines (ILs) derived from two parents differing in drought tolerance. Marker-based values of the ILs for each parameter were estimated from additive allele effects of the loci and fed to the model in order to simulate yields of the ILs grown under well-watered and drought conditions and to design virtual ideotypes for those conditions. To account for genotypic yield differences, it was necessary to parameterize the model for differences in an additional trait, 'total crop nitrogen uptake' (Nmax), among the ILs. Genetic variation in Nmax had the most significant effect on yield; five other parameters also significantly influenced yield, but seed weight and leaf photosynthesis did not. Using the marker-based parameter values, GECROS also simulated yield variation among 251 recombinant inbred lines of the same parents. The model-based dissection approach detected more markers than the analysis using yield per se, and model-based sensitivity analysis ranked all markers by their importance in determining yield differences among the ILs. Virtual ideotypes based on markers identified by modelling had 10-36% more yield than those based on markers for yield per se. This study outlines a genotype-to-phenotype approach that exploits the potential value of marker-based crop modelling in developing new plant types with high yields. The approach can provide more markers for selection programmes for specific environments while also allowing for prioritization. Crop modelling is thus a powerful tool for marker design for improved rice yields and for ideotyping under contrasting conditions. © The Author 2014. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved.
Recruitment methods for survey research: Findings from the Mid-South Clinical Data Research Network.
Heerman, William J; Jackson, Natalie; Roumie, Christianne L; Harris, Paul A; Rosenbloom, S Trent; Pulley, Jill; Wilkins, Consuelo H; Williams, Neely A; Crenshaw, David; Leak, Cardella; Scherdin, Jon; Muñoz, Daniel; Bachmann, Justin; Rothman, Russell L; Kripalani, Sunil
2017-11-01
The objective of this study was to report survey response rates and demographic characteristics for eight recruitment approaches, to determine the acceptability and effectiveness of large-scale patient recruitment among various populations. We conducted a cross-sectional analysis of survey data from two large cohorts. Patients were recruited from the Mid-South Clinical Data Research Network using clinic-based recruitment, research registries, and mail, phone, and email approaches. Response rates are reported as the number of patients who consented to the survey divided by the number of eligible patients approached. We contacted more than 90,000 patients, and 13,197 patients completed surveys. Median age was 56.3 years (IQR 40.9, 67.4). Racial/ethnic distribution was 84.1% White, non-Hispanic; 9.9% Black, non-Hispanic; 1.8% Hispanic; and 4.0% other, non-Hispanic. Face-to-face recruitment had the highest response rate at 94.3%, followed by participants who "opted in" to a registry (76%). The lowest response rate was for unsolicited emails from the clinic (6.1%). Face-to-face recruitment enrolled a higher percentage of participants who self-identified as Black, non-Hispanic compared to other approaches (18.6% face-to-face vs. 8.4% for email). Technology-enabled recruitment approaches such as registries and emails are effective for recruiting but may yield less racial/ethnic diversity compared to traditional, more time-intensive approaches. Copyright © 2017. Published by Elsevier Inc.
Promsuwicha, Orathai; Kankhao, Supattra; Songmuang, Wayuree; Auewarakul, Chirayu U
2014-12-01
Diagnosis of hematologic malignancies requires a multidisciplinary approach, and flow cytometry (FCM) has become an essential tool for immunophenotypic studies of malignant hematopoietic cells. We evaluated the utilization trend of FCM and its diagnostic yield for hematologic malignancy at a major teaching hospital in Thailand. FCM results of bone marrow (BM) and peripheral blood (PB) specimens from 2000-2013 were analyzed and compared to clinical diagnosis. Overall, 7,982 specimens were submitted for diagnostic FCM, including 6,561 BM and 1,421 PB. The number of specimens analyzed per year was 121, 142, 164, 299, 491, 431, 690, 611, 719, 744, 725, 863, 955 and 1,027, respectively, from 2000 to 2013. The most common clinical diagnoses requested for FCM were acute leukemia (5,911 cases, 74%), followed by lymphoma (1,419 cases, 17.8%) and chronic lymphocytic leukemia (CLL) (634 cases, 7.94%). The highest diagnostic yield of FCM was found in acute leukemia cases (69.71%), followed by CLL (35.33%); only 15.43% of clinically suspected lymphoma cases were positive by FCM. Overutilization of PB (35.6% of cases) instead of BM for lymphoma staging contributed significantly to the low diagnostic yield for lymphoma by FCM, as circulating tumor cells may not be present in such cases. FCM has had an increasing role in the diagnosis of hematologic malignancies in Thai patients over the past 14 years, with the highest diagnostic yield in acute leukemia. Appropriate specimen types and study indications are required in order to reduce futile, costly diagnostic tests and improve diagnostic yields.
Ofori, Atta; Schierholt, Antje; Becker, Heiko C
2012-02-01
Because of its high growth rate at low temperatures in early spring, there is renewed interest in Brassica rapa as a winter crop for biomass production in Europe. However, the available cultivars were not developed for this purpose. An approach for breeding bioenergy cultivars of B. rapa could be to establish populations from two or more different cultivars with high combining ability. The objective of this study was to evaluate heterosis for biomass yield in the European winter B. rapa genepool. Genetic variation and heterosis for the biomass parameters (dry matter content, fresh and dry biomass yield) were investigated in three cultivars representing different eras of breeding by comparing full-sibs-within and full-sibs-between the cultivars. Field trials were performed at two locations in Germany in 2005-2006. Mean mid-parent heterosis was low, at 2.5% for fresh and 3.0% for dry biomass yield in full-sibs-between cultivars. Mean values of individual crosses revealed higher variation in mid-parent heterosis, ranging from 14.6% to -7.5% in fresh biomass yield and from 19.7% to -12.7% in dry biomass yield. The low heterosis observed in hybrids between European winter cultivars can be explained by the low genetic variation between these cultivars, as shown earlier with molecular markers. In conclusion, a B. rapa breeding program for biomass production in Europe should not only use European genetic resources, but should also utilize the much wider worldwide variation in this species.
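Mid-parent heterosis as used here is 100 × (F1 − MP)/MP, with MP the mean of the two parental values; a minimal worked example follows, with made-up biomass figures rather than the study's data.

```python
def mid_parent_heterosis(f1, parent_a, parent_b):
    """Mid-parent heterosis in percent: 100 * (F1 - MP) / MP."""
    mp = (parent_a + parent_b) / 2
    return 100 * (f1 - mp) / mp

# illustrative numbers only: F1 fresh biomass 10.3 t/ha, parents 10.0 and 9.8
print(round(mid_parent_heterosis(10.3, 10.0, 9.8), 1))  # -> 4.0 (%)
```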
Mind the Roots: Phenotyping Below-Ground Crop Diversity and Its Influence on Final Yield
NASA Astrophysics Data System (ADS)
Nieters, C.; Guadagno, C. R.; Lemli, S.; Hosseini, A.; Ewers, B. E.
2017-12-01
Changes in global climate patterns and water regimes are having profound impacts on worldwide crop production. An ever-growing population, paired with increasing temperatures and unpredictable periods of severe drought, calls for accurate modeling of future crop yield. Although novel approaches are being developed for high-throughput, above-ground image phenotyping, the below-ground plant system is still poorly phenotyped. Data on plant root morphology and hydraulics are needed to inform mathematical models that can reliably estimate yields of crops grown in sub-optimal conditions. We used Brassica rapa to inform our model, as it is a globally cultivated crop with several functionally diverse cultivars. Specifically, we used seven accessions spanning oilseed types (R500 and Yellow Sarson), leafy types (Pak choi and Chinese cabbage), a vegetable turnip, and two Wisconsin Fast Plants (Imb211 and Fast Plant self-compatible), which have shorter life cycles and potentially large differences in allocation to roots. Bi-weekly, we harvested above- and below-ground biomass to compare the varieties in terms of carbon allocation throughout their life cycles. Using WinRhizo software, we analyzed root system length and surface area to compare and contrast root morphology among cultivars. Our results confirm that root structural characteristics are crucial to explaining plant water use and carbon allocation. The root:shoot ratio revealed a significant (p < 0.01) difference among crop accessions. To validate the procedure across varieties and life stages, we also compared surface-area results from the image-based technology to dry biomass, finding a strong linear relationship (R2 = 0.85). To assess the influence of diverse above-ground morphology on the root system, we also measured above-ground anatomical and physiological traits such as gas exchange, chlorophyll content, and chlorophyll a fluorescence. A thorough analysis of the root system will clarify carbon dynamics and hydraulics at the whole-plant level, improving final yield predictions.
NASA Astrophysics Data System (ADS)
de Barros, Felipe P. J.; Rubin, Yoram; Maxwell, Reed M.
2009-06-01
Defining rational and effective hydrogeological data acquisition strategies is of crucial importance, as such efforts are always resource limited. Usually, strategies are developed with the goal of reducing parameter uncertainty, but less often are they developed in the context of the impact of that uncertainty on the predictions that motivate characterization. This paper presents an approach for determining site characterization needs on the basis of human health risk. The main challenge is in striking a balance between reduction in uncertainty in hydrogeological, behavioral, and physiological parameters; striking this balance can provide clear guidance for setting data acquisition priorities and for better estimating adverse health effects in humans. This paper addresses this challenge through theoretical developments and numerical simulation. A wide range of factors that affect site characterization needs are investigated, including the dimensions of the contaminant plume and additional length scales that characterize the transport problem, as well as the model of human health risk. The concept of comparative information yield curves is used to investigate the relative impact of hydrogeological and physiological parameters on risk. Results show that characterization needs depend on the ratios between flow and transport scales within a risk-driven approach. Additionally, the results indicate that human health risk becomes less sensitive to hydrogeological measurements for large plumes, suggesting that under near-ergodic conditions, uncertainty reduction in human health risk may benefit more from better understanding of the physiological component than from more detailed hydrogeological characterization.
Gao, Lili; Zhou, Zai-Fa; Huang, Qing-An
2017-11-08
A microstructure beam is one of the fundamental elements in MEMS devices such as cantilever sensors, RF/optical switches, varactors and resonators. It is still difficult to precisely predict the performance of MEMS beams with currently available simulators because of inevitable process deviations. Feasible numerical methods are required and can be used to improve the yield and profits of MEMS devices. In this work, process deviations are treated as stochastic variables, and a newly developed numerical method, generalized polynomial chaos (GPC), is applied to the simulation of the MEMS beam. A doubly clamped polybeam has been utilized to verify the accuracy of GPC against our Monte Carlo (MC) approaches. Performance predictions have been made on the residual stress by obtaining its distribution in GaAs Monolithic Microwave Integrated Circuit (MMIC)-based MEMS beams. The results show that errors are within 1% for the GPC approximations compared with the MC simulations. An appropriate choice of fourth-order GPC expansions with orthogonal terms also succeeded in reducing the MC simulation labor. The mean value of the residual stress concluded from experimental tests differs by about 1.1% from that of the fourth-order GPC method, and the probability that the fourth-order GPC approximation attains the mean test value of the residual stress is around 54.3%. The corresponding yield exceeds 90% within two standard deviations of the mean.
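The essence of the GPC approach, expanding the stochastic response in orthogonal polynomials of a standardized random input and reading moments off the coefficients, can be sketched as follows. The beam "solver" here is a stand-in polynomial, and while the fourth-order probabilists' Hermite basis matches the expansion order above, everything else is illustrative.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermevander

def beam_response(xi):
    """Stand-in for the MEMS beam solver: residual stress (MPa) as a
    nonlinear function of a standardized process deviation xi."""
    return 20.0 + 3.0 * xi + 0.5 * xi ** 2

rng = np.random.default_rng(0)
order = 4
xi = rng.standard_normal(2000)        # samples of the stochastic input
y = beam_response(xi)

# least-squares fit of a 4th-order probabilists' Hermite (He) expansion
Phi = hermevander(xi, order)          # columns He_0(xi) ... He_4(xi)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# orthogonality E[He_i He_j] = i! * delta_ij gives the moments directly
fact = np.array([factorial(i) for i in range(order + 1)], dtype=float)
gpc_mean = coef[0]
gpc_var = np.sum(coef[1:] ** 2 * fact[1:])
print(gpc_mean, gpc_var)              # ~20.5 and ~9.5 for this toy model
print(y.mean(), y.var())              # plain Monte Carlo, for comparison
```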
Gilbert, Peter B; Yu, Xuesong; Rotnitzky, Andrea
2014-03-15
To address the objective in a clinical trial of estimating the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semiparametric efficient estimator is applied. This approach is made efficient by specifying the phase-two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. We perform simulations to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice, and we provide proofs and R code. The optimality results are developed to design an HIV vaccine trial, with the objective of comparing the mean 'importance-weighted' breadth (Y) of the T-cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere from no gain to a large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y | W] is important for realizing the efficiency gain, which is aided by an ample phase-two sample and by using a robust fitting method. Copyright © 2013 John Wiley & Sons, Ltd.
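The design idea, Neyman-type phase-two selection probabilities proportional to the cost-standardized conditional standard deviation of Y given W, can be sketched as below. The strata, SDs and target sampling fraction are hypothetical, and the authors' R code should be consulted for the actual estimator.

```python
import numpy as np

def optimal_phase2_probs(sd_given_w, cost, target_fraction):
    """Neyman-type allocation: sample Y with probability proportional to
    sd(Y|W)/sqrt(cost), scaled to a target average sampling fraction and
    capped at 1. A sketch of the design idea, not the paper's estimator."""
    raw = sd_given_w / np.sqrt(cost)
    probs = raw * target_fraction / raw.mean()
    return np.clip(probs, 0.0, 1.0)

# illustrative: auxiliary W stratifies subjects into 3 groups with
# different residual SDs of Y given W, equal measurement cost
sd_w = np.array([1.0, 2.0, 4.0])
p = optimal_phase2_probs(sd_w, cost=np.ones(3), target_fraction=0.3)
print(p)   # high-variance strata are oversampled
```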
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Chiara, Gabriele; Fazio, Rosario; Montangero, Simone
In this paper we present an approach to quantum cloning with unmodulated spin networks. The cloner is realized by a proper design of the network and a choice of the coupling between the qubits. We show that in the case of the phase-covariant cloner the XY coupling gives the best results. In 1→2 cloning we find that the fidelity of the optimal cloner is achieved, and values comparable to the optimal ones can be attained in the general N→M case. If a suitable set of network symmetries are satisfied, the output fidelity of the clones does not depend on the specific choice of the graph. We show that spin-network cloning is robust against the presence of static imperfections. Moreover, in the presence of noise, it outperforms the conventional approach: the fidelity exceeds the corresponding value obtained by quantum gates even for a very small amount of noise. Furthermore, we show how to use this method to clone qutrits and qudits. By means of the Heisenberg coupling it is also possible to implement the universal cloner, although in this case the fidelity is 10% off that of the optimal cloner.
Zhang, Xiang; Dunlow, Ryan; Blackman, Burchelle N; Swenson, Rolf E
2018-05-15
Traditional radiosynthetic optimization faces the challenges of high radiation exposure, cost, and the inability to perform serial reactions due to tracer decay. To accelerate tracer development, we have developed a strategy to simulate radioactive 18F-syntheses by using tracer-level (nanomolar) non-radioactive 19F-reagents and LC-MS/MS analysis. The methodology was validated with fallypride synthesis under tracer-level 19F-conditions, which showed reproducible results comparable with radiosynthesis and proved the feasibility of this process. Using this approach, the synthesis of [18F]MDL100907 was optimized under 19F-conditions with greatly improved yield. The best conditions were successfully transferred to radiosynthesis: a radiochemical yield of 19% to 22% was achieved, with radiochemical purity >99% and molar activity of 38.8 to 53.6 GBq/µmol (n = 3). The tracer-level 19F-approach provides a high-throughput and cost-effective process to optimize radiosynthesis with reduced radiation exposure. This new method allows medicinal and synthetic chemists to optimize radiolabeling conditions without the need to use radioactivity. Copyright © 2018 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Puente, Carlos E.; Maskey, Mahesh L.; Sivakumar, Bellie
2017-04-01
A deterministic geometric approach, the fractal-multifractal (FM) method, is adapted to encode highly intermittent daily rainfall records observed over a year. Using this notion, this research investigates the complexity of rainfall at various stations within the State of California. Specifically, records gathered at (from south to north) Cherry Valley, Merced, Sacramento and Shasta Dam, containing 59, 116, 115 and 72 years respectively and all ending at water year 2015, were encoded and analyzed in detail. The analysis reveals that: (a) the FM approach yields faithful encodings of all records, by year, with mean square and maximum errors in accumulated rain of less than a mere 2% and 10%, respectively; (b) the evolution of the corresponding "best" FM parameters, which allows visualization of the inter-annual rainfall dynamics from a reduced vantage point, exhibits implicit variability that precludes discriminating between sites and extrapolating to the future; (c) the evolution of the FM parameters, restricted to specific regions of parameter space, allows sensible future simulations to be found; and (d) the rain signals at all sites may be termed "equally complex," as k-means clustering and conventional phase-space analysis of the FM parameters yield comparable results for all sites.
Suitability assessment and mapping of Oyo State, Nigeria, for rice cultivation using GIS
NASA Astrophysics Data System (ADS)
Ayoade, Modupe Alake
2017-08-01
Rice is one of the most preferred food crops in Nigeria. However, local rice production has declined since the oil boom of the 1970s, causing demand to outstrip supply. Rice production can be increased through the integration of Geographic Information Systems (GIS) with crop-land suitability analysis and mapping. Based on the key predictor variables of rice yield identified in the relevant literature, data on rainfall, temperature, relative humidity, slope, and soil of Oyo State were obtained. To develop rice suitability maps for the state, two MCE-GIS techniques, the overlay approach and weighted linear combination (WLC) using fuzzy AHP, were applied and compared. A Boolean land-use map derived from Landsat imagery was used to mask out areas currently unavailable for rice production. Both suitability maps were classified into four categories: very suitable, suitable, moderate, and fairly moderate. Although the maps differ slightly, the overlay and WLC (AHP) approaches found most parts of Oyo State (51.79% and 82.9%, respectively) to be moderately suitable for rice production. However, in areas like Eruwa, Oyo, and Shaki, the rainfall received needs to be supplemented by irrigation for increased rice yield.
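A weighted linear combination reduces to a weighted sum of normalized criterion layers, masked by land-use availability. The toy sketch below illustrates the mechanics on a 2×2 grid; the weights, class breaks, and layer values are hypothetical, not the study's fuzzy-AHP results.

```python
import numpy as np

# toy criterion layers normalized to [0, 1] suitability on a 2x2 grid
rain  = np.array([[0.9, 0.6], [0.4, 0.8]])
temp  = np.array([[0.7, 0.7], [0.5, 0.9]])
slope = np.array([[1.0, 0.3], [0.6, 0.7]])

# AHP-style criterion weights (hypothetical; must sum to 1)
w_rain, w_temp, w_slope = 0.5, 0.3, 0.2
wlc = w_rain * rain + w_temp * temp + w_slope * slope

# Boolean land-use mask: True = available for rice cultivation
available = np.array([[True, True], [False, True]])

# classify into the study's four categories (breaks are illustrative)
labels = np.array(["fairly moderate", "moderate", "suitable", "very suitable"])
classes = np.digitize(wlc, [0.5, 0.65, 0.8])

for i, j in np.argwhere(available):
    print((i, j), round(wlc[i, j], 2), labels[classes[i, j]])
```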
The ethics of medical involvement in torture: commentary
Hare, R M
1993-01-01
Torture does need to be defined if we are to know exactly what we are seeking to ban; but no single definition will do, because there are many possible ones, and we may want to treat different practices that might be called torture differently. Compare the case of homicide: we do not want to punish manslaughter as severely as murder, and may not want to punish killing in self-defence at all. There are degrees of torture as of murder. Unclarities simply play into the hands of would-be torturers. Downie is unsuccessful in deriving the duty of doctors not to be involved in torture from an analysis of the word 'doctor'. It may be contrary to the role-duty of doctors to participate in torture; but there might be other duties which overrode this role-duty. The right approach is to ask what principles for the conduct of doctors have the highest acceptance-utility or, as Kant might have equivalently put it, what the impartial furtherance of everyone's ends demands. This approach yields the result that torture (suitably defined) should be banned absolutely. It also yields prescriptions for the conduct of doctors where, in spite of them, torture is taking place. PMID:8230144
Chuprom, Julalak; Bovornreungroj, Preeyanuch; Ahmad, Mehraj; Kantachote, Duangporn; Dueramae, Sawitree
2016-06-01
A new potent halophilic protease producer, Halobacterium sp. strain LBU50301, was isolated from salt-fermented fish samples (budu) and identified by phenotypic analysis and 16S rDNA gene sequencing. Thereafter, a sequential statistical strategy was used to optimize halophilic protease production from Halobacterium sp. strain LBU50301 by shake-flask fermentation. The classical one-factor-at-a-time (OFAT) approach determined that gelatin was the best nitrogen source. Based on a Plackett-Burman (PB) experimental design, gelatin, MgSO4·7H2O, NaCl and pH significantly influenced halophilic protease production. Central composite design (CCD) determined the optimum level of each medium component. Subsequently, an 8.78-fold increase in halophilic protease yield (156.22 U/mL) was obtained, compared with that produced in the original medium (17.80 U/mL). Validation experiments proved the adequacy and accuracy of the model, with the predicted values agreeing well with the experimental values. An overall 13-fold increase in halophilic protease yield (231.33 U/mL) was achieved using a 3 L laboratory fermenter and the optimized medium.
NASA Astrophysics Data System (ADS)
Bagán, H.; Tarancón, A.; Rauret, G.; García, J. F.
2008-07-01
The quenching parameters used to model detection efficiency variations in scintillation measurements have not evolved since the 1970s. Meanwhile, computer capabilities have increased enormously, and ionization quenching has appeared in practical measurements using plastic scintillation. This study compares the results obtained in activity quantification by plastic scintillation of 14C samples that contain colour and ionization quenchers, using classical (SIS, SCR-limited, SCR-non-limited, SIS(ext), SQP(E)) and evolved (MWA-SCR and WDW) parameters and following three calibration approaches: single-step, which does not take the quenching mechanism into account; two-step, which takes the quenching phenomena into account; and multivariate calibration. Two-step calibration (ionization followed by colour) yielded the lowest relative errors, which means that each quenching phenomenon must be modelled specifically. In addition, sample activity was quantified more accurately when the evolved parameters were used. Multivariate calibration (PLS) also yielded better results than those obtained using the classical parameters, confirming that the quenching phenomena must be taken into account. The detection limits for each calibration method and each parameter were close to those obtained theoretically using the Currie approach.
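The multivariate (PLS) calibration route amounts to regressing detection efficiency on a vector of quench descriptors and then converting a net count rate to activity through the predicted efficiency. A minimal sketch with synthetic calibration data follows; all numbers, and the six-descriptor feature vector, are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# toy calibration set: each row holds quench descriptors (e.g., evolved
# spectral parameters) for a standard of known detection efficiency
rng = np.random.default_rng(1)
X_cal = rng.random((30, 6))                       # quenching descriptors
eff_cal = (0.9 - 0.4 * X_cal[:, 0] - 0.2 * X_cal[:, 1]
           + 0.02 * rng.standard_normal(30))      # detection efficiency

pls = PLSRegression(n_components=3).fit(X_cal, eff_cal)

# quantify an unknown: predict its efficiency from its descriptors,
# then activity = net count rate / efficiency (illustrative numbers)
X_new = rng.random((1, 6))
eff = pls.predict(X_new).item()
activity = 1500.0 / eff                           # counts per second -> Bq
print(eff, activity)
```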
Joint Bayesian inference for near-surface explosion yield
NASA Astrophysics Data System (ADS)
Bulaevskaya, V.; Ford, S. R.; Ramirez, A. L.; Rodgers, A. J.
2016-12-01
A near-surface explosion generates seismo-acoustic motion that is related to its yield. However, the recorded motion is affected by near-source effects such as depth-of-burial and propagation-path effects such as variable geology. We incorporate these effects in a forward model relating yield to seismo-acoustic motion, and use Bayesian inference to estimate yield given recordings of the seismo-acoustic wavefield. The Bayesian approach to this inverse problem allows us to obtain the probability distribution of plausible yield values and thus quantify the uncertainty in the yield estimate. Moreover, the sensitivity of the acoustic signal falls with depth-of-burial, while the opposite relationship holds for the seismic signal; using both the acoustic and seismic wavefield data therefore avoids the trade-offs associated with using either signal alone. In addition, our inference framework allows correlated features of the same data type (seismic or acoustic) to be incorporated in the estimation of yield, making use of as much information from the same waveform as possible. We demonstrate our approach with a historical dataset and a contemporary field experiment.
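A stripped-down version of the joint estimate, a grid posterior over yield combining acoustic and seismic log-amplitude observations under a Gaussian error model, is sketched below. The forward-model coefficients, depth effects, observations and noise level are invented for illustration and are not the authors' calibrated model.

```python
import numpy as np

def predict_log_amp(yield_kg, depth_m, channel):
    """Assumed forward model: log amplitude scales with log yield; burial
    damps the acoustic channel and boosts the seismic one (toy numbers)."""
    if channel == "acoustic":
        return 0.7 * np.log(yield_kg) - 0.02 * depth_m
    return 0.5 * np.log(yield_kg) + 0.01 * depth_m   # seismic

# synthetic observed log amplitudes, known depth, Gaussian noise sd
obs = {"acoustic": 2.9, "seismic": 2.4}
depth, sigma = 10.0, 0.2

# grid posterior over yield with a log-uniform prior
yields = np.logspace(0, 4, 2000)                  # 1 kg .. 10 t
log_post = np.zeros_like(yields)
for channel, y_obs in obs.items():                # joint: both channels
    resid = y_obs - predict_log_amp(yields, depth, channel)
    log_post += -0.5 * (resid / sigma) ** 2
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, yields)                    # normalize the density

mean_yield = np.trapz(yields * post, yields)
print(f"posterior mean yield ~ {mean_yield:.0f} kg")
```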
De la Torre, Daniel; Sierra, Maria Jose
2007-01-01
The approach developed by Fuhrer in 1995 to estimate wheat yield losses induced by ozone and modulated by soil water content (SWC) was applied to data on Catalonian wheat yields. The aim of our work was to apply this approach and adjust it to Mediterranean environmental conditions by means of the necessary corrections. The main objective was to demonstrate the importance of soil water availability, as a factor that modifies the effects of tropospheric ozone on wheat, in the estimation of relative wheat yield losses, and to develop the algorithms required for estimating relative yield losses under Mediterranean environmental conditions. The results show that this is an easy way to estimate relative yield losses using only meteorological data, without ozone fluxes, which are much more difficult to calculate. Soil water availability is very important as a modulating factor of the effects of ozone on wheat: when soil water availability decreases, almost twice the accumulated ozone exposure is required to induce the same percentage of yield loss as in years when soil water availability is high. PMID:17619747
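The flavor of this calculation, accumulating ozone exposure above a threshold during daylight hours and letting a soil-water factor modulate its effectiveness, can be sketched as follows. The threshold and dose-response slope are placeholders rather than Fuhrer's fitted constants, and the SWC modulation here is a simple multiplier rather than the paper's correction.

```python
import numpy as np

def relative_yield(ozone_ppb, daylight, swc_factor,
                   threshold=40.0, slope_per_ppm_h=0.012):
    """Illustrative AOT40-style relative wheat yield. Hourly ozone above
    `threshold` (ppb) is accumulated over daylight hours, then scaled by
    a soil-water-availability factor in (0, 1]; dry years (small factor)
    make the same exposure less effective. All constants are placeholders."""
    excess = np.where(daylight, np.maximum(ozone_ppb - threshold, 0.0), 0.0)
    aot40_ppm_h = excess.sum() / 1000.0          # ppb*h -> ppm*h
    return 1.0 - slope_per_ppm_h * swc_factor * aot40_ppm_h

hours = np.arange(24 * 90)                       # a 90-day exposure window
ozone = 35 + 25 * np.clip(np.sin(2 * np.pi * hours / 24), 0, None)
daylight = (hours % 24 > 7) & (hours % 24 < 20)
print(relative_yield(ozone, daylight, swc_factor=1.0))   # wet year
print(relative_yield(ozone, daylight, swc_factor=0.5))   # dry year
```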
Varga, Peter; Inzana, Jason A; Schwiedrzik, Jakob; Zysset, Philippe K; Gueorguiev, Boyko; Blauth, Michael; Windolf, Markus
2017-05-01
High incidence of and increased mortality related to secondary, contralateral proximal femoral fractures may justify invasive prophylactic augmentation that reinforces the osteoporotic proximal femur to reduce fracture risk. Bone cement-based approaches (femoroplasty) may deliver the required strengthening effect; however, the significant variation in the results of previous studies calls for a systematic analysis and optimization of this method. Our hypothesis was that efficient generalized augmentation strategies can be identified via computational optimization. This study investigated, by means of finite element analysis, the effect of cement location and volume on the biomechanical properties of fifteen proximal femora under sideways-fall loading. Novel cement cloud locations were developed using the principles of bone remodeling and compared to the "single central" location previously reported to be optimal. The new augmentation strategies provided significantly greater biomechanical benefits than the "single central" cement location. Augmenting with approximately 12 ml of cement in the newly identified location achieved average increases of 11% in stiffness, 64% in yield force, 156% in yield energy and 59% in maximum force compared to the non-augmented state. Weaker bones experienced a greater biomechanical benefit from augmentation than stronger bones, and the effect of cement volume on the biomechanical properties was approximately linear. Results of the "single central" model showed good agreement with previous experimental studies. These findings indicate enhanced potential for cement-based prophylactic augmentation using the newly developed cementing strategy. Future studies should determine the required level of strengthening and confirm these numerical results experimentally. Copyright © 2017 Elsevier Ltd. All rights reserved.
Reference tissue modeling with parameter coupling: application to a study of SERT binding in HIV
NASA Astrophysics Data System (ADS)
Endres, Christopher J.; Hammoud, Dima A.; Pomper, Martin G.
2011-04-01
When applicable, it is generally preferred to evaluate positron emission tomography (PET) studies using a reference tissue-based approach as that avoids the need for invasive arterial blood sampling. However, most reference tissue methods have been shown to have a bias that is dependent on the level of tracer binding, and the variability of parameter estimates may be substantially affected by noise level. In a study of serotonin transporter (SERT) binding in HIV dementia, it was determined that applying parameter coupling to the simplified reference tissue model (SRTM) reduced the variability of parameter estimates and yielded the strongest between-group significant differences in SERT binding. The use of parameter coupling makes the application of SRTM more consistent with conventional blood input models and reduces the total number of fitted parameters, thus should yield more robust parameter estimates. Here, we provide a detailed evaluation of the application of parameter constraint and parameter coupling to [11C]DASB PET studies. Five quantitative methods, including three methods that constrain the reference tissue clearance (kr2) to a common value across regions were applied to the clinical and simulated data to compare measurement of the tracer binding potential (BPND). Compared with standard SRTM, either coupling of kr2 across regions or constraining kr2 to a first-pass estimate improved the sensitivity of SRTM to measuring a significant difference in BPND between patients and controls. Parameter coupling was particularly effective in reducing the variance of parameter estimates, which was less than 50% of the variance obtained with standard SRTM. A linear approach was also improved when constraining kr2 to a first-pass estimate, although the SRTM-based methods yielded stronger significant differences when applied to the clinical study. This work shows that parameter coupling reduces the variance of parameter estimates and may better discriminate between-group differences in specific binding.
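Parameter coupling simply means fitting all regions jointly so that one parameter, here the reference-tissue clearance, takes a single shared value while region-specific parameters remain free. The sketch below shows that fitting structure on a toy mono-exponential model, not the actual SRTM operational equation; the time grid, amplitudes and noise are invented.

```python
import numpy as np
from scipy.optimize import least_squares

# toy data: three regions sharing one clearance constant k
t = np.linspace(0, 90, 46)                      # minutes
rng = np.random.default_rng(0)
true_k, true_amp = 0.05, np.array([3.0, 2.0, 1.5])
data = (true_amp[:, None] * np.exp(-true_k * t)
        + 0.05 * rng.standard_normal((3, t.size)))

def residuals(theta):
    k, amps = theta[0], np.asarray(theta[1:])   # k is coupled across regions
    model = amps[:, None] * np.exp(-k * t)
    return (model - data).ravel()               # stack all regions' residuals

fit = least_squares(residuals, x0=[0.1, 1.0, 1.0, 1.0])
print("shared k:", fit.x[0], "region amplitudes:", fit.x[1:])
```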
Cross, Alan; Collard, Mark; Nelson, Andrew
2008-01-01
The conventional method of estimating heat balance during locomotion in humans and other hominins treats the body as an undifferentiated mass. This is problematic because the segments of the body differ with respect to several variables that can affect thermoregulation. Here, we report a study that investigated the impact on heat balance during locomotion of inter-segment differences in three of these variables: surface area, skin temperature and rate of movement. The approach adopted in the study was to generate heat balance estimates with the conventional method and then compare them with heat balance estimates generated with a method that takes into account inter-segment differences in surface area, skin temperature and rate of movement. We reasoned that, if the hypothesis that inter-segment differences in surface area, skin temperature and rate of movement affect heat balance during locomotion is correct, the estimates yielded by the two methods should be statistically significantly different. Anthropometric data were collected on seven adult male volunteers. The volunteers then walked on a treadmill at 1.2 m/s while 3D motion capture cameras recorded their movements. Next, the conventional and segmented methods were used to estimate the volunteers' heat balance while walking in four ambient temperatures. Lastly, the estimates produced with the two methods were compared with the paired t-test. The estimates of heat balance during locomotion yielded by the two methods are significantly different. Those yielded by the segmented method are significantly lower than those produced by the conventional method. Accordingly, the study supports the hypothesis that inter-segment differences in surface area, skin temperature and rate of movement impact heat balance during locomotion. This has important implications not only for current understanding of heat balance during locomotion in hominins but also for how future research on this topic should be approached. PMID:18560580
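The segmented method amounts to summing heat exchange per body segment instead of over one undifferentiated surface. The sketch below illustrates only the dry convective term, with invented segment areas, skin temperatures and coefficient; as the final comment notes, the two methods coincide here and diverge once segment-specific terms such as local air speed (rate of movement) enter.

```python
# minimal sketch: convective heat loss, segmented vs. conventional
h_c = 8.3          # convective coefficient, W m^-2 K^-1 (assumed constant)
t_air = 25.0       # ambient temperature, deg C

segments = {        # name: (surface area m^2, skin temp deg C) - invented
    "trunk": (0.60, 34.5),
    "legs":  (0.55, 32.0),
    "arms":  (0.30, 31.0),
    "head":  (0.12, 35.0),
}

segmented = sum(a * h_c * (t_sk - t_air) for a, t_sk in segments.values())

# conventional method: whole-body area and area-weighted mean skin temp
area = sum(a for a, _ in segments.values())
mean_tsk = sum(a * t for a, t in segments.values()) / area
conventional = area * h_c * (mean_tsk - t_air)

print(segmented, conventional)   # identical with a constant h_c; they
                                 # diverge once h_c varies by segment
```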
Jin, Feng-Jie; Katayama, Takuya; Maruyama, Jun-Ichi; Kitamoto, Katsuhiko
2016-11-01
Genomic mapping of mutations using next-generation sequencing technologies has facilitated the identification of genes contributing to fundamental biological processes, including human diseases. However, few studies have used this approach to identify mutations contributing to heterologous protein production in industrial strains of filamentous fungi, such as Aspergillus oryzae. In a screening of A. oryzae strains that hyper-produce human lysozyme (HLY), we previously isolated an AUT1 mutant that showed higher production of various heterologous proteins; however, the underlying factors contributing to the increased heterologous protein production remained unclear. Here, using a comparative genomic approach performed with whole-genome sequences, we attempted to identify the genes responsible for the high-level production of heterologous proteins in the AUT1 mutant. The comparative sequence analysis led to the detection of a gene (AO090120000003), designated autA, which was predicted to encode an unknown cytoplasmic protein containing an alpha/beta-hydrolase fold domain. Mutation or deletion of autA was associated with higher production levels of HLY. Specifically, the HLY yields of the autA mutant and deletion strains were twofold higher than that of the control strain during the early stages of cultivation. Taken together, these results indicate that combining classical mutagenesis approaches with comparative genomic analysis facilitates the identification of novel genes involved in heterologous protein production in filamentous fungi.
Campbell, Jonathan D; Zerzan, Judy; Garrison, Louis P; Libby, Anne M
2013-04-01
Comparative-effectiveness research (CER) at the population level is missing standardized approaches to quantify and weigh interventions in terms of their clinical risks, benefits, and uncertainty. We proposed an adapted CER framework for population decision making, provided example displays of the outputs, and discussed the implications for population decision makers. Building on decision-analytical modeling but excluding cost, we proposed a 2-step approach to CER that explicitly compared interventions in terms of clinical risks and benefits and linked this evidence to the quality-adjusted life year (QALY). The first step was a traditional intervention-specific evidence synthesis of risks and benefits. The second step was a decision-analytical model to simulate intervention-specific progression of disease over an appropriate time. The output was the ability to compare and quantitatively link clinical outcomes with QALYs. The outputs from these CER models include clinical risks, benefits, and QALYs over flexible and relevant time horizons. This approach yields an explicit, structured, and consistent quantitative framework to weigh all relevant clinical measures. Population decision makers can use this modeling framework and QALYs to aid in their judgment of the individual and collective risks and benefits of the alternatives over time. Future research should study effective communication of these domains for stakeholders. Copyright © 2013 Elsevier HS Journals, Inc. All rights reserved.
Pathways between primary production and fisheries yields of large marine ecosystems.
Friedland, Kevin D; Stock, Charles; Drinkwater, Kenneth F; Link, Jason S; Leaf, Robert T; Shank, Burton V; Rose, Julie M; Pilskaln, Cynthia H; Fogarty, Michael J
2012-01-01
The shift in marine resource management from a compartmentalized approach of dealing with resources on a species basis to an approach based on management of spatially defined ecosystems requires an accurate accounting of energy flow. The flow of energy from primary production through the food web will ultimately limit upper trophic-level fishery yields. In this work, we examine the relationship between yield and several metrics including net primary production, chlorophyll concentration, particle-export ratio, and the ratio of secondary to primary production. We also evaluate the relationship between yield and two additional rate measures that describe the export of energy from the pelagic food web, particle export flux and mesozooplankton productivity. We found primary production is a poor predictor of global fishery yields for a sample of 52 large marine ecosystems. However, chlorophyll concentration, particle-export ratio, and the ratio of secondary to primary production were positively associated with yields. The latter two measures provide greater mechanistic insight into factors controlling fishery production than chlorophyll concentration alone. Particle export flux and mesozooplankton productivity were also significantly related to yield on a global basis. Collectively, our analyses suggest that factors related to the export of energy from pelagic food webs are critical to defining patterns of fishery yields. Such trophic patterns are associated with temperature and latitude and hence greater yields are associated with colder, high latitude ecosystems.
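A sketch of the kind of cross-ecosystem association test reported here, an ordinary least-squares fit of fishery yield against chlorophyll concentration across large marine ecosystems; all values below are fabricated placeholders, not the study's data:

```python
# Simple linear association between chlorophyll and fishery yield across LMEs
import numpy as np
from scipy import stats

chl        = np.array([0.2, 0.4, 0.5, 0.8, 1.1, 1.6, 2.0, 2.4])  # mg m-3, hypothetical
fish_yield = np.array([0.3, 0.5, 0.9, 1.2, 1.8, 2.1, 2.9, 3.2])  # t km-2, hypothetical

res = stats.linregress(chl, fish_yield)
print(f"slope={res.slope:.2f}, r={res.rvalue:.2f}, p={res.pvalue:.3f}")
```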
Porto, Paolo; Walling, Des E; Alewell, Christine; Callegari, Giovanni; Mabit, Lionel; Mallimo, Nicola; Meusburger, Katrin; Zehringer, Markus
2014-12-01
Soil erosion and both its on-site and off-site impacts are increasingly seen as a serious environmental problem across the world. The need for an improved evidence base on soil loss and soil redistribution rates has directed attention to the use of fallout radionuclides, and particularly (137)Cs, for documenting soil redistribution rates. This approach possesses important advantages over more traditional means of documenting soil erosion and soil redistribution. However, one key limitation of the approach is the time-averaged or lumped nature of the estimated erosion rates. In nearly all cases, these will relate to the period extending from the main period of bomb fallout to the time of sampling. Increasing concern for the impact of global change, particularly that related to changing land use and climate change, has frequently directed attention to the need to document changes in soil redistribution rates within this period. Re-sampling techniques, which should be distinguished from repeat-sampling techniques, have the potential to meet this requirement. As an example, the use of a re-sampling technique to derive estimates of the mean annual net soil loss from a small (1.38 ha) forested catchment in southern Italy is reported. The catchment was originally sampled in 1998 and samples were collected from points very close to the original sampling points again in 2013. This made it possible to compare the estimate of mean annual erosion for the period 1954-1998 with that for the period 1999-2013. The availability of measurements of sediment yield from the catchment for parts of the overall period made it possible to compare the results provided by the (137)Cs re-sampling study with the estimates of sediment yield for the same periods. In order to compare the estimates of soil loss and sediment yield for the two different periods, it was necessary to establish the uncertainty associated with the individual estimates. In the absence of a generally accepted procedure for such calculations, key factors influencing the uncertainty of the estimates were identified and a procedure developed. The results of the study demonstrated that there had been no significant change in mean annual soil loss in recent years and this was consistent with the information provided by the estimates of sediment yield from the catchment for the same periods. The study demonstrates the potential for using a re-sampling technique to document recent changes in soil redistribution rates. Copyright © 2014. Published by Elsevier Ltd.
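One simple way to formalize the comparison of two uncertain period-mean estimates is a z-statistic on their difference; the rates and standard errors below are invented for illustration and do not reproduce the authors' uncertainty procedure:

```python
# Two-sample z-test on period-mean erosion rates with propagated uncertainty
import math

rate_54_98, se_54_98 = 11.9, 2.1   # t ha-1 yr-1, hypothetical mean and SE (1954-1998)
rate_99_13, se_99_13 = 10.8, 2.6   # t ha-1 yr-1, hypothetical mean and SE (1999-2013)

z = (rate_54_98 - rate_99_13) / math.sqrt(se_54_98**2 + se_99_13**2)
print(f"z = {z:.2f}")  # |z| < 1.96 -> no significant change at the 5% level
```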
Han, Yeji; Kim, Hyun Jung; Kong, Kyoung Ae; Kim, Soo Jung; Lee, Su Hwan; Ryu, Yon Ju; Lee, Jin Hwa; Kim, Yookyoung; Shim, Sung Shine
2018-01-01
Background Advances in bronchoscopy and CT-guided lung biopsy have improved the evaluation of small pulmonary lesions (PLs), leading to an increase in preoperative histological diagnosis. We aimed to evaluate the efficacy and safety of transbronchial lung biopsy using radial endobronchial ultrasound and virtual bronchoscopic navigation (TBLB-rEBUS&VBN) and CT-guided transthoracic needle biopsy (CT-TNB) for tissue diagnosis of small PLs. Methods A systematic search was performed in five electronic databases, including MEDLINE, EMBASE, Cochrane Library Central Register of Controlled Trials, Web of Science, and Scopus, for relevant studies in May 2016; the selected articles were assessed using meta-analysis. The articles were limited to those published after 2000 that studied small PLs ≤ 3 cm in diameter. Results From 7345 records, 9 articles on the bronchoscopic (BR) approach and 15 articles on the percutaneous (PC) approach were selected. The pooled diagnostic yield was 75% (95% confidence interval [CI], 69–80) using the BR approach and 93% (95% CI, 90–96) using the PC approach. For PLs ≤ 2 cm, the PC approach (pooled diagnostic yield: 92%, 95% CI: 88–95) was superior to the BR approach (66%, 95% CI: 55–76). However, for PLs > 2 cm but ≤ 3 cm, the diagnostic yield using the BR approach improved to 81% (95% CI, 75–85). Complications of pneumothorax and hemorrhage were rare with the BR approach but common with the PC approach. Conclusions CT-TNB was superior to TBLB-rEBUS&VBN for the evaluation of small PLs. However, for lesions greater than 2 cm, the BR approach may be considered, given its diagnostic yield of over 80% and the low risk of procedure-related complications. PMID:29357388
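A hedged sketch of the pooling step behind figures like the 75% and 93% yields, using DerSimonian–Laird random-effects pooling of proportions on the logit scale; the study counts are fabricated and the authors' exact meta-analytic model may differ:

```python
# Random-effects pooling of per-study diagnostic yields (DerSimonian-Laird)
import numpy as np

hits = np.array([30, 45, 61, 52])      # diagnostic successes, hypothetical
n    = np.array([40, 60, 75, 70])      # lesions sampled per study, hypothetical

p = hits / n
y = np.log(p / (1 - p))                # logit-transformed yields
v = 1 / hits + 1 / (n - hits)          # approximate within-study variances

w = 1 / v                              # fixed-effect weights
q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)   # Cochran's Q
tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (v + tau2)                  # random-effects weights
pooled_logit = np.sum(w_re * y) / np.sum(w_re)
pooled = 1 / (1 + np.exp(-pooled_logit))   # back-transform to a proportion
print(f"pooled yield = {pooled:.2%}")
```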
A fuzzy logic approach to modeling the underground economy in Taiwan
NASA Astrophysics Data System (ADS)
Yu, Tiffany Hui-Kuang; Wang, David Han-Min; Chen, Su-Jane
2006-04-01
The size of the ‘underground economy’ (UE) is valuable information in the formulation of macroeconomic and fiscal policy. This study applies fuzzy set theory and fuzzy logic to model Taiwan's UE over the period from 1960 to 2003. Two major factors affecting the size of the UE, the effective tax rate and the degree of government regulation, are used. The size of Taiwan's UE is scaled and compared with those of other models. Although our approach yields different estimates, similar patterns and leading are exhibited throughout the period. The advantage of applying fuzzy logic is twofold. First, it can avoid the complex calculations in conventional econometric models. Second, fuzzy rules with linguistic terms are easy for human to understand.
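A toy sketch of the fuzzy-logic machinery: triangular memberships for the two inputs and a two-rule base defuzzified by a weighted centroid. All membership shapes, rules and output levels below are invented for illustration, not the paper's calibration:

```python
# Mamdani-style toy: two fuzzy inputs -> estimated underground-economy size
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

tax, reg = 0.22, 0.6                      # hypothetical normalized inputs

tax_low,  tax_high = tri(tax, 0.0, 0.1, 0.25), tri(tax, 0.15, 0.35, 0.5)
reg_low,  reg_high = tri(reg, 0.0, 0.2, 0.5),  tri(reg, 0.4, 0.7, 1.0)

# Rule 1: high tax OR high regulation -> large UE
# Rule 2: low tax AND low regulation  -> small UE
fire_large = max(tax_high, reg_high)
fire_small = min(tax_low, reg_low)

ue_small, ue_large = 0.05, 0.30           # representative UE/GDP output levels
ue = (fire_small * ue_small + fire_large * ue_large) / (fire_small + fire_large + 1e-9)
print(f"estimated UE size = {ue:.1%} of GDP")
```

The appeal, as the abstract notes, is that the rule base stays in readable linguistic terms while avoiding the heavier machinery of econometric latent-variable models.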
Combining electromagnetic gyro-kinetic particle-in-cell simulations with collisions
NASA Astrophysics Data System (ADS)
Slaby, Christoph; Kleiber, Ralf; Könies, Axel
2017-09-01
It has been an open question whether for electromagnetic gyro-kinetic particle-in-cell (PIC) simulations pitch-angle collisions and the recently introduced pullback transformation scheme (Mishchenko et al., 2014; Kleiber et al., 2016) are consistent. This question is positively answered by comparing the PIC code EUTERPE with an approach based on an expansion of the perturbed distribution function in eigenfunctions of the pitch-angle collision operator (Legendre polynomials) to solve the electromagnetic drift-kinetic equation with collisions in slab geometry. It is shown how both approaches yield the same results for the frequency and damping rate of a kinetic Alfvén wave and how the perturbed distribution function is substantially changed by the presence of pitch-angle collisions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anticic, T.; Baatar, B.; Bartke, J.
Production of d, t, and 3He nuclei in central Pb + Pb interactions was studied at five collision energies (√s_NN = 6.3, 7.6, 8.8, 12.3, and 17.3 GeV) with the NA49 detector at the CERN Super Proton Synchrotron. Transverse momentum spectra, rapidity distributions, and particle ratios were measured. Yields are compared to predictions of statistical models. Phase-space distributions of light nuclei are discussed and compared to those of protons in the context of a coalescence approach. Finally, the coalescence parameters B2 and B3, as well as coalescence radii for d and 3He, were determined as a function of transverse mass at all energies.
NASA Astrophysics Data System (ADS)
Hartmann, J. M.; Veillerot, M.; Prévitali, B.
2017-10-01
We have compared co-flow and cyclic deposition/etch (CDE) processes for the selective epitaxial growth of Si:P layers. High growth rates, relatively low resistivities and significant amounts of tensile strain (up to 10 nm min-1, 0.55 mOhm cm and a strain equivalent to 1.06% of substitutional C in Si:C layers) were obtained at 700 °C and 760 Torr with a co-flow approach and a SiH2Cl2 + PH3 + HCl chemistry. This approach was successfully used to thicken the source and drain regions of n-type fin-shaped Field Effect Transistors. Meanwhile, the (Si2H6 + PH3/HCl + GeH4) CDE process, evaluated at 600 °C and 80 Torr, yielded even lower resistivities (typically 0.4 mOhm cm), at the cost, however, of the tensile strain, which was lost due to (i) the incorporation of Ge atoms (typically 1.5%) into the lattice during the selective etch steps and (ii) a reduction by a factor of two of the P atomic concentration in CDE layers compared to that in layers grown in a single step (5 × 1020 cm-3 compared to 1021 cm-3).
Incorporating comparative genomics into the design-test-learn cycle of microbial strain engineering.
Sardi, Maria; Gasch, Audrey P
2017-08-01
Engineering microbes with new properties is an important goal in industrial engineering, to establish biological factories for the production of biofuels, commodity chemicals and pharmaceuticals. But engineering microbes to produce new compounds at high yield remains a major challenge for economically viable production. Incorporating several modern approaches, including synthetic and systems biology, metabolic modeling and regulatory rewiring, has proven to significantly advance industrial strain engineering. This review highlights how comparative genomics can also facilitate strain engineering, by identifying novel genes and pathways, regulatory mechanisms and genetic background effects for engineering. We discuss how incorporating comparative genomics into the design-test-learn cycle of strain engineering can provide novel information that complements other engineering strategies. © FEMS 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Yin, Hua; Ma, Yanlin; Deng, Yang; Xu, Zhenbo; Liu, Junyan; Zhao, Junfeng; Dong, Jianjun; Yu, Junhong; Chang, Zongming
2016-08-01
Genome shuffling is an efficient and promising approach for the rapid improvement of microbial phenotypes. In this study, genome shuffling was applied to enhance the yield of glutathione produced by Saccharomyces cerevisiae YS86. Six isolates with subtle improvements in glutathione yield were obtained from populations generated by ultraviolet (UV) irradiation and nitrosoguanidine (NTG) mutagenesis. These yeast strains were then subjected to recursive pool-wise protoplast fusion. A strain library that was likely to yield positive colonies was created by fusing the lethal protoplasts obtained from both UV irradiation and heat treatments. After two rounds of genome shuffling, a high-yield recombinant strain, YSF2-19, was obtained that exhibited 3.2- and 3.3-fold increases in glutathione production in shake-flask and fermenter cultures, respectively. Comparative analysis of synthetase gene expression between the initial and shuffled strains was conducted using FQ (fluorescent quantitation) RT-PCR (reverse transcription polymerase chain reaction). Delta CT (threshold cycle) relative quantitation analysis revealed that glutathione synthetase gene (GSH-I) expression at the transcriptional level in the YSF2-19 strain was 9.9-fold greater than in the initial YS86. The shuffled yeast strain has potential applications in the brewing, food, and pharmaceutical industries. At the same time, the analysis of the improved phenotypes will provide valuable data for inverse metabolic engineering. Copyright © 2016 Elsevier B.V. All rights reserved.
A Complex Systems Approach to Causal Discovery in Psychiatry.
Saxe, Glenn N; Statnikov, Alexander; Fenyo, David; Ren, Jiwen; Li, Zhiguo; Prasad, Meera; Wall, Dennis; Bergman, Nora; Briggs, Ernestine C; Aliferis, Constantin
2016-01-01
Conventional research methodologies and data analytic approaches in psychiatric research are unable to reliably infer causal relations without experimental designs, or to make inferences about the functional properties of the complex systems in which psychiatric disorders are embedded. This article describes a series of studies to validate a novel hybrid computational approach, the Complex Systems-Causal Network (CS-CN) method, designed to integrate causal discovery within a complex systems framework for psychiatric research. The CS-CN method was first applied to an existing dataset on psychopathology in 163 children hospitalized with injuries (validation study). Next, it was applied to a much larger dataset of traumatized children (replication study). Finally, the CS-CN method was applied in a controlled experiment using a 'gold standard' dataset for causal discovery and compared with other methods for accurately detecting causal variables (resimulation controlled experiment). The CS-CN method successfully detected a causal network of 111 variables and 167 bivariate relations in the initial validation study. This causal network had well-defined adaptive properties, and a set of variables was found that disproportionally contributed to these properties. Modeling the removal of these variables resulted in significant loss of adaptive properties. The CS-CN method was successfully applied in the replication study and performed better than traditional statistical methods, and similarly to state-of-the-art causal discovery algorithms, in the causal detection experiment. The CS-CN method was validated, replicated, and yielded both novel and previously validated findings related to risk factors and potential treatments of psychiatric disorders. The novel approach yields both fine-grain (micro) and high-level (macro) insights and thus represents a promising approach for complex systems-oriented research in psychiatry.
Robotic Inguinal Hernia Repair: Technique and Early Experience.
Arcerito, Massimo; Changchien, Eric; Bernal, Oscar; Konkoly-Thege, Adam; Moon, John
2016-10-01
Laparoscopic inguinal hernia repair has been shown to have multiple advantages compared with open repair, such as less postoperative pain and an earlier return to daily activities, with a comparable recurrence rate. We speculate that robotic inguinal hernia repair may yield equivalent benefits while providing the surgeon added dexterity. One hundred consecutive robotic inguinal hernia repairs with mesh were performed in patients with a mean age of 56 years (range 25-96). Fifty-six unilateral hernias and 22 bilateral hernias were repaired among 62 males and 16 females. Polypropylene mesh was used for reconstruction. All but two repairs were completed robotically. Mean operative time was 52 minutes per hernia repair (range 45-67). Five patients were admitted overnight based on their advanced age. Regular diet was resumed immediately. Postoperative pain was minimal, and regular activity was achieved after an average of four days. One patient, early in our experience, had a recurrence after three months, which was repaired robotically. Mean follow-up time was 12 months. These data, compared with the laparoscopic approach, suggest similar recurrence rates and postoperative pain. We believe comparative studies with the laparoscopic approach need to be performed to assess the role robotic surgery has in the treatment of inguinal hernia.
Chen, Jie; Zhou, Ling; Tan, Chong Kiat; Yeung, Ying-Yeung
2012-01-20
A facile and enantioselective approach toward 3,4-dihydroisocoumarin was developed. The method involved an amino-thiocarbamate catalyzed enantioselective bromocyclization of styrene-type carboxylic acids, yielding 3-bromo-3,4-dihydroisocoumarins with good yields and ee's. 3-Bromo-3,4-dihydroisocoumarins are versatile building blocks for various dihydroisocoumarin derivatives in which the Br group can readily be modified to achieve biologically important 4-O-type and 4-N-type 3,4-dihydroisocoumarin systems. In addition, studies indicated that, by refining some parameters, the synthetically useful 5-exo phthalide products could be achieved with good yields and ee's.
Development of a Cadaveric Model for Arthrocentesis.
MacIver, Melissa A; Johnson, Matthew
2015-01-01
This article reports the development of a novel cadaveric model for future use in teaching arthrocentesis. In the clinical setting, animal safety is essential and practice opportunities are thus limited. The objectives of the study were to develop a model and compare it to an unmodified cadaver by injecting one of two types of fluid to increase yield; to compare the two injected fluids, mineral oil (MO) and hypertonic saline (HS), for any difference in yield; and to compare aspiration immediately after injection (T1) with aspiration three hours after injection (T2) for any effect on diagnostic yield. Joints used included the stifle, elbow, and carpus in eight medium dog cadavers. Arthrocentesis was performed before injection (control) and yield measured. Test joints were injected with MO or HS and yield measured after range of motion (T1) and three hours post injection to simulate lab preparation (T2). Both models had statistically significantly higher yield compared with the unmodified cadaver in all joints at T1 and T2 (p<.05), with the exception of HST2 carpus. T2 aspiration had a statistically significantly lower yield compared with T1HS carpus, T1HS elbow, and T1MO carpus. Overall, irrespective of fluid volume or type, percent yield was lower at T2 than at T1. No statistically significant difference was seen between HS and MO in most joints, with the exceptions of MOT1 stifle and HST2 elbow. Within the time frame assessed, both models were acceptable. However, the HS model proved more appropriate for student use because aspiration of MO was difficult.
NASA Astrophysics Data System (ADS)
Crawford, I.; Ruske, S.; Topping, D. O.; Gallagher, M. W.
2015-11-01
In this paper we present improved methods for discriminating and quantifying primary biological aerosol particles (PBAPs) by applying hierarchical agglomerative cluster analysis to multi-parameter ultraviolet-light-induced fluorescence (UV-LIF) spectrometer data. The methods employed in this study can be applied to data sets in excess of 1 × 10⁶ points on a desktop computer, allowing each fluorescent particle in a data set to be explicitly clustered. This reduces the potential for misattribution found in the subsampling and comparative attribution methods used in previous approaches, improving our capacity to discriminate and quantify PBAP meta-classes. We evaluate the performance of several hierarchical agglomerative cluster analysis linkages and data normalisation methods using laboratory samples of known particle types and an ambient data set. Fluorescent and non-fluorescent polystyrene latex spheres were sampled with a Wideband Integrated Bioaerosol Spectrometer (WIBS-4), where the optical size, asymmetry factor and fluorescence measurements were used as inputs to the analysis package. It was found that the Ward linkage with z-score or range normalisation performed best, correctly attributing 98% and 98.1% of the data points, respectively. The best-performing methods were applied to the BEACHON-RoMBAS (Bio-hydro-atmosphere interactions of Energy, Aerosols, Carbon, H2O, Organics and Nitrogen-Rocky Mountain Biogenic Aerosol Study) ambient data set, where it was found that the z-score and range normalisation methods yield similar results, with each method producing clusters representative of fungal spores and bacterial aerosol, consistent with previous results. The z-score result was compared to clusters generated with previous approaches (WIBS AnalysiS Program, WASP), where we observe that the subsampling and comparative attribution method employed by WASP results in the overestimation of the fungal spore concentration by a factor of 1.5 and the underestimation of the bacterial aerosol concentration by a factor of 5. We suggest that this is likely due to errors arising from misattribution caused by poor centroid definition and from the failure to assign particles to a cluster, both consequences of the subsampling and comparative attribution method employed by WASP. The methods used here allow the entire fluorescent particle population to be analysed, yielding an explicit cluster attribution for each particle and improving cluster centroid definition and our capacity to discriminate and quantify PBAP meta-classes compared to previous approaches.
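The best-performing recipe reported here (z-score normalization plus Ward linkage) can be sketched with scipy on a stand-in feature matrix; note that scipy's linkage is quadratic in memory, so data sets approaching 10⁶ points need the more scalable implementation the authors describe:

```python
# z-score normalization + Ward-linkage agglomerative clustering
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 5)),      # stand-in particle class A
               rng.normal(3, 1, (200, 5))])     # stand-in particle class B
# columns would be optical size, asymmetry factor, fluorescence channels

Xz = (X - X.mean(axis=0)) / X.std(axis=0)       # z-score normalization
Z = linkage(Xz, method="ward")                  # Ward linkage on all points
labels = fcluster(Z, t=2, criterion="maxclust") # cut the tree into 2 clusters
print(np.bincount(labels)[1:])                  # cluster sizes
```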
NASA Astrophysics Data System (ADS)
Krstulović-Opara, Lovre; Surjak, Martin; Vesenjak, Matej; Tonković, Zdenko; Kodvanj, Janoš; Domazet, Željko
2015-11-01
To investigate the applicability of infrared thermography as a tool for acquiring dynamic yielding in metals, a comparison of infrared thermography with three-dimensional digital image correlation has been made. Dynamic tension tests and three-point bending tests of aluminum alloys were performed to evaluate the results obtained by IR thermography and to identify the capabilities and limits of the two methods. Both approaches detect plastification zone migrations during the yielding process. The results of the tension test and the three-point bending test proved the validity of the IR approach as a method for evaluating the dynamic yielding process when used on complex structures such as cellular porous materials. The stability of the yielding process in the three-point bending test, in contrast to the fluctuation of the plastification front in the tension test, is of great importance for the validation of numerical constitutive models. The research demonstrated the strong performance, robustness and reliability of the IR approach when used to evaluate yielding during dynamic loading processes, while the 3D DIC method proved to be superior in the low-velocity loading regimes. This research, based on two basic tests, confirmed the conclusions and suggestions presented in our previous research on porous materials, where middle-wave infrared thermography was applied.
Chu, Hui-May; Ette, Ene I
2005-09-02
This study was performed to develop a new nonparametric approach for the estimation of robust tissue-to-plasma ratio from extremely sparsely sampled paired data (i.e., one sample each from plasma and tissue per subject). Tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling-based approaches (e.g., the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by the different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to 2 concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naïve data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
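A loose sketch of resampling-based estimation of a tissue-to-plasma AUC ratio from sparse paired data; this shows only the general idea, not the authors' exact 2-phase algorithm, and all concentrations below are invented:

```python
# Resampling a tissue-to-plasma AUC ratio from one paired sample per subject
import numpy as np

times = np.array([0.5, 1, 2, 4, 8])     # h, one subject group per time point
plasma = {t: np.array(v) for t, v in zip(times, [[12, 10, 14], [9, 8, 11],
                                                 [6, 7, 5], [3, 4, 3], [1, 2, 1]])}
tissue = {t: np.array(v) for t, v in zip(times, [[6, 5, 7], [6, 5, 7],
                                                 [5, 4, 5], [3, 3, 2], [1, 1, 1]])}
rng = np.random.default_rng(7)

ratios = []
for _ in range(2000):                    # resampling replications
    cp, ct = [], []
    for t in times:
        i = rng.integers(plasma[t].size) # same subject drawn for both tissues,
        cp.append(plasma[t][i])          # preserving the pairing the abstract
        ct.append(tissue[t][i])          # shows is important
    ratios.append(np.trapz(ct, times) / np.trapz(cp, times))

print(f"ratio = {np.mean(ratios):.2f} +/- {np.std(ratios):.2f}")
```

Unlike naïve averaging, the replications give the ratio a variability estimate for free.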
Performance of FFT methods in local gravity field modelling
NASA Technical Reports Server (NTRS)
Forsberg, Rene; Solheim, Dag
1989-01-01
Fast Fourier transform (FFT) methods provide a fast and efficient means of processing large amounts of gravity or geoid data in local gravity field modelling. The FFT methods, however, have a number of theoretical and practical limitations, especially the use of the flat-earth approximation and the requirement for gridded data. In spite of this, the method often yields excellent results in practice when compared to other more rigorous (and computationally expensive) methods, such as least-squares collocation. The good performance of the FFT methods illustrates that the theoretical approximations are offset by the capability of taking into account more data in larger areas, which is especially important for geoid predictions. For best results, good data gridding algorithms are essential. In practice, truncated collocation approaches may be used. For large areas at high latitudes the gridding must be done using suitable map projections such as UTM, to avoid trivial errors caused by the meridian convergence. The FFT methods are compared to ground truth data in New Mexico (xi, eta from delta g), Scandinavia (N from delta g, where the geoid fits to 15 cm over 2000 km), and areas of the Atlantic (delta g from satellite altimetry using Wiener filtering). In all cases the FFT methods yield results comparable or superior to those of other methods.
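A hedged sketch of the planar spectral evaluation this kind of work relies on, using the common flat-earth relation N̂(k) = Δĝ(k)/(γ|k|) with the zero-wavenumber term removed; grid spacing and anomalies below are synthetic:

```python
# Flat-earth FFT route from gridded gravity anomalies to geoid undulations
import numpy as np

gamma = 9.81                 # m s-2, normal gravity (approximate)
dx = dy = 5000.0             # 5 km grid spacing
ny = nx = 256

rng = np.random.default_rng(3)
dg = rng.normal(0, 20e-5, (ny, nx))      # anomalies in m s-2 (about 20 mGal)

fx = np.fft.fftfreq(nx, d=dx)            # cycles per metre
fy = np.fft.fftfreq(ny, d=dy)
kabs = 2 * np.pi * np.hypot(*np.meshgrid(fx, fy))  # |k| in rad/m

DG = np.fft.fft2(dg)
with np.errstate(divide="ignore", invalid="ignore"):
    NH = np.where(kabs > 0, DG / (gamma * kabs), 0.0)  # zero the DC term
N = np.real(np.fft.ifft2(NH))            # geoid undulations in metres
print(f"undulation std = {N.std():.3f} m")
```

The whole grid is transformed at once, which is exactly the "more data in larger areas" advantage the abstract credits for offsetting the planar approximation.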
DOE Office of Scientific and Technical Information (OSTI.GOV)
Achasov, N. N., E-mail: achasov@math.nsc.ru
2011-03-15
The approach to the Z → γψ and Z → γΥ decay study is presented in detail, based on the sum rules for the Z → cc̄ → γγ* and Z → bb̄ → γγ* amplitudes and their derivatives. The branching ratios of the Z → γψ and Z → γΥ decays are calculated for different hypotheses on saturation of the sum rules. The lower bounds Σ_ψ BR(Z → γψ) = 1.95 × 10⁻⁷ and Σ_Υ BR(Z → γΥ) = 7.23 × 10⁻⁷ are found. Deviations from the lower bounds are discussed, including the possibility of BR(Z → γJ/ψ(1S)) ≈ BR(Z → γΥ(1S)) ≈ 10⁻⁶, which could probably be measured at the LHC. The angular distributions in the Z → γψ and Z → γΥ decays are also calculated.
Is There an Interest to Use Deuteron Beams to Produce Non-Conventional Radionuclides?
Alliot, Cyrille; Audouin, Nadia; Barbet, Jacques; Bonraisin, Anne-Cecile; Bossé, Valérie; Bourdeau, Cécile; Bourgeois, Mickael; Duchemin, Charlotte; Guertin, Arnaud; Haddad, Ferid; Huclier-Markai, Sandrine; Kerdjoudj, Rabah; Laizé, Johan; Métivier, Vincent; Michel, Nathalie; Mokili, Marcel; Pageau, Mickael; Vidal, Aurélien
2015-01-01
With the recent interest in the theranostic approach, there has been renewed interest in alternative radionuclides for nuclear medicine. They can be produced using common production routes, i.e., using protons accelerated by biomedical cyclotrons or neutrons produced in research reactors. However, in some cases it can be more valuable to use deuterons as projectiles. In the case of Cu-64, smaller quantities of the expensive target material, Ni-64, are used with deuterons as compared with protons for the same produced activity. For the Sc-44m/Sc-44g generator, deuterons afford a higher Sc-44m production yield than protons. Finally, in the case of Re-186g, deuterons lead to a production yield five times higher than protons. These three examples show that it is of interest to consider not only protons or neutrons but also deuterons to produce alternative radionuclides. PMID:26029696
Floating shock fitting via Lagrangian adaptive meshes
NASA Technical Reports Server (NTRS)
Vanrosendale, John
1994-01-01
In recent works we have formulated a new approach to compressible flow simulation, combining the advantages of shock-fitting and shock-capturing. Using a cell-centered Roe scheme discretization on unstructured meshes, we warp the mesh while marching to steady state, so that mesh edges align with shocks and other discontinuities. This new algorithm, the Shock-fitting Lagrangian Adaptive Method (SLAM) is, in effect, a reliable shock-capturing algorithm which yields shock-fitted accuracy at convergence. Shock-capturing algorithms like this, which warp the mesh to yield shock-fitted accuracy, are new and relatively untried. However, their potential is clear. In the context of sonic booms, accurate calculation of near-field sonic boom signatures is critical to the design of the High Speed Civil Transport (HSCT). SLAM should allow computation of accurate N-wave pressure signatures on comparatively coarse meshes, significantly enhancing our ability to design low-boom configurations for high-speed aircraft.
Zou, Zhaoyong; Lin, Kaili; Chen, Lei; Chang, Jiang
2012-11-01
Herein, carbonated hydroxyapatite (CHAp) nanopowders were synthesized via a sonochemistry-assisted microwave process. The influences of microwave and ultrasonic irradiation on the crystallinity, morphology, yield, Ca/P molar ratio, specific surface area and dispersibility were investigated and compared with the conventional precipitation method. The results showed that the sonochemistry-assisted microwave process significantly increased the synthetic efficiency. Well-crystallized nanopowders could be obtained at a high yield of 98.8% in an ultra-short period of 5 min. In addition, the crystallization process was promoted by increasing the ultrasonic and microwave power and the reaction time during the sonochemistry-assisted microwave process. The sonochemistry assistance also remarkably increased the specific surface area and dispersibility of the as-obtained products. These results suggest that the sonochemistry-assisted microwave process is an effective approach to synthesize CHAp with high efficiency. Copyright © 2012 Elsevier B.V. All rights reserved.
A Particle Swarm Optimization-Based Approach with Local Search for Predicting Protein Folding.
Yang, Cheng-Hong; Lin, Yu-Shiun; Chuang, Li-Yeh; Chang, Hsueh-Wei
2017-10-01
The hydrophobic-polar (HP) model is commonly used for predicting protein folding structures and hydrophobic interactions. This study developed a particle swarm optimization (PSO)-based algorithm combined with local search algorithms; specifically, the high exploration PSO (HEPSO) algorithm (which can execute global search processes) was combined with three local search algorithms (hill-climbing algorithm, greedy algorithm, and Tabu table), yielding the proposed HE-L-PSO algorithm. By using 20 known protein structures, we evaluated the performance of the HE-L-PSO algorithm in predicting protein folding in the HP model. The proposed HE-L-PSO algorithm exhibited favorable performance in predicting both short and long amino acid sequences with high reproducibility and stability, compared with seven reported algorithms. The HE-L-PSO algorithm yielded optimal solutions for all predicted protein folding structures. All HE-L-PSO-predicted protein folding structures possessed a hydrophobic core that is similar to normal protein folding.
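For orientation, the canonical global-best PSO update that HEPSO-style variants build on, here minimizing a toy function; the HP-lattice encoding and the hill-climbing, greedy and Tabu local searches of HE-L-PSO are omitted, and all constants are conventional guesses:

```python
# Bare-bones global-best particle swarm optimization on a toy objective
import numpy as np

rng = np.random.default_rng(4)
f = lambda x: np.sum(x**2, axis=1)          # sphere function stand-in

n, dim, iters, w, c1, c2 = 30, 10, 200, 0.72, 1.49, 1.49
x = rng.uniform(-5, 5, (n, dim))
v = np.zeros((n, dim))
pbest, pbest_f = x.copy(), f(x)
g = pbest[np.argmin(pbest_f)]               # global best position

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
    x = x + v                                               # position update
    fx = f(x)
    improved = fx < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], fx[improved]
    g = pbest[np.argmin(pbest_f)]

print("best value:", pbest_f.min())
```

In HE-L-PSO the local-search moves would be applied to promising particles between these global updates, which is where the reported gains over plain PSO come from.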
Fan, HuiYin; Dumont, Marie-Josée; Simpson, Benjamin K
2017-11-01
Gelatin with high-molecular-weight protein chains (α-chains) was extracted from salmon (Salmo salar) skin using a trypsin-aided process. Response surface methodology was used to optimise the extraction parameters. Yield, hydroxyproline content and the protein electrophoretic profile of the gelatin, obtained via sodium dodecyl sulfate-polyacrylamide gel electrophoresis, were used as responses in the optimisation study. The optimum conditions were determined as: trypsin concentration of 1.49 U/g, extraction temperature of 45 °C, and extraction time of 6 h 16 min. The optimised response surface model was significant and produced an experimental value (202.04 ± 8.64%) in good agreement with the predicted value (204.19%). Twofold higher yields of gelatin with high-molecular-weight protein chains were achieved in the optimised process with trypsin treatment compared to the process without trypsin.
Self-assembling choline mimicks with enhanced binding affinities to C-LytA protein
Shi, Yang; Zhou, Hao; Zhang, Xiaoli; Wang, Jingyu; Long, Jiafu; Yang, Zhimou; Ding, Dan
2014-01-01
Streptococcus pneumoniae (pneumococcus) causes multiple illnesses in humans. Exploration of effective inhibitors with multivalent attachment sites for choline-binding modules is of great importance for reducing pneumococcal virulence. In this work, we successfully developed two self-assembling choline mimicks, Ada-GFFYKKK' and Nap-GFFYKKK', which self-assemble into nanoparticles and nanofibers, respectively, yielding multivalent architectures. Additionally, the best-characterized choline-binding module, the C-terminal moiety of the pneumococcal cell-wall amidase LytA (C-LytA), was produced with high purity. The self-assembling Ada-GFFYKKK' and Nap-GFFYKKK' show strong interactions with C-LytA, possessing much higher association constants for the choline-binding module than the individual peptide Fmoc-K'. This study thus provides a self-assembly approach to yield inhibitors that are very promising for reducing pneumococcal virulence. PMID:25315737
Noise-induced escape in an excitable system
NASA Astrophysics Data System (ADS)
Khovanov, I. A.; Polovinkin, A. V.; Luchinsky, D. G.; McClintock, P. V. E.
2013-03-01
We consider the stochastic dynamics of escape in an excitable system, the FitzHugh-Nagumo (FHN) neuronal model, for different classes of excitability. We discuss, first, the threshold structure of the FHN model as an example of a system without a saddle state. We then develop a nonlinear (nonlocal) stability approach based on the theory of large fluctuations, including a finite-noise correction, to describe noise-induced escape in the excitable regime. We show that the threshold structure is revealed via patterns of most probable (optimal) fluctuational paths. The approach allows us to estimate the escape rate and the exit location distribution. We compare the responses of a monostable resonator and monostable integrator to stochastic input signals and to a mixture of periodic and stochastic stimuli. Unlike the commonly used local analysis of the stable state, our nonlocal approach based on optimal paths yields results that are in good agreement with direct numerical simulations of the Langevin equation.
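A direct Euler–Maruyama simulation of the noisy FHN model is the natural numerical benchmark for the optimal-path estimates; the sketch below uses generic excitable-regime parameters rather than the paper's values, and a crude threshold crossing as the escape detector:

```python
# Euler-Maruyama integration of the stochastic FitzHugh-Nagumo model
import numpy as np

a, b, eps, I = 0.7, 0.8, 0.08, 0.0    # classic excitable-regime parameters
sigma, dt, steps = 0.25, 1e-3, 500_000
rng = np.random.default_rng(5)
xi = rng.normal(size=steps)           # pre-drawn Gaussian increments
sq = sigma * np.sqrt(dt)

v, w, escapes, above = -1.2, -0.6, 0, False
for k in range(steps):
    v += (v - v**3 / 3 - w + I) * dt + sq * xi[k]   # fast (voltage) variable
    w += eps * (v + a - b * w) * dt                 # slow (recovery) variable
    if v > 1.0 and not above:         # count a noise-induced excursion once
        escapes += 1
        above = True
    elif v < 0.0:
        above = False

print("escape rate ~", escapes / (steps * dt), "per unit time")
```

Comparing such direct counts with the rate predicted from the optimal fluctuational paths is exactly the kind of consistency check the abstract reports.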
Implementing a Sleep Health Education and Sleep Disorders Screening Program in Fire Departments
Barger, Laura K.; O’Brien, Conor S.; Rajaratnam, Shantha M.W.; Qadri, Salim; Sullivan, Jason P.; Wang, Wei; Czeisler, Charles A.; Lockley, Steven W.
2016-01-01
Objective: The objective of this study is to compare three methods of administering a sleep health program (SHP) in fire departments. Methods: An SHP, comprising sleep health education and screening for common sleep disorders, was implemented in eight fire departments using three approaches: expert-led, train-the-trainer, and online. Participation rates, knowledge assessments, surveys, and focus group interviews were analyzed to assess the reach and effectiveness of the methodologies. Results: The Expert-led SHP had the highest participation rate, greatest improvement in knowledge scores, and prompted more firefighters to seek clinical sleep disorder evaluations (41%) than the other approaches (20 to 25%). Forty-two percent of focus group participants reported changing their sleep behaviors. Conclusion: All approaches yielded reasonable participation rates, but expert-led programs had the greatest reach and effectiveness in educating and screening firefighters for sleep disorders. PMID:27035103
Moura, Lidia Mvr; Westover, M Brandon; Kwasnik, David; Cole, Andrew J; Hsu, John
2017-01-01
The elderly population faces an increasing number of cases of chronic neurological conditions, such as epilepsy and Alzheimer's disease. Because the elderly with epilepsy are commonly excluded from randomized controlled clinical trials, there are few rigorous studies to guide clinical practice. When the elderly are eligible for trials, they either rarely participate or frequently have poor adherence to therapy, thus limiting both generalizability and validity. In contrast, large observational data sets are increasingly available but are susceptible to bias when analysed with common analytic approaches. Recent developments in causal-inference analytic approaches introduce the possibility of emulating randomized controlled trials to yield valid estimates. We provide a practical example of the application of the principles of causal inference to a large observational data set of patients with epilepsy. This review also provides a framework for comparative-effectiveness research in chronic neurological conditions.
Simoncelli, Sabrina; Roller, Eva-Maria; Urban, Patrick; Schreiber, Robert; Turberfield, Andrew J; Liedl, Tim; Lohmüller, Theobald
2016-11-22
DNA origami is a powerful approach for assembling plasmonic nanoparticle dimers and Raman dyes with high yields and excellent positioning control. Here we show how optothermal-induced shrinking of a DNA origami template can be employed to control the gap sizes between two 40 nm gold nanoparticles in a range from 1 to 2 nm. The high field confinement achieved with this optothermal approach was demonstrated by detection of surface-enhanced Raman spectroscopy (SERS) signals from single molecules that are precisely placed within the DNA origami template that spans the nanoparticle gap. By comparing the SERS intensity with respect to the field enhancement in the plasmonic hot-spot region, we found good agreement between measurement and theory. Our straightforward approach for the fabrication of addressable plasmonic nanosensors by DNA origami demonstrates a path toward future sensing applications with single-molecule resolution.
Culturally Tailored Depression/Suicide Prevention in Latino Youth: Community Perspectives.
Ford-Paz, Rebecca E; Reinhard, Christine; Kuebbeler, Andrea; Contreras, Richard; Sánchez, Bernadette
2015-10-01
Latino adolescents are at elevated risk for depression and suicide compared to other ethnic groups. Project goals were to gain insight from community leaders about depression risk factors particular to Latino adolescents and generate innovative suggestions to improve cultural relevance of prevention interventions. This project utilized a CBPR approach to enhance cultural relevance, acceptability, and utility of the findings and subsequent program development. Two focus groups of youth and youth-involved Latino community leaders (n = 18) yielded three overarching themes crucial to a culturally tailored depression prevention intervention: (1) utilize a multipronged and sustainable intervention approach, (2) raise awareness about depression in culturally meaningful ways, and (3) promote Latino youth's social connection and cultural enrichment activities. Findings suggest that both adaptation of existing prevention programs and development of hybrid approaches may be necessary to reduce depression/suicide disparities for Latino youth. One such hybrid program informed by community stakeholders is described.
NASA Astrophysics Data System (ADS)
Kalthoff, Mona; Keim, Frederik; Krull, Holger; Uhrig, Götz S.
2017-05-01
The density matrix formalism and the equation of motion approach are two semi-analytical methods that can be used to compute the non-equilibrium dynamics of correlated systems. While for a bilinear Hamiltonian both formalisms yield the exact result, for any non-bilinear Hamiltonian a truncation is necessary. Due to the fact that the commonly used truncation schemes differ for these two methods, the accuracy of the obtained results depends significantly on the chosen approach. In this paper, both formalisms are applied to the quantum Rabi model. This allows us to compare the approximate results and the exact dynamics of the system and enables us to discuss the accuracy of the approximations as well as the advantages and the disadvantages of both methods. It is shown to which extent the results fulfill physical requirements for the observables and which properties of the methods lead to unphysical results.
Treatment of eating disorders in child and adolescent psychiatry.
Herpertz-Dahlmann, Beate
2017-11-01
Recent research on the multimodal treatment of eating disorders in child and adolescent psychiatry has yielded a significant increase in randomized controlled trials and systematic reviews. This review aims to present relevant findings published during the last 2 years related to medical and psychological treatment of anorexia nervosa, bulimia nervosa and avoidant/restrictive food intake disorder (ARFID). For anorexia nervosa, recent reports described the efficacy of different treatment settings, lengths of hospital stay and high vs. low-calorie refeeding programmes. For both anorexia and bulimia nervosa, a number of randomized controlled trials comparing individual and family-oriented treatment approaches were published. For the newly defined ARFID, only very preliminary results on possible treatment approaches implying a multidisciplinary treatment programme were obtained. Although there is some evidence of the effectiveness of new child and adolescent psychiatric treatment approaches to eating disorders, the relapse rate remains very high, and there is an urgent need for ongoing intensive research.
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen; Wald, Lawrence L.
2017-01-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization. PMID:26915119
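The conventional reconstruction referred to here is, per voxel, dictionary matching by maximum normalized inner product; a sketch with random stand-ins for Bloch-simulated fingerprints (a real pipeline would build the dictionary from sequence simulations and the voxel signals from gridded k-space data):

```python
# Per-voxel MR fingerprinting dictionary matching
import numpy as np

rng = np.random.default_rng(6)
L, K, V = 500, 1000, 50                   # time points, dictionary atoms, voxels
D = rng.normal(size=(K, L)) + 1j * rng.normal(size=(K, L))
D /= np.linalg.norm(D, axis=1, keepdims=True)     # unit-norm dictionary

truth = rng.integers(K, size=V)                   # true atom index per voxel
X = D[truth] * rng.uniform(0.5, 2.0, (V, 1))      # scaled voxel time courses
X += 0.05 * (rng.normal(size=(V, L)) + 1j * rng.normal(size=(V, L)))

match = np.abs(X @ D.conj().T).argmax(axis=1)     # max inner-product matching
print("match accuracy:", np.mean(match == truth))
```

The paper's point is that this matching step is the first iteration of the full ML algorithm when initialized with a gridding reconstruction, so iterating further can only refine it.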
Code of Federal Regulations, 2012 CFR
2012-07-01
... of the vehicle on the left will yield right-of-way. When entering an intersection without traffic.... (b) Drivers turning left within an intersection will yield right-of-way to vehicles approaching from...
3D reconstruction of the magnetic vector potential using model based iterative reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta
Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstructions, and the availability of an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a-posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. A comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.
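A toy illustration of the MAP idea underlying MBIR: minimize a quadratic data-fidelity term plus a smoothness prior, ||Ax − b||² + λ||Dx||², by gradient descent. A 1-D blur stands in for the LTEM forward model; all sizes and weights below are arbitrary:

```python
# MAP estimation toy: data fidelity + smoothness prior, solved by gradient descent
import numpy as np

rng = np.random.default_rng(8)
n = 200
x_true = np.zeros(n)
x_true[60:140] = 1.0                              # stand-in potential profile

# Gaussian blur as a stand-in forward model A
A = np.array([[np.exp(-((i - j) / 4.0) ** 2) for j in range(n)] for i in range(n)])
b = A @ x_true + rng.normal(0, 0.05, n)           # noisy "measurements"

D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]          # finite-difference prior operator
lam, step = 1.0, 1e-4
x = np.zeros(n)
for _ in range(5000):
    grad = 2 * A.T @ (A @ x - b) + 2 * lam * D.T @ (D @ x)
    x -= step * grad                              # iterative MAP minimization

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

The prior plays the role the abstract assigns to the MBIR prior model: it suppresses the artifacts that pure back projection leaves when the measurement set is incomplete.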
Assessing the accuracy of different simplified frictional rolling contact algorithms
NASA Astrophysics Data System (ADS)
Vollebregt, E. A. H.; Iwnicki, S. D.; Xie, G.; Shackleton, P.
2012-01-01
This paper presents an approach for assessing the accuracy of different frictional rolling contact theories. The main characteristic of the approach is that it takes a statistically oriented view. This yields a better insight into the behaviour of the methods in diverse circumstances (varying contact patch ellipticities; mixed longitudinal, lateral and spin creepages) than is obtained when only a small number of (basic) circumstances are used in the comparison. The range of contact parameters that occurs for realistic vehicles and tracks is assessed using simulations with the Vampire vehicle system dynamics (VSD) package. This shows that larger values of spin creepage occur rather frequently. Based on this, our approach is applied to typical cases for which railway VSD packages are used. The results show that particularly the USETAB approach, but also FASTSIM, gives considerably better results than the linear theory, Vermeulen-Johnson, Shen-Hedrick-Elkins and Polach methods, when compared with the 'complete theory' of the CONTACT program.
3D reconstruction of the magnetic vector potential using model based iterative reconstruction.
Prabhat, K C; Aditya Mohan, K; Phatak, Charudatta; Bouman, Charles; De Graef, Marc
2017-11-01
Lorentz transmission electron microscopy (LTEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstruction, and the availability of only an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a posteriori (MAP) probability estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. A comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach. Copyright © 2017 Elsevier B.V. All rights reserved.
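The MAP formulation in the MBIR abstracts above (a data-fidelity forward model plus a prior, minimized iteratively) can be illustrated with a minimal sketch. The quadratic data term, first-difference smoothness prior, and fixed-step gradient descent below are illustrative assumptions for a generic MAP reconstruction, not the regularization or optimizer used in the paper:

```python
import numpy as np

def map_reconstruct(A, y, lam=0.1, step=1e-3, iters=500):
    """Minimize the MAP cost ||A x - y||^2 + lam ||D x||^2 by gradient
    descent, where A is the forward (projection) operator and D is a
    first-difference smoothness prior."""
    n = A.shape[1]
    D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)   # circular first difference
    x = np.zeros(n)
    for _ in range(iters):
        grad = 2 * A.T @ (A @ x - y) + 2 * lam * D.T @ (D @ x)
        x -= step * grad
    return x
```

The prior weight lam is the knob that trades data fidelity against artifact suppression, which is the core advantage the abstracts claim over filtered back projection.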
3D reconstruction of the magnetic vector potential using model based iterative reconstruction
Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta; ...
2017-07-03
Lorentz transmission electron microscopy (LTEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstruction, and the availability of only an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a posteriori (MAP) probability estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. Here, a comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.
Biobased alkylphenols from lignins via a two-step pyrolysis - Hydrodeoxygenation approach.
de Wild, P J; Huijgen, W J J; Kloekhorst, A; Chowdari, R K; Heeres, H J
2017-04-01
Five technical lignins (three organosolv, Kraft and soda lignin) were depolymerised to produce monomeric biobased aromatics, particularly alkylphenols, by a new two-stage thermochemical approach consisting of dedicated pyrolysis followed by catalytic hydrodeoxygenation (HDO) of the resulting pyrolysis oils. Pyrolysis yielded a mixture of guaiacols, catechols and, optionally, syringols in addition to alkylphenols. HDO with heterogeneous catalysts (Ru/C, CoMo/alumina, phosphided NiMo/C) effectively directed the product mixture towards alkylphenols by, among other reactions, demethoxylation. Up to 15 wt% monomeric aromatics (on lignin intake), of which 11 wt% were alkylphenols, was obtained with limited solid formation (<3 wt% on lignin oil intake). For comparison, solid Kraft lignin was also directly hydrotreated for simultaneous depolymerisation and deoxygenation, resulting in two times more alkylphenols. However, the alkylphenol concentration in the product oil is higher for the two-stage approach. Future research should compare direct hydrotreatment and the two-stage approach in more detail through techno-economic assessments. Copyright © 2017 Elsevier Ltd. All rights reserved.
Elmendorf, Sarah C; Henry, Gregory H R; Hollister, Robert D; Fosaa, Anna Maria; Gould, William A; Hermanutz, Luise; Hofgaard, Annika; Jónsdóttir, Ingibjörg S; Jorgenson, Janet C; Lévesque, Esther; Magnusson, Borgþór; Molau, Ulf; Myers-Smith, Isla H; Oberbauer, Steven F; Rixen, Christian; Tweedie, Craig E; Walker, Marilyn D
2015-01-13
Inference about future climate change impacts typically relies on one of three approaches: manipulative experiments, historical comparisons (broadly defined to include monitoring the response to ambient climate fluctuations using repeat sampling of plots, dendroecology, and paleoecology techniques), and space-for-time substitutions derived from sampling along environmental gradients. Potential limitations of all three approaches are recognized. Here we address the congruence among these three main approaches by comparing the degree to which tundra plant community composition changes (i) in response to in situ experimental warming, (ii) with interannual variability in summer temperature within sites, and (iii) over spatial gradients in summer temperature. We analyzed changes in plant community composition from repeat sampling (85 plant communities in 28 regions) and experimental warming studies (28 experiments in 14 regions) throughout arctic and alpine North America and Europe. Increases in the relative abundance of species with a warmer thermal niche were observed in response to warmer summer temperatures using all three methods; however, effect sizes were greater over broad-scale spatial gradients relative to either temporal variability in summer temperature within a site or summer temperature increases induced by experimental warming. The effect sizes for change over time within a site and with experimental warming were nearly identical. These results support the view that inferences based on space-for-time substitution overestimate the magnitude of responses to contemporary climate warming, because spatial gradients reflect long-term processes. In contrast, in situ experimental warming and monitoring approaches yield consistent estimates of the magnitude of response of plant communities to climate warming.
High throughput wafer defect monitor for integrated metrology applications in photolithography
NASA Astrophysics Data System (ADS)
Rao, Nagaraja; Kinney, Patrick; Gupta, Anand
2008-03-01
The traditional approach to semiconductor wafer inspection is based on the use of stand-alone metrology tools, which while highly sensitive, are large, expensive and slow, requiring inspection to be performed off-line and on a lot sampling basis. Due to the long cycle times and sparse sampling, the current wafer inspection approach is not suited to rapid detection of process excursions that affect yield. The semiconductor industry is gradually moving towards deploying integrated metrology tools for real-time "monitoring" of product wafers during the manufacturing process. Integrated metrology aims to provide end-users with rapid feedback of problems during the manufacturing process, and the benefit of increased yield, and reduced rework and scrap. The approach of monitoring 100% of the wafers being processed requires some trade-off in sensitivity compared to traditional standalone metrology tools, but not by much. This paper describes a compact, low-cost wafer defect monitor suitable for integrated metrology applications and capable of detecting submicron defects on semiconductor wafers at an inspection rate of about 10 seconds per wafer (or 360 wafers per hour). The wafer monitor uses a whole wafer imaging approach to detect defects on both un-patterned and patterned wafers. Laboratory tests with a prototype system have demonstrated sensitivity down to 0.3 µm on un-patterned wafers and down to 1 µm on patterned wafers, at inspection rates of 10 seconds per wafer. An ideal application for this technology is preventing photolithography defects such as "hot spots" by implementing a wafer backside monitoring step prior to exposing wafers in the lithography step.
Fan, Mingsheng; Lal, Rattan; Cao, Jian; Qiao, Lei; Su, Yansen; Jiang, Rongfeng; Zhang, Fusuo
2013-01-01
China's food production has increased 6-fold during the past half-century, thanks to increased yields resulting from management intensification, accomplished through greater inputs of fertilizer, water, new crop strains, and other Green Revolution technologies. Yet changes in the underlying quality of soils, and their effects on the yield increase, remain to be determined. Here we provide a first attempt to quantify historical changes in inherent soil productivity and their contributions to the increase in yield. The assessment was based on a dataset derived from 7410 on-farm trials, 8 long-term experiments and an inventory of soil organic matter concentrations of arable land. Results show that even without organic or inorganic fertilizer addition, crop yield from on-farm trials conducted in the 2000s was significantly higher than in the 1980s; the increase ranged from 0.73 to 1.76 Mg/ha for China's major irrigated cereal-based cropping systems. The increase in on-farm yield in control plots since the 1980s was due primarily to the enhancement of soil-related factors, and reflected an improvement in inherent soil productivity. The latter led to higher and more stable yields with the adoption of improved management practices, and contributed 43% of the increase in yield for wheat and 22% for maize in north China, and 31%, 35% and 22% for early and late rice in south China and for the single rice crop in the Yangtze River Basin since 1980. Thus, without an improvement in inherent soil productivity, the 'Agricultural Miracle in China' would not have happened. A comprehensive strategy for improving inherent soil productivity in China, combining engineering-based measures with biological approaches, may be an important lesson for the developing world. We propose that advancing food security in the 21st century, for both China and other parts of the world, will depend on continuously improving inherent soil productivity.
Fan, Mingsheng; Lal, Rattan; Cao, Jian; Qiao, Lei; Su, Yansen; Jiang, Rongfeng; Zhang, Fusuo
2013-01-01
Objective China’s food production has increased 6-fold during the past half-century, thanks to increased yields resulting from management intensification, accomplished through greater inputs of fertilizer, water, new crop strains, and other Green Revolution technologies. Yet changes in the underlying quality of soils, and their effects on the yield increase, remain to be determined. Here we provide a first attempt to quantify historical changes in inherent soil productivity and their contributions to the increase in yield. Methods The assessment was based on a dataset derived from 7410 on-farm trials, 8 long-term experiments and an inventory of soil organic matter concentrations of arable land. Results Even without organic or inorganic fertilizer addition, crop yield from on-farm trials conducted in the 2000s was significantly higher than in the 1980s; the increase ranged from 0.73 to 1.76 Mg/ha for China’s major irrigated cereal-based cropping systems. The increase in on-farm yield in control plots since the 1980s was due primarily to the enhancement of soil-related factors, and reflected an improvement in inherent soil productivity. The latter led to higher and more stable yields with the adoption of improved management practices, and contributed 43% of the increase in yield for wheat and 22% for maize in north China, and 31%, 35% and 22% for early and late rice in south China and for the single rice crop in the Yangtze River Basin since 1980. Conclusions Thus, without an improvement in inherent soil productivity, the ‘Agricultural Miracle in China’ would not have happened. A comprehensive strategy for improving inherent soil productivity in China, combining engineering-based measures with biological approaches, may be an important lesson for the developing world. We propose that advancing food security in the 21st century, for both China and other parts of the world, will depend on continuously improving inherent soil productivity. PMID:24058605
Discrete square root filtering - A survey of current techniques.
NASA Technical Reports Server (NTRS)
Kaminskii, P. G.; Bryson, A. E., Jr.; Schmidt, S. F.
1971-01-01
Current techniques in square root filtering are surveyed and related by applying a duality association. Four efficient square root implementations are suggested, and compared with three common conventional implementations in terms of computational complexity and precision. It is shown that the square root computational burden should not exceed the conventional by more than 50% in most practical problems. An examination of numerical conditioning predicts that the square root approach can yield twice the effective precision of the conventional filter in ill-conditioned problems. This prediction is verified in two examples.
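The square-root formulation surveyed above propagates a Cholesky factor S of the covariance (P = S Sᵀ) rather than P itself, which is what roughly doubles the effective numerical precision in ill-conditioned problems. A minimal sketch of the time update via a QR decomposition follows; matrix names follow common Kalman-filter notation and are assumptions, not the paper's notation:

```python
import numpy as np

def sqrt_time_update(F, S, Q_sqrt):
    """Time update of a square-root Kalman filter: propagate the Cholesky
    factor S (P = S S^T) through P' = F P F^T + Q without ever forming P.
    Q_sqrt is any square root of the process noise covariance Q."""
    M = np.hstack([F @ S, Q_sqrt])      # compound factor: M M^T = F P F^T + Q
    _, R = np.linalg.qr(M.T)            # M^T = Q R  =>  M M^T = R^T R
    return R.T                          # new triangular square-root factor of P'
```

Because only triangular factors are ever stored, the computed covariance stays symmetric and positive semi-definite by construction, which is the conditioning advantage the abstract quantifies.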
Experimental approach to measure thick target neutron yields induced by heavy ions for shielding
NASA Astrophysics Data System (ADS)
Trinh, N. D.; Fadil, M.; Lewitowicz, M.; Brouillard, C.; Clerc, T.; Damoy, S.; Desmezières, V.; Dessay, E.; Dupuis, M.; Grinyer, G. F.; Grinyer, J.; Jacquot, B.; Ledoux, X.; Madeline, A.; Menard, N.; Michel, M.; Morel, V.; Porée, F.; Rannou, B.; Savalle, A.
2017-09-01
Double differential (angular and energy) neutron distributions were measured using an activation foil technique. Reactions were induced by impinging two low-energy heavy-ion beams accelerated with the GANIL CSS1 cyclotron, 36S (12 MeV/u) and 208Pb (6.25 MeV/u), onto thick natCu targets. Results have been compared to Monte-Carlo calculations from two codes (PHITS and FLUKA) for the purpose of benchmarking radiation protection and shielding requirements. This comparison suggests a disagreement between calculations and experiment, particularly for high-energy neutrons.
Constitutive Modeling of Piezoelectric Polymer Composites
NASA Technical Reports Server (NTRS)
Odegard, Gregory M.; Gates, Tom (Technical Monitor)
2003-01-01
A new modeling approach is proposed for predicting the bulk electromechanical properties of piezoelectric composites. The proposed model offers the same level of convenience as the well-known Mori-Tanaka method. In addition, it is shown to yield predicted properties that are, in most cases, as accurate as or more accurate than those of the Mori-Tanaka scheme. In particular, the proposed method is used to determine the electromechanical properties of four piezoelectric polymer composite materials as a function of inclusion volume fraction. The predicted properties are compared to those calculated using the Mori-Tanaka and finite element methods.
Jet printing of convex and concave polymer micro-lenses.
Blattmann, M; Ocker, M; Zappe, H; Seifert, A
2015-09-21
We describe a novel approach for fabricating customized convex as well as concave micro-lenses using substrates with a sophisticated pinning architecture and a drop-on-demand jet printer. The polymeric lens material deposited on the wafer is cured by UV light irradiation, yielding lenses with high-quality surfaces. The surface shape and roughness of the cured polymer lenses are characterized by white light interferometry. Their optical quality is demonstrated by imaging a USAF 1951 test chart. The evaluated modulation transfer function is compared to Zemax simulations as a benchmark for the fabricated lenses.
Theoretical and experimental aspects of chaos control by time-delayed feedback.
Just, Wolfram; Benner, Hartmut; Reibold, Ekkehard
2003-03-01
We review recent developments in the control of chaos by time-delayed feedback methods. While such methods are easily applied even in quite complex experimental contexts, the theoretical analysis yields infinite-dimensional differential-difference systems which are hard to tackle. The essential ideas for a general theoretical approach are sketched and the results are compared to electronic circuits and to high-power ferromagnetic resonance experiments. Our results show that the control performance can be understood on the basis of experimentally accessible quantities without resort to any model of the internal dynamics.
Connected word recognition using a cascaded neuro-computational model
NASA Astrophysics Data System (ADS)
Hoya, Tetsuya; van Leeuwen, Cees
2016-10-01
We propose a novel framework for processing a continuous speech stream that contains a varying number of words, as well as non-speech periods. Speech samples are segmented into word-tokens and non-speech periods. An augmented version of an earlier-proposed, cascaded neuro-computational model is used for recognising individual words within the stream. Simulation studies using both a multi-speaker-dependent and speaker-independent digit string database show that the proposed method yields a recognition performance comparable to that obtained by a benchmark approach using hidden Markov models with embedded training.
Rogiers, Bart; Mallants, Dirk; Batelaan, Okke; Gedeon, Matej; Huysmans, Marijke; Dassargues, Alain
2017-01-01
Cone penetration testing (CPT) is one of the most efficient and versatile methods currently available for geotechnical, lithostratigraphic and hydrogeological site characterization. Currently available methods for soil behaviour type classification (SBT) of CPT data, however, have severe limitations, often restricting their application to a local scale. For parameterization of regional groundwater flow or geotechnical models, and delineation of regional hydro- or lithostratigraphy, regional SBT classification would be very useful. This paper investigates the use of model-based clustering for SBT classification, and the influence of different clustering approaches on the properties and spatial distribution of the obtained soil classes. We additionally propose a methodology for automated lithostratigraphic mapping of regionally occurring sedimentary units using SBT classification. The methodology is applied to a large CPT dataset, covering a groundwater basin of ~60 km² with predominantly unconsolidated sandy sediments in northern Belgium. Results show that the model-based approach is superior in detecting the true lithological classes when compared to more frequently applied unsupervised classification approaches or literature classification diagrams. We demonstrate that automated mapping of lithostratigraphic units using advanced SBT classification techniques can provide a large gain in efficiency, compared to more time-consuming manual approaches and yields at least equally accurate results. PMID:28467468
Rogiers, Bart; Mallants, Dirk; Batelaan, Okke; Gedeon, Matej; Huysmans, Marijke; Dassargues, Alain
2017-01-01
Cone penetration testing (CPT) is one of the most efficient and versatile methods currently available for geotechnical, lithostratigraphic and hydrogeological site characterization. Currently available methods for soil behaviour type classification (SBT) of CPT data, however, have severe limitations, often restricting their application to a local scale. For parameterization of regional groundwater flow or geotechnical models, and delineation of regional hydro- or lithostratigraphy, regional SBT classification would be very useful. This paper investigates the use of model-based clustering for SBT classification, and the influence of different clustering approaches on the properties and spatial distribution of the obtained soil classes. We additionally propose a methodology for automated lithostratigraphic mapping of regionally occurring sedimentary units using SBT classification. The methodology is applied to a large CPT dataset, covering a groundwater basin of ~60 km² with predominantly unconsolidated sandy sediments in northern Belgium. Results show that the model-based approach is superior in detecting the true lithological classes when compared to more frequently applied unsupervised classification approaches or literature classification diagrams. We demonstrate that automated mapping of lithostratigraphic units using advanced SBT classification techniques can provide a large gain in efficiency, compared to more time-consuming manual approaches and yields at least equally accurate results.
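The "model-based clustering" in the two records above typically means fitting a finite mixture of Gaussians to CPT-derived features. A minimal sketch using scikit-learn; the feature choice (e.g. log cone resistance and friction ratio per depth interval) and class count are illustrative assumptions, not the authors' implementation:

```python
from sklearn.mixture import GaussianMixture

def sbt_clusters(features, n_classes=8, seed=0):
    """Model-based (Gaussian mixture) clustering of CPT-derived features,
    one row per depth interval. n_classes is an assumed class count."""
    gmm = GaussianMixture(n_components=n_classes,
                          covariance_type="full",
                          random_state=seed).fit(features)
    return gmm.predict(features)   # one soil-behaviour class label per sample
```

Unlike k-means, the mixture model assigns each sample a full posterior over classes, which is what lets the papers above compare class properties probabilistically.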
Busse, Harald; Schmitgen, Arno; Trantakis, Christos; Schober, Ralf; Kahn, Thomas; Moche, Michael
2006-07-01
To present an advanced approach for intraoperative image guidance in an open 0.5 T MRI and to evaluate its effectiveness for neurosurgical interventions by comparison with a dynamic scan-guided localization technique. The built-in scan guidance mode relied on successive interactive MRI scans. The additional advanced mode provided real-time navigation based on reformatted high-quality, intraoperatively acquired MR reference data, allowed multimodal image fusion, and used the successive scans of the built-in mode for quick verification of the position only. Analysis involved tumor resections and biopsies in either scan guidance (N = 36) or advanced mode (N = 59) by the same three neurosurgeons. Technical, surgical, and workflow aspects were compared. The image quality and hand-eye coordination of the advanced approach were improved. While the average extent of resection, neurologic outcome after functional MRI (fMRI) integration, and diagnostic yield appeared to be slightly better under advanced guidance, particularly for the main surgeon, statistical analysis revealed no significant differences. Resection times were comparable, while biopsies took around 30 minutes longer. The presented approach is safe and provides more detailed images and higher navigation speed at the expense of actuality. The surgical outcome achieved with advanced guidance is (at least) as good as that obtained with dynamic scan guidance. (c) 2006 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Li, Chenyang; Verma, Prakash; Hannon, Kevin P.; Evangelista, Francesco A.
2017-08-01
We propose an economical state-specific approach to evaluate electronic excitation energies based on the driven similarity renormalization group truncated to second order (DSRG-PT2). Starting from a closed-shell Hartree-Fock wave function, a model space is constructed that includes all single or single and double excitations within a given set of active orbitals. The resulting VCIS-DSRG-PT2 and VCISD-DSRG-PT2 methods are introduced and benchmarked on a set of 28 organic molecules [M. Schreiber et al., J. Chem. Phys. 128, 134110 (2008)]. Taking CC3 results as reference values, mean absolute deviations of 0.32 and 0.22 eV are observed for VCIS-DSRG-PT2 and VCISD-DSRG-PT2 excitation energies, respectively. Overall, VCIS-DSRG-PT2 yields results with accuracy comparable to those from time-dependent density functional theory using the B3LYP functional, while VCISD-DSRG-PT2 gives excitation energies comparable to those from equation-of-motion coupled cluster with singles and doubles.
Organic agriculture in the twenty-first century.
Reganold, John P; Wachter, Jonathan M
2016-02-03
Organic agriculture has a history of being contentious and is considered by some as an inefficient approach to food production. Yet organic foods and beverages are a rapidly growing market segment in the global food industry. Here, we examine the performance of organic farming in light of four key sustainability metrics: productivity, environmental impact, economic viability and social wellbeing. Organic farming systems produce lower yields compared with conventional agriculture. However, they are more profitable and environmentally friendly, and deliver equally or more nutritious foods that contain less (or no) pesticide residues, compared with conventional farming. Moreover, initial evidence indicates that organic agricultural systems deliver greater ecosystem services and social benefits. Although organic agriculture has an untapped role to play when it comes to the establishment of sustainable farming systems, no single approach will safely feed the planet. Rather, a blend of organic and other innovative farming systems is needed. Significant barriers exist to adopting these systems, however, and a diversity of policy instruments will be required to facilitate their development and implementation.
Jandaik, Savita; Singh, Rajender; Sharma, Mamta
2013-01-01
The present study investigated the effects of four forestry byproducts (sawdust of oak, mango, khair, and tuni) and three agricultural residues (paddy straw, wheat straw, and soybean waste) along with four supplements (wheat bran, rice bran, corn flour, and gram powder) on growth characteristics (spawn run and primordial formation) and yield of Ganoderma lucidum. There were significant differences (P=0.05) in yield regardless of substrates and supplements used in experimentation. Among substrates, agriculture residues supported better yield and biological efficiency of G. lucidum compared to forestry byproducts irrespective of the supplements. The highest yield (82.5 g) and biological efficiency (27.5%) were recorded from paddy straw supplemented with wheat bran, which invariably resulted in significantly higher yield compared to the unsupplemented check(s) or other supplements used in this study.
Missing Data in Alcohol Clinical Trials with Binary Outcomes
Hallgren, Kevin A.; Witkiewitz, Katie; Kranzler, Henry R.; Falk, Daniel E.; Litten, Raye Z.; O’Malley, Stephanie S.; Anton, Raymond F.
2017-01-01
Background Missing data are common in alcohol clinical trials for both continuous and binary endpoints. Approaches to handle missing data have been explored for continuous outcomes, yet no studies have compared missing data approaches for binary outcomes (e.g., abstinence, no heavy drinking days). The present study compares approaches to modeling binary outcomes with missing data in the COMBINE study. Method We included participants in the COMBINE Study who had complete drinking data during treatment and who were assigned to active medication or placebo conditions (N=1146). Using simulation methods, missing data were introduced under common scenarios with varying sample sizes and amounts of missing data. Logistic regression was used to estimate the effect of naltrexone (vs. placebo) in predicting any drinking and any heavy drinking outcomes at the end of treatment using four analytic approaches: complete case analysis (CCA), last observation carried forward (LOCF), the worst-case scenario of missing equals any drinking or heavy drinking (WCS), and multiple imputation (MI). In separate analyses, these approaches were compared when drinking data were manually deleted for those participants who discontinued treatment but continued to provide drinking data. Results WCS produced the greatest amount of bias in treatment effect estimates. MI usually yielded less biased estimates than WCS and CCA in the simulated data, and performed considerably better than LOCF when estimating treatment effects among individuals who discontinued treatment. Conclusions Missing data can introduce bias in treatment effect estimates in alcohol clinical trials. Researchers should utilize modern missing data methods, including MI, and avoid WCS and CCA when analyzing binary alcohol clinical trial outcomes. PMID:27254113
Missing Data in Alcohol Clinical Trials with Binary Outcomes.
Hallgren, Kevin A; Witkiewitz, Katie; Kranzler, Henry R; Falk, Daniel E; Litten, Raye Z; O'Malley, Stephanie S; Anton, Raymond F
2016-07-01
Missing data are common in alcohol clinical trials for both continuous and binary end points. Approaches to handle missing data have been explored for continuous outcomes, yet no studies have compared missing data approaches for binary outcomes (e.g., abstinence, no heavy drinking days). This study compares approaches to modeling binary outcomes with missing data in the COMBINE study. We included participants in the COMBINE study who had complete drinking data during treatment and who were assigned to active medication or placebo conditions (N = 1,146). Using simulation methods, missing data were introduced under common scenarios with varying sample sizes and amounts of missing data. Logistic regression was used to estimate the effect of naltrexone (vs. placebo) in predicting any drinking and any heavy drinking outcomes at the end of treatment using 4 analytic approaches: complete case analysis (CCA), last observation carried forward (LOCF), the worst case scenario (WCS) of missing equals any drinking or heavy drinking, and multiple imputation (MI). In separate analyses, these approaches were compared when drinking data were manually deleted for those participants who discontinued treatment but continued to provide drinking data. WCS produced the greatest amount of bias in treatment effect estimates. MI usually yielded less biased estimates than WCS and CCA in the simulated data and performed considerably better than LOCF when estimating treatment effects among individuals who discontinued treatment. Missing data can introduce bias in treatment effect estimates in alcohol clinical trials. Researchers should utilize modern missing data methods, including MI, and avoid WCS and CCA when analyzing binary alcohol clinical trial outcomes. Copyright © 2016 by the Research Society on Alcoholism.
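The competing analytic approaches in the two COMBINE records (CCA, WCS, LOCF) are easy to make concrete with a toy simulation. The sketch below fabricates a binary heavy-drinking panel with monotone dropout purely for illustration; the sample size, drinking probability, and dropout fraction are assumptions, not COMBINE values, and MI is omitted because it needs a full imputation model:

```python
import numpy as np

rng = np.random.default_rng(0)
n, weeks = 200, 12
# Simulated weekly indicators of any heavy drinking (1 = yes)
data = (rng.random((n, weeks)) < 0.3).astype(float)
# Dropout: 25% of subjects stop providing data from a random week onward
for i in rng.choice(n, n // 4, replace=False):
    data[i, rng.integers(1, weeks):] = np.nan

last = data[:, -1]
cca = np.nanmean(last)                              # complete case analysis
wcs = np.mean(np.where(np.isnan(last), 1.0, last))  # worst case: missing = drinking

locf = last.copy()                                  # last observation carried forward
for i in range(n):
    if np.isnan(locf[i]):
        observed = data[i, ~np.isnan(data[i])]
        locf[i] = observed[-1]

print(f"CCA {cca:.3f}  WCS {wcs:.3f}  LOCF {np.mean(locf):.3f}")
```

Even in this toy setup, WCS inflates the end-of-treatment drinking rate relative to CCA and LOCF, mirroring the bias ordering both abstracts report.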
Seiler, Stefanie; Di Santo, Stefano; Sahli, Sebastian; Andereggen, Lukas; Widmer, Hans Rudolf
2017-08-01
Cell transplantation using ventral mesencephalic tissue is an experimental approach to treat Parkinson's disease. This approach is limited by poor survival of the transplants and the high number of dopaminergic neurons needed for grafting. Increasing the yield of dopaminergic neurons in donor tissue is therefore of great importance. We have previously shown that antagonization of the Nogo-receptor 1 by NEP1-40 promoted survival of cultured dopaminergic neurons and that exposure to neurotrophin-4/5 increased dopaminergic cell densities in organotypic midbrain cultures. We investigated whether a combination of both treatments offers a novel tool to further improve dopaminergic neuron survival. Rat embryonic ventral mesencephalic neurons grown as organotypic free-floating roller tube or primary dissociated cultures were exposed to neurotrophin-4/5 and NEP1-40. The combined and single factor treatments resulted in significantly higher numbers of tyrosine hydroxylase positive neurons compared to controls. Significantly stronger tyrosine hydroxylase signal intensity was detected by Western blotting in the combination-treated cultures compared to controls, but not compared to the single factor treatments. Neurotrophin-4/5 and the combined treatment showed significantly higher signals for the neuronal marker microtubule-associated protein 2 in Western blots compared to control, while no effects were observed for the astroglial marker glial fibrillary acidic protein between groups, suggesting that neurotrophin-4/5 targets mainly neuronal cells. Finally, NEP1-40 and the combined treatment significantly augmented tyrosine hydroxylase positive neurite length. In summary, our findings substantiate that antagonization of the Nogo-receptor 1 promotes dopaminergic neuron survival but does not further increase the yield of dopaminergic neurons or their morphological complexity when combined with neurotrophin-4/5, hinting at the idea that these treatments might exert their effects by activating common downstream pathways. Copyright © 2017 Elsevier B.V. All rights reserved.
Wang, J; Hao, Z; Wang, H
2018-01-01
The human brain can be characterized as functional networks. Therefore, it is important to subdivide the brain appropriately in order to construct reliable networks. Resting-state functional connectivity-based parcellation is a commonly used technique to fulfill this goal. Here we propose a novel individual subject-level parcellation approach based on whole-brain resting-state functional magnetic resonance imaging (fMRI) data. We first used a supervoxel method known as simple linear iterative clustering directly on resting-state fMRI time series to generate supervoxels, and then combined similar supervoxels to generate clusters using a clustering method known as graph-without-cut (GWC). The GWC approach incorporates spatial information and multiple features of the supervoxels by energy minimization, simultaneously yielding an optimal graph and brain parcellation. Meanwhile, it theoretically guarantees that the actual cluster number is exactly equal to the initialized cluster number. By comparing the results of the GWC approach and those of the random GWC approach, we demonstrated that GWC does not rely heavily on spatial structures, thus avoiding the challenges encountered in some previous whole-brain parcellation approaches. In addition, by comparing the GWC approach to two competing approaches, we showed that GWC achieved better parcellation performances in terms of different evaluation metrics. The proposed approach can be used to generate individualized brain atlases for applications related to cognition, development, aging, disease, personalized medicine, etc. The major source codes of this study have been made publicly available at https://github.com/yuzhounh/GWC.
Holmes, William J; Darby, Richard AJ; Wilks, Martin DB; Smith, Rodney; Bill, Roslyn M
2009-01-01
Background The optimisation and scale-up of process conditions leading to high yields of recombinant proteins is an enduring bottleneck in the post-genomic sciences. Typical experiments rely on varying selected parameters through repeated rounds of trial-and-error optimisation. To rationalise this, several groups have recently adopted the 'design of experiments' (DoE) approach frequently used in industry. Studies have focused on parameters such as medium composition, nutrient feed rates and induction of expression in shake flasks or bioreactors, as well as oxygen transfer rates in micro-well plates. In this study we wanted to generate a predictive model that described small-scale screens and to test its scalability to bioreactors. Results Here we demonstrate how the use of a DoE approach in a multi-well mini-bioreactor permitted the rapid establishment of high yielding production phase conditions that could be transferred to a 7 L bioreactor. Using green fluorescent protein secreted from Pichia pastoris, we derived a predictive model of protein yield as a function of the three most commonly-varied process parameters: temperature, pH and the percentage of dissolved oxygen in the culture medium. Importantly, when yield was normalised to culture volume and density, the model was scalable from mL to L working volumes. By increasing pre-induction biomass accumulation, model-predicted yields were further improved. Yield improvement was most significant, however, on varying the fed-batch induction regime to minimise methanol accumulation so that the productivity of the culture increased throughout the whole induction period. These findings suggest the importance of matching the rate of protein production with the host metabolism. Conclusion We demonstrate how a rational, stepwise approach to recombinant protein production screens can reduce process development time. PMID:19570229
Proceedings of the 1974 Lyndon B. Johnson Space Center Wheat-Yield Conference
NASA Technical Reports Server (NTRS)
Pitts, D. E.; Barger, G. L.
1975-01-01
The proceedings of the 1974 Lyndon B. Johnson Space Center Wheat-Yield Conference are presented. The state of the art of wheat-yield forecasting and the feasibility of incorporating remote sensing into this forecasting were discussed, with emphasis on formulating a common approach to wheat-yield forecasting, primarily using conventional meteorological measurements, which can later include the various applications of remote sensing. Papers are presented which deal with developments in the field of crop modelling.
Ultrasound assisted intensification of biodiesel production using enzymatic interesterification.
Subhedar, Preeti B; Gogate, Parag R
2016-03-01
Ultrasound assisted intensification of synthesis of biodiesel from waste cooking oil using methyl acetate and immobilized lipase obtained from Thermomyces lanuginosus (Lipozyme TLIM) as a catalyst has been investigated in the present work. The reaction has also been investigated using the conventional approach based on stirring so as to establish the beneficial effects obtained due to the use of ultrasound. Effect of operating conditions such as reactant molar ratio (oil and methyl acetate), temperature and enzyme loading on the yield of biodiesel has been investigated. Optimum conditions for the conventional approach (without ultrasound) were established as reactant molar ratio of 1:12 (oil:methyl acetate), enzyme loading of 6% (w/v), temperature of 40 °C and reaction time of 24 h and under these conditions, 90.1% biodiesel yield was obtained. The optimum conditions for the ultrasound assisted approach were oil to methyl acetate molar ratio of 1:9, enzyme loading of 3% (w/v), and reaction time of 3 h and the biodiesel yield obtained under these conditions was 96.1%. Use of ultrasound resulted in significant reduction in the reaction time with higher yields and lower requirement of the enzyme loading. The obtained results have clearly established that ultrasound assisted interesterification was a fast and efficient approach for biodiesel production giving significant benefits, which can help in reducing the costs of production. Reusability studies for the enzyme were also performed but it was observed that reuse of the catalyst under the optimum experimental condition resulted in reduced enzyme activity and biodiesel yield. Copyright © 2015 Elsevier B.V. All rights reserved.
Cao, Hongliang; Xin, Ya; Yuan, Qiaoxia
2016-02-01
To conveniently predict the biochar yield from cattle manure pyrolysis, an intelligent modeling approach was introduced in this research. A traditional artificial neural network (ANN) model and a novel least squares support vector machine (LS-SVM) model were developed. For the identification and prediction evaluation of the models, a data set of 33 experimental points was used, obtained using a laboratory-scale fixed bed reaction system. The results demonstrated that the intelligent modeling approach is highly convenient and effective for predicting the biochar yield. In particular, the novel LS-SVM model has more satisfactory predictive performance and better robustness than the traditional ANN model. The introduction and application of the LS-SVM modeling method provides a successful example and a useful reference for modeling the cattle manure pyrolysis process and other similar processes. Copyright © 2015 Elsevier Ltd. All rights reserved.
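Part of what makes LS-SVM convenient, as the abstract suggests, is that training reduces to solving a single linear system rather than a quadratic program. A minimal sketch of that closed-form fit; the RBF kernel and the hyperparameter values are illustrative assumptions, not the settings used in the study:

```python
import numpy as np

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Fit an LS-SVM regressor by solving its KKT linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y] with an RBF kernel."""
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))            # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                        # bias b and dual weights alpha

def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    """Predict with the fitted dual weights: f(x) = sum_i alpha_i k(x, x_i) + b."""
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)) @ alpha + b
```

With only 33 training points, as in the study, the (n+1) x (n+1) system is tiny, which is exactly the regime where LS-SVM is attractive relative to training an ANN.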
Chenu, Karine; Chapman, Scott C; Tardieu, François; McLean, Greg; Welcker, Claude; Hammer, Graeme L
2009-12-01
Under drought, substantial genotype-environment (G x E) interactions impede breeding progress for yield. Identifying genetic controls associated with yield response is confounded by poor genetic correlations across testing environments. Part of this problem is related to our inability to account for the interplay of genetic controls, physiological traits, and environmental conditions throughout the crop cycle. We propose a modeling approach to bridge this "gene-to-phenotype" gap. For maize under drought, we simulated the impact of quantitative trait loci (QTL) controlling two key processes (leaf and silk elongation) that influence crop growth, water use, and grain yield. Substantial G x E interaction for yield was simulated for hypothetical recombinant inbred lines (RILs) across different seasonal patterns of drought. QTL that accelerated leaf elongation caused an increase in crop leaf area and yield in well-watered or preflowering water deficit conditions, but a reduction in yield under terminal stresses (as such "leafy" genotypes prematurely exhausted the water supply). The QTL impact on yield was substantially enhanced by including pleiotropic effects of these QTL on silk elongation and on consequent grain set. The simulations obtained illustrated the difficulty of interpreting the genetic control of yield for genotypes influenced only by the additive effects of QTL associated with leaf and silk growth. The results highlight the potential of integrative simulation modeling for gene-to-phenotype prediction and for exploiting G x E interactions for complex traits such as drought tolerance.
A Statistical Approach for Testing Cross-Phenotype Effects of Rare Variants
Broadaway, K. Alaine; Cutler, David J.; Duncan, Richard; Moore, Jacob L.; Ware, Erin B.; Jhun, Min A.; Bielak, Lawrence F.; Zhao, Wei; Smith, Jennifer A.; Peyser, Patricia A.; Kardia, Sharon L.R.; Ghosh, Debashis; Epstein, Michael P.
2016-01-01
Increasing empirical evidence suggests that many genetic variants influence multiple distinct phenotypes. When cross-phenotype effects exist, multivariate association methods that consider pleiotropy are often more powerful than univariate methods that model each phenotype separately. Although several statistical approaches exist for testing cross-phenotype effects for common variants, there is a lack of similar tests for gene-based analysis of rare variants. In order to fill this important gap, we introduce a statistical method for cross-phenotype analysis of rare variants using a nonparametric distance-covariance approach that compares similarity in multivariate phenotypes to similarity in rare-variant genotypes across a gene. The approach can accommodate both binary and continuous phenotypes and further can adjust for covariates. Our approach yields a closed-form test whose significance can be evaluated analytically, thereby improving computational efficiency and permitting application on a genome-wide scale. We use simulated data to demonstrate that our method, which we refer to as the Gene Association with Multiple Traits (GAMuT) test, provides increased power over competing approaches. We also illustrate our approach using exome-chip data from the Genetic Epidemiology Network of Arteriopathy. PMID:26942286
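The core of the GAMuT idea described above is comparing pairwise similarity in multivariate phenotypes with pairwise similarity in rare-variant genotypes. A toy statistic in that spirit is sketched below; the linear-kernel similarity matrices and the trace-based form are simplifying assumptions, and the published test's analytic p-value machinery is omitted:

```python
import numpy as np

def gamut_like_statistic(P, G):
    """Toy cross-phenotype association statistic: similarity in multivariate
    phenotypes P (n subjects x q traits) versus similarity in rare-variant
    genotypes G (n subjects x v variants), via doubly centered linear kernels."""
    n = P.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    Kp = H @ (P @ P.T) @ H                   # phenotype similarity
    Kg = H @ (G @ G.T) @ H                   # genotype similarity
    return np.trace(Kp @ Kg) / n             # larger => stronger association
```

Because the statistic only touches n x n similarity matrices, it naturally accommodates mixed binary and continuous traits, which is one of the properties the abstract emphasizes.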
Martin, Alex D; Siamaki, Ali R; Belecki, Katherine; Gupton, B Frank
2015-02-06
A direct and efficient total synthesis has been developed for telmisartan, a widely prescribed treatment for hypertension. This approach brings together two functionalized benzimidazoles using a high-yielding Suzuki reaction that can be catalyzed by either a homogeneous palladium source or graphene-supported palladium nanoparticles. The ability to perform the cross-coupling reaction was facilitated by the regio-controlled preparation of the 2-bromo-1-methylbenzimidazole precursor. This convergent approach provides telmisartan in an overall yield of 72% while circumventing many issues associated with previously reported processes.
Edelson, Dana P.; Eilevstjønn, Joar; Weidman, Elizabeth K.; Retzer, Elizabeth; Vanden Hoek, Terry L.; Abella, Benjamin S.
2009-01-01
Objective Hyperventilation is both common and detrimental during cardiopulmonary resuscitation (CPR). Chest wall impedance algorithms have been developed to detect ventilations during CPR. However, impedance signals are challenged by noise artifact from multiple sources, including chest compressions. Capnography has been proposed as an alternate method to measure ventilations. We sought to assess and compare the adequacy of these two approaches. Methods Continuous chest wall impedance and capnography were recorded during consecutive in-hospital cardiac arrests. Algorithms utilizing each of these data sources were compared to a manually determined “gold standard” reference ventilation rate. In addition, a combination algorithm, which utilized the highest of the impedance or capnography values in any given minute, was similarly evaluated. Results Data were collected from 37 cardiac arrests, yielding 438 min of data with continuous chest compressions and concurrent recording of impedance and capnography. The manually calculated mean ventilation rate was 13.3±4.3/min. In comparison, the defibrillator’s impedance-based algorithm yielded an average rate of 11.3±4.4/min (p=0.0001) while the capnography rate was 11.7±3.7/min (p=0.0009). There was no significant difference in sensitivity and positive predictive value between the two methods. The combination algorithm rate was 12.4±3.5/min (p=0.02), which yielded the highest fraction of minutes with respiratory rates within 2/min of the reference. The impedance signal was uninterpretable 19.5% of the time, compared with 9.7% for capnography. However, the signals were only simultaneously non-interpretable 0.8% of the time. Conclusions Both the impedance and capnography-based algorithms underestimated the ventilation rate. Reliable ventilation rate determination may require a novel combination of multiple algorithms during resuscitation. PMID:20036047
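The combination algorithm in the record above is simple to state: for each minute, take the higher of the two rate estimates, falling back to whichever signal is interpretable. A hedged sketch follows; representing a non-interpretable minute as None is an assumption of this illustration, not a detail from the paper:

```python
def combined_rate(impedance_rates, capnography_rates):
    """Per-minute combination rule: use the higher of the impedance- and
    capnography-derived ventilation rates; None marks a minute in which a
    signal was not interpretable."""
    combined = []
    for imp, cap in zip(impedance_rates, capnography_rates):
        candidates = [r for r in (imp, cap) if r is not None]
        combined.append(max(candidates) if candidates else None)
    return combined
```

Taking the maximum counters the systematic underestimation both single-signal algorithms showed, and the paper's observation that the signals were simultaneously non-interpretable only 0.8% of the time explains why the fallback rarely fails.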
Cianchetta, Stefano; Bregoli, Luca; Galletti, Stefania
2017-11-01
Giant reed, miscanthus, and switchgrass are considered prominent lignocellulosic feedstocks to obtain fermentable sugars for biofuel production. The bioconversion into sugars requires a delignifying pre-treatment step followed by hydrolysis with cellulase and other accessory enzymes like xylanase, especially in the case of alkali pre-treatments, which retain the hemicellulose fraction. Blends richer in accessory enzymes than commercial mix can be obtained growing fungi on feedstock-based substrates, thus ten selected Trichoderma isolates, including the hypercellulolytic strain Trichoderma reesei Rut-C30, were grown on giant reed, miscanthus, or switchgrass-based substrates. The produced enzymes were used to saccharify the corresponding feedstocks, compared to a commercial enzymatic mix (6 FPU/g). Feedstocks were acid (H₂SO₄, 0.2-2% w/v) or alkali (NaOH, 0.02-0.2% w/v) pre-treated. A microplate-based approach was chosen for most of the experimental steps due to the large number of samples. The highest bioconversion was generally obtained with Trichoderma harzianum Or4/99 enzymes (78, 89, and 94% final sugar yields at 48 h for giant reed, miscanthus, and switchgrass, respectively), with significant increases compared to the commercial mix, especially with alkaline pre-treatments. The differences in bioconversion yields were only partially caused by xylanases (maximum R² = 0.5), indicating a role for other accessory enzymes.
Soil Moisture as an Estimator for Crop Yield in Germany
NASA Astrophysics Data System (ADS)
Peichl, Michael; Meyer, Volker; Samaniego, Luis; Thober, Stephan
2015-04-01
Annual crop yield depends on various factors such as soil properties, management decisions, and meteorological conditions. Unfavorable weather conditions, e.g. droughts, have the potential to drastically diminish crop yield in rain-fed agriculture. For example, the drought in 2003 caused direct losses of 1.5 billion EUR in Germany alone. Predicting crop yields allows negative effects of weather extremes to be mitigated, extremes which are assumed to occur more often in the future due to climate change. A standard approach in economics is to predict the impact of climate change on agriculture as a function of temperature and precipitation. This approach has been developed further using concepts like growing degree days. Other econometric models use nonlinear functions of heat or vapor pressure deficit. However, none of these approaches uses soil moisture to predict crop yield. We hypothesize that soil moisture is a better indicator of stress on plant growth than estimates based on precipitation and temperature. This is the case because the latter variables do not explicitly account for the available water content in the root zone, which is the primary source of water supply for plant growth. In this study, a reduced-form panel approach is applied to estimate a multivariate econometric production function for the years 1999 to 2010. Annual crop yield data of various crops at the administrative district level serve as the dependent variables. The explanatory variable of major interest is the Soil Moisture Index (SMI), which quantifies anomalies in root zone soil moisture. The SMI is computed by the mesoscale Hydrological Model (mHM, www.ufz.de/mhm). The index represents the monthly soil water quantile at a 4 km² grid resolution covering all of Germany. A reduced-form model is suitable because the SMI is the result of a stochastic weather process and therefore can be considered exogenous. For ease of interpretation, a linear functional form is preferred. Meteorological, phenological, geological, agronomic, and socio-economic variables are also considered to extend the model in order to reveal the proper causal relation. First results show that dry as well as wet extremes of SMI have a negative impact on crop yield for winter wheat. This indicates that soil moisture has at least a limiting effect on crop production.
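The reduced-form panel regression described above can be sketched in a few lines. A quadratic-in-SMI specification with district fixed effects is one plausible reading of the setup (it lets both dry and wet extremes depress yield, as the first results indicate); variable names and the exact functional form are assumptions of this illustration:

```python
import numpy as np

def fe_quadratic_fit(smi, yld, district):
    """Panel regression of annual crop yield on SMI and SMI^2 with district
    fixed effects (no global intercept, so all district dummies enter).
    A negative SMI^2 coefficient gives the inverted-U moisture response."""
    smi, yld, district = map(np.asarray, (smi, yld, district))
    districts = np.unique(district)
    D = (district[:, None] == districts[None, :]).astype(float)  # dummies
    X = np.column_stack([smi, smi ** 2, D])
    beta, *_ = np.linalg.lstsq(X, yld, rcond=None)
    return beta[0], beta[1]            # coefficients on SMI and SMI^2
```

The fixed effects absorb time-invariant district characteristics such as soil properties, so the SMI coefficients are identified from within-district variation, which matches the exogeneity argument in the abstract.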
Validation of the Unthinned Loblolly Pine Plantation Yield Model-USLYCOWG
V. Clark Baldwin; D.P. Feduccia
1982-01-01
Yield and stand structure predictions from an unthinned loblolly pine plantation yield prediction system (USLYCOWG computer program) were compared with observations from 80 unthinned loblolly pine plots. Overall, the predicted estimates were reasonable when compared to observed values, but predictions based on input data at or near the system's limits may be in...
Thomas-Porch, Caasy; Li, Jie; Zanata, Fabiana; Martin, Elizabeth C; Pashos, Nicholas; Genemaras, Kaylynn; Poche, J Nicholas; Totaro, Nicholas P; Bratton, Melyssa R; Gaupp, Dina; Frazier, Trivia; Wu, Xiying; Ferreira, Lydia Masako; Tian, Weidong; Wang, Guangdi; Bunnell, Bruce A; Flynn, Lauren; Hayes, Daniel; Gimble, Jeffrey M
2018-04-25
Decellularized human adipose tissue has potential clinical utility as a processed biological scaffold for soft tissue cosmesis, grafting and reconstruction. Adipose tissue decellularization has been accomplished using enzymatic-, detergent-, and/or solvent-based methods. To examine the hypothesis that distinct decellularization processes may yield scaffolds with differing compositions, the current study employed mass spectrometry to compare the proteomes of human adipose-derived matrices generated through three independent methods combining enzymatic-, detergent-, and/or solvent-based steps. In addition to protein content, bioscaffolds were evaluated for DNA depletion, ECM composition, and physical structure using optical density, histochemical staining, and scanning electron microscopy (SEM). Mass spectrometry (MS) based proteomic analyses identified 25 proteins (having at least two peptide sequences detected) in the scaffolds generated with an enzymatic approach, 143 with the detergent approach, and 102 with the solvent approach, as compared to 155 detected in unprocessed native human fat. Immunohistochemical detection confirmed the presence of the structural proteins actin, collagen type VI, fibrillin, laminin, and vimentin. Subsequent in vivo analysis of the predominantly enzymatic- and detergent-based decellularized scaffolds following subcutaneous implantation in GFP+ transgenic mice demonstrated that the matrices generated with both approaches supported the ingrowth of host-derived adipocyte progenitors and vasculature in a time-dependent manner. Together, these results determine that decellularization methods influence the protein composition of adipose tissue-derived bioscaffolds. This article is protected by copyright. All rights reserved. © 2018 Wiley Periodicals, Inc.
A Dynamic Resilience Approach for WDM Optical Networks
NASA Astrophysics Data System (ADS)
Garg, Amit Kumar
2017-12-01
Optical fibres have been developed as a transmission medium to carry traffic in order to provide various services on telecommunications platforms. Failure of such a fibre causes loss of data, which can interrupt communication services. This paper focuses on survivability schemes that guarantee both protection and restoration in WDM optical networks. A dynamic resilience approach is proposed whose objective is to route flows in a way that minimizes the total amount of bandwidth used for working and protection paths. In the proposed approach, path-based protection is utilized because it yields lower overhead and is also suitable for global optimization: in case of a single link failure, all the flows utilizing the failed link are re-routed to a pre-computed set of paths. The simulation results demonstrate that the proposed approach is much more efficient, as it provides better quality of service (QoS) in terms of network resource utilization, blocking probability, etc., compared to conventional protection and restoration schemes. The proposed approach seems to offer an attractive combination of features, with both ring-like speed and mesh-like efficiency.
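Path-based protection of the kind described pre-computes, for each flow, a working path and a link-disjoint backup path, so a single link failure can be survived by switching the whole flow. A minimal sketch using networkx, chosen here purely for illustration (the paper does not specify an implementation), assuming edge weights encode bandwidth cost and that a disjoint path exists:

```python
import networkx as nx

def working_and_backup(G, src, dst):
    """Pre-compute a working path and a link-disjoint backup path for one
    flow. Raises NetworkXNoPath if no disjoint backup exists."""
    work = nx.shortest_path(G, src, dst, weight="weight")
    H = G.copy()
    H.remove_edges_from(zip(work, work[1:]))   # exclude working-path links
    backup = nx.shortest_path(H, src, dst, weight="weight")
    return work, backup
```

Because the backup is computed offline, failover is ring-like in speed, while sharing backup capacity across flows gives the mesh-like bandwidth efficiency the abstract claims.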
NASA Astrophysics Data System (ADS)
Serrat-Capdevila, A.; Valdes, J. B.
2005-12-01
An optimization approach for the operation of international multi-reservoir systems is presented. The approach uses Stochastic Dynamic Programming (SDP) algorithms, both steady-state and real-time, to develop two models. In the first model, the reservoirs and flows of the system are aggregated to yield an equivalent reservoir, and the obtained operating policies are disaggregated using a non-linear optimization procedure for each reservoir and for each nation's water balance. In the second model, a multi-reservoir approach is applied, disaggregating the releases for each country's water share in each reservoir. The non-linear disaggregation algorithm uses SDP-derived operating policies as boundary conditions for a local time-step optimization. Finally, the performance of the different approaches and methods is compared. These models are applied to the Amistad-Falcon International Reservoir System as part of a binational dynamic modeling effort to develop a decision support system tool for better management of the water resources in the Lower Rio Grande Basin, currently enduring a severe drought.
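SDP of the kind used above computes release policies by backward recursion over discretized storage states and a discrete inflow distribution. The sketch below is a generic textbook formulation under assumed discretizations, not the Amistad-Falcon model itself; the benefit function and state grids are placeholders:

```python
import numpy as np

def sdp_policy(storages, releases, inflows, probs, benefit, T=12):
    """Backward stochastic dynamic programming over a sorted grid of storage
    states: for each month and state, pick the release maximizing expected
    benefit-to-go over the inflow distribution (inflows with probs)."""
    storages = np.asarray(storages, dtype=float)
    V = np.zeros(len(storages))                       # terminal value function
    policy = np.zeros((T, len(storages)), dtype=int)  # release index per state
    for t in reversed(range(T)):
        V_new = np.full(len(storages), -np.inf)
        for i, s in enumerate(storages):
            for k, r in enumerate(releases):
                val = 0.0
                for q, p in zip(inflows, probs):
                    s_next = np.clip(s + q - r, storages[0], storages[-1])
                    j = int(np.abs(storages - s_next).argmin())  # nearest state
                    val += p * (benefit(r) + V[j])
                if val > V_new[i]:
                    V_new[i], policy[t, i] = val, k
        V = V_new
    return policy
```

Iterating the recursion until the policy stops changing gives the steady-state policy; seeding it with the current state and forecast inflows gives the real-time variant mentioned in the abstract.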
Imaging genetics approach to predict progression of Parkinson's diseases.
Mansu Kim; Seong-Jin Son; Hyunjin Park
2017-07-01
Imaging genetics is a tool to extract genetic variants associated with both clinical phenotypes and imaging information. The approach can extract additional genetic variants compared to conventional approaches to better investigate various disease conditions. Here, we applied imaging genetics to study Parkinson's disease (PD). We aimed to extract significant features derived from imaging genetics and neuroimaging. We built a regression model based on extracted significant features combining genetics and neuroimaging to better predict clinical scores of PD progression (i.e., MDS-UPDRS). Our model yielded a high correlation (r = 0.697, p < 0.001) and low root mean squared error (8.36) between predicted and actual MDS-UPDRS scores. Neuroimaging (123I-Ioflupane SPECT) predictors of the regression model were computed using an independent component analysis approach. Genetic features were computed using an imaging genetics approach based on the identified neuroimaging features as intermediate phenotypes. Joint modeling of neuroimaging and genetics could provide complementary information and thus has the potential to provide further insight into the pathophysiology of PD. Our model included newly found neuroimaging features and genetic variants which need further investigation.
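The two figures of merit quoted above (Pearson correlation and RMSE between predicted and observed MDS-UPDRS scores) are straightforward to compute. A small helper, assuming matched NumPy arrays of scores; this is an evaluation sketch, not part of the authors' pipeline:

```python
import numpy as np

def evaluate_predictions(y_true, y_pred):
    """Return the Pearson correlation r and the root mean squared error
    between predicted and actual clinical scores."""
    r = np.corrcoef(y_true, y_pred)[0, 1]
    rmse = np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))
    return r, rmse
```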
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matzke, Melissa M.; Brown, Joseph N.; Gritsenko, Marina A.
2013-02-01
Liquid chromatography coupled with mass spectrometry (LC-MS) is widely used to identify and quantify peptides in complex biological samples. In particular, label-free shotgun proteomics is highly effective for the identification of peptides and subsequently obtaining a global protein profile of a sample. As a result, this approach is widely used for discovery studies. Typically, the objective of these discovery studies is to identify proteins that are affected by some condition of interest (e.g. disease, exposure). However, for complex biological samples, label-free LC-MS proteomics experiments measure peptides and do not directly yield protein quantities. Thus, protein quantification must be inferred from one or more measured peptides. In recent years, many computational approaches to relative protein quantification of label-free LC-MS data have been published. In this review, we examine the most commonly employed quantification approaches to relative protein abundance from peak intensity values, evaluate their individual merits, and discuss challenges in the use of the various computational approaches.
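A common family of the quantification approaches such a review surveys rolls peptide peak intensities up to protein-level abundances. A minimal sketch of a median roll-up, one simple member of that family; the dictionary-based data layout is an assumption of this illustration:

```python
import numpy as np

def rollup_protein(peptide_intensities, peptide_to_protein):
    """Infer relative protein abundance from peptide peak intensities by a
    median roll-up: group each peptide's intensity under its parent protein
    and summarize with the median (robust to outlier peptides)."""
    grouped = {}
    for pep, inten in peptide_intensities.items():
        grouped.setdefault(peptide_to_protein[pep], []).append(inten)
    return {prot: float(np.median(v)) for prot, v in grouped.items()}
```

Alternatives in the same family replace the median with a sum, a top-N mean, or a reference-based scaling, which is exactly the kind of design choice whose merits the review weighs.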
Decomposition of heterogeneous organic matter and its long-term stabilization in soils
Sierra, Carlos A.; Harmon, Mark E.; Perakis, Steven S.
2011-01-01
Soil organic matter is a complex mixture of material with heterogeneous biological, physical, and chemical properties. Decomposition models represent this heterogeneity either as a set of discrete pools with different residence times or as a continuum of qualities. It is unclear, though, whether these two different approaches yield comparable predictions of organic matter dynamics. Here, we compare predictions from these two approaches and propose an intermediate approach to studying organic matter decomposition, based on concepts from continuous models implemented numerically. We found that the disagreement between discrete and continuous approaches can be considerable, depending on the degree of nonlinearity of the model and the simulation time. The two approaches can diverge substantially in predicting long-term processes in soils. Based on our alternative approach, which is a modification of the continuous quality theory, we explored the temporal patterns that emerge by treating substrate heterogeneity explicitly. The analysis suggests that the pattern of carbon mineralization over time is highly dependent on the degree and form of nonlinearity in the model, mostly expressed as differences in microbial growth and efficiency for different substrates. Moreover, short-term stabilization and destabilization mechanisms operating simultaneously result in long-term accumulation of carbon characterized by low decomposition rates, independent of the characteristics of the incoming litter. We show that representing heterogeneity in the decomposition process can lead to substantial improvements in our understanding of carbon mineralization and its long-term stability in soils.
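A toy comparison of the two representations, with illustrative parameter values: a discrete two-pool model is a sum of exponentials, while a continuum of first-order rates drawn from a gamma distribution integrates to a power-law-like survival curve, and the two diverge at long times:

```python
import numpy as np

t = np.linspace(0.0, 100.0, 201)           # time (years)

# Discrete approach: two pools with fixed fractions and decay rates.
f, k_fast, k_slow = 0.7, 0.5, 0.01         # illustrative values
C_discrete = f * np.exp(-k_fast * t) + (1 - f) * np.exp(-k_slow * t)

# Continuum approach: decay rates gamma-distributed over substrate quality;
# integrating exp(-k t) over Gamma(alpha, rate lam) gives (1 + t/lam)^(-alpha).
alpha, lam = 0.5, 5.0                      # hypothetical quality distribution
C_continuum = (1.0 + t / lam) ** (-alpha)

# The parameterizations can agree early on yet diverge at long times,
# echoing the point about long-term predictions made above.
for ti in (1, 10, 100):
    i = np.argmin(np.abs(t - ti))
    print(f"t={ti:>3} y: discrete={C_discrete[i]:.3f}  continuum={C_continuum[i]:.3f}")
```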
Krawitz, Peter M; Schiska, Daniela; Krüger, Ulrike; Appelt, Sandra; Heinrich, Verena; Parkhomchuk, Dmitri; Timmermann, Bernd; Millan, Jose M; Robinson, Peter N; Mundlos, Stefan; Hecht, Jochen; Gross, Manfred
2014-01-01
Usher syndrome is an autosomal recessive disorder characterized both by deafness and blindness. For the three clinical subtypes of Usher syndrome causal mutations in altogether 12 genes and a modifier gene have been identified. Due to the genetic heterogeneity of Usher syndrome, the molecular analysis is predestined for a comprehensive and parallelized analysis of all known genes by next-generation sequencing (NGS) approaches. We describe here the targeted enrichment and deep sequencing for exons of Usher genes and compare the costs and workload of this approach compared to Sanger sequencing. We also present a bioinformatics analysis pipeline that allows us to detect single-nucleotide variants, short insertions and deletions, as well as copy number variations of one or more exons on the same sequence data. Additionally, we present a flexible in silico gene panel for the analysis of sequence variants, in which newly identified genes can easily be included. We applied this approach to a cohort of 44 Usher patients and detected biallelic pathogenic mutations in 35 individuals and monoallelic mutations in eight individuals of our cohort. Thirty-nine of the sequence variants, including two heterozygous deletions comprising several exons of USH2A, have not been reported so far. Our NGS-based approach allowed us to assess single-nucleotide variants, small indels, and whole exon deletions in a single test. The described diagnostic approach is fast and cost-effective with a high molecular diagnostic yield. PMID:25333064
Culture-Independent Analysis of Probiotic Products by Denaturing Gradient Gel Electrophoresis
Temmerman, R.; Scheirlinck, I.; Huys, G.; Swings, J.
2003-01-01
In order to obtain functional and safe probiotic products for human consumption, fast and reliable quality control of these products is crucial. Currently, analysis of most probiotics is still based on culture-dependent methods involving the use of specific isolation media and identification of a limited number of isolates, which makes this approach relatively insensitive, laborious, and time-consuming. In this study, a collection of 10 probiotic products, including four dairy products, one fruit drink, and five freeze-dried products, were subjected to microbial analysis by using a culture-independent approach, and the results were compared with the results of a conventional culture-dependent analysis. The culture-independent approach involved extraction of total bacterial DNA directly from the product, PCR amplification of the V3 region of the 16S ribosomal DNA, and separation of the amplicons on a denaturing gradient gel. Digital capturing and processing of denaturing gradient gel electrophoresis (DGGE) band patterns allowed direct identification of the amplicons at the species level. This whole culture-independent approach can be performed in less than 30 h. Compared with culture-dependent analysis, the DGGE approach was found to have a much higher sensitivity for detection of microbial strains in probiotic products in a fast, reliable, and reproducible manner. Unfortunately, as reported in previous studies in which the culture-dependent approach was used, a rather high percentage of probiotic products suffered from incorrect labeling and yielded low bacterial counts, which may decrease their probiotic potential. PMID:12513998
Assefa S. Desta
2006-01-01
A stochastic precipitation-runoff model is used to estimate cold- and warm-season water yields from a ponderosa pine forested watershed in north-central Arizona. The model consists of two parts: simulation of the temporal and spatial distribution of precipitation using a stochastic, event-based approach, and estimation of water yield from the watershed...
Accelerating yield ramp through design and manufacturing collaboration
NASA Astrophysics Data System (ADS)
Sarma, Robin C.; Dai, Huixiong; Smayling, Michael C.; Duane, Michael P.
2004-12-01
Ramping an integrated circuit from first silicon bring-up to production yield levels is a challenge for all semiconductor products on the path to profitable market entry. Two approaches to accelerating yield ramp are presented. The first is the use of laser mask writers for fast throughput, high yield, and cost effective pattern transfer. The second is the use of electrical test to find a defect and identify the physical region to probe in failure analysis that is most likely to uncover the root cause. This provides feedback to the design team on modifications to make to the design to avoid the yield issue in a future tape-out revision. Additionally, the process parameter responsible for the root cause of the defect is forward annotated through the design, mask and wafer coordinate systems so it can be monitored in-line on subsequent lots of the manufacturing run. This results in an improved recipe for the manufacturing equipment to potentially prevent the recurrence of the defect and raise yield levels on the following material. The test diagnostics approach is enabled by the seamless traceability of a feature across the design, photomask and wafer, made possible by a common data model for design, mask pattern generation and wafer fabrication.
Engineering the lodging resistance mechanism of post-Green Revolution rice to meet future demands.
Hirano, Ko; Ordonio, Reynante Lacsamana; Matsuoka, Makoto
2017-01-01
Traditional breeding for high-yielding rice has been dependent on the widespread cultivation of gibberellin (GA)-deficient semi-dwarf varieties. Dwarfism lowers the "center of gravity" of the plant body, which increases resistance against lodging and enables plants to support high grain yield. Although this approach was successful in the latter half of the 20th century in rice and wheat breeding, it may no longer be enough to sustain rice with even higher yields, because relying solely on the semi-dwarf trait is subject to certain limitations, making it necessary to use other important traits to reinforce it. In this review, we present an alternative approach to increasing lodging resistance by improving the quality of the culm: identifying genes related to culm quality and introducing these genes into high-yielding rice cultivars through molecular breeding techniques.
Murovec, Boštjan; Kolbl, Sabina; Stres, Blaž
2015-01-01
The aim of this study was to develop and validate a community-supported online infrastructure and bioresource for methane yield data and accompanying metadata collected from the published literature. In total, 1164 entries described by 15,749 data points were assembled. Analysis of the data collection showed little congruence in the reporting of methodological approaches. The largest identifiable source of variation in reported methane yields was authorship (i.e. substrate batches within a particular substrate class), within which experimental scale (volumes of 0.02-5 l), incubation temperature (34-40 °C), and % VS of substrate also played an important role (p < 0.05, n permutations = 999). The largest fraction of variability, however, remained unaccounted for and thus unexplained (> 63%). This calls for reconsideration of the accepted approaches to reporting data in the published literature, to better support industrial decision making.
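A hedged sketch of the kind of permutation-based variance partitioning suggested by the reported statistics (p < 0.05 with 999 permutations); the grouping, data, and eta-squared statistic below are illustrative assumptions, not the study's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical methane yields (mL CH4 / g VS) grouped by study/authorship.
groups = {
    "study_A": rng.normal(250, 30, 12),
    "study_B": rng.normal(320, 30, 15),
    "study_C": rng.normal(280, 30, 10),
}
y = np.concatenate(list(groups.values()))
labels = np.concatenate([[g] * len(v) for g, v in groups.items()])

def r2_between(y, labels):
    # Fraction of total variance explained by group means (eta-squared).
    grand = y.mean()
    ss_total = ((y - grand) ** 2).sum()
    ss_between = sum(len(y[labels == g]) * (y[labels == g].mean() - grand) ** 2
                     for g in np.unique(labels))
    return ss_between / ss_total

obs = r2_between(y, labels)
perm = np.array([r2_between(y, rng.permutation(labels)) for _ in range(999)])
p = (1 + (perm >= obs).sum()) / (1 + len(perm))   # permutation p-value
print(f"variance explained by authorship = {obs:.2f}, p = {p:.3f}")
```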
A hybrid framework for assessing maize drought vulnerability in Sub-Saharan Africa
NASA Astrophysics Data System (ADS)
Kamali, B.; Abbaspour, K. C.; Wehrli, B.; Yang, H.
2017-12-01
Drought has devastating impacts on crop yields. Quantifying drought vulnerability is the first step toward better design of mitigation policies. The vulnerability of crop yield to drought has been assessed with different methods; however, these methods lack a standardized basis for measuring its components and a procedure that facilitates spatial and temporal comparisons. This study attempts to quantify maize drought vulnerability by linking the Drought Exposure Index (DEI) to the Crop Failure Index (CFI). DEI and CFI were defined by fitting probability distribution functions to precipitation and maize yield, respectively. To obtain a crop drought vulnerability index (CDVI), DEI and CFI were combined in a hybrid framework that classifies CDVI on the same basis as DEI and CFI. The analysis was implemented for Sub-Saharan African countries using maize yield simulated with the Environmental Policy Integrated Climate (EPIC) model at 0.5° resolution. The model was coupled with the Sequential Uncertainty Fitting algorithm for calibration at the country level. Our results show that Central Africa and the Western African countries located below the Sahelian strip receive higher amounts of precipitation but experience high crop failure. They are therefore identified as more vulnerable regions compared to countries such as South Africa, Tanzania, and Kenya. We conclude that our hybrid approach complements information on crop drought vulnerability quantification and can be applied to different regions and scales.
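A minimal sketch of the index construction, assuming a gamma fit for precipitation and a normal fit for simulated yield, and combining exceedance probabilities multiplicatively; the exact DEI/CFI definitions, classification scheme, and EPIC coupling of the study are not reproduced here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical annual series for one grid cell / country.
precip = stats.gamma.rvs(a=4.0, scale=150.0, size=30, random_state=rng)  # mm
yields = stats.norm.rvs(loc=2.0, scale=0.5, size=30, random_state=rng)   # t/ha

# Fit probability distributions, as the framework fits PDFs to
# precipitation (for DEI) and simulated maize yield (for CFI).
a, loc, scale = stats.gamma.fit(precip, floc=0.0)
mu, sigma = stats.norm.fit(yields)

year_p, year_y = precip[-1], yields[-1]   # the year being assessed
dei = 1.0 - stats.gamma.cdf(year_p, a, loc=loc, scale=scale)  # drier -> higher
cfi = 1.0 - stats.norm.cdf(year_y, mu, sigma)                 # lower yield -> higher

cdvi = dei * cfi   # one simple way to combine exposure and failure
print(f"DEI={dei:.2f}  CFI={cfi:.2f}  CDVI={cdvi:.2f}")
```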
NASA Astrophysics Data System (ADS)
Vico, Giulia; Porporato, Amilcare
2014-05-01
The field of ecohydrology, traditionally focused on natural ecosystems, can offer the necessary quantitative tools to assess and compare the sustainability of agriculture across climates, soil types, crops, and irrigation strategies, while accounting for rainfall unpredictability. In particular, irrigation is one of the main strategies to enhance and stabilize agricultural productivity, but it represents a cost in terms of often scarce water resources. Here, the sustainability of irrigated and rainfed agriculture is assessed by means of water productivity (defined as the ratio between yield and total supplied water), yields, water requirements, and their variability. These indicators are quantified using a probabilistic description of the soil water balance and crop development. Employing this framework, we interpret changes in water productivity as total water input is altered for two staple crops (maize and wheat) grown under different soils, climates, and irrigation strategies. Climate change scenarios are explored using the same approach and altering the rainfall statistics. For a given irrigation strategy, intermediate rainfall inputs lead to the highest variability in yield and irrigation water requirement; it is under these conditions that water management is most problematic. When considering the contrasting needs of limiting water requirements while ensuring adequate yields, micro-irrigation emerges as the most sustainable strategy at the field level, although consideration should be given to its profitability and long-term environmental implications.
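As a worked illustration of the water-productivity indicator (yield divided by total supplied water), the sketch below uses a toy saturating crop-water production function with made-up parameters; it reproduces the qualitative point that marginal water productivity falls as total water input rises:

```python
import numpy as np

def yield_t_ha(water_mm, y_max=12.0, w_half=300.0):
    # Toy saturating production function; parameters are illustrative,
    # not taken from the probabilistic framework described above.
    return y_max * water_mm / (water_mm + w_half)

rain = 250.0                                   # seasonal rainfall (mm)
for irrigation in (0.0, 100.0, 250.0, 500.0):  # total irrigation (mm)
    total = rain + irrigation
    y = yield_t_ha(total)
    wp = y / total                             # water productivity (t/ha per mm)
    print(f"irrigation={irrigation:>5.0f} mm  yield={y:5.2f} t/ha  WP={wp:.4f}")
```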
Ho, Dora Y; Lin, Margaret; Schaenman, Joanna; Rosso, Fernando; Leung, Ann N C; Coutre, Steven E; Sista, Ramachandra R; Montoya, Jose G
2011-01-01
Haematological patients with neutropenic fever are frequently evaluated with chest computed tomography (CT) to rule out invasive fungal infections (IFI). We retrospectively analysed data from 100 consecutive patients with neutropenic fever and abnormal chest CT from 1998 to 2005 to evaluate their chest CT findings and the yield of the diagnostic approaches employed. On their initial CTs, 79% had nodular opacities, with 24.1% associated with the halo sign. Other common CT abnormalities included pleural effusions (48%), ground glass opacities (37%) and consolidation (31%). The CT findings led to a change in antifungal therapy in 54% of the patients. Fifty-six patients underwent diagnostic procedures, including 46 bronchoscopies, 25 lung biopsies and seven sinus biopsies, with diagnostic yields for IFI of 12.8%, 35.0% and 83.3%, respectively. In conclusion, chest CT plays an important role in the evaluation of haematological patients with febrile neutropenia and often leads to a change in antimicrobial therapy. Pulmonary nodules are the most common radiological abnormality. Sinus or lung biopsies have a high diagnostic yield for IFI compared to bronchoscopy. Patients with IFI may not have sinus/chest symptoms; clinicians should therefore have a low threshold for performing sinus/chest imaging and, if indicated and safe, a biopsy of the abnormal areas.
Feasibility of Multimodal Deformable Registration for Head and Neck Tumor Treatment Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fortunati, Valerio, E-mail: v.fortunati@erasmusmc.nl; Verhaart, René F.; Angeloni, Francesco
2014-09-01
Purpose: To investigate the feasibility of using deformable registration in clinical practice to fuse MR and CT images of the head and neck for treatment planning. Methods and Materials: A state-of-the-art deformable registration algorithm was optimized, evaluated, and compared with rigid registration. The evaluation was based on manually annotated anatomic landmarks and regions of interest in both modalities. We also developed a multiparametric registration approach, which simultaneously aligns T1- and T2-weighted MR sequences to CT. This was evaluated and compared with single-parametric approaches. Results: Our results show that deformable registration yielded better accuracy than rigid registration, without introducing unrealistic deformations. For deformable registration, an average landmark alignment of approximately 1.7 mm was obtained. For all regions of interest excluding the cerebellum and the parotids, deformable registration provided a median modified Hausdorff distance of approximately 1 mm. Similar accuracies were obtained for the single-parameter and multiparameter approaches. Conclusions: This study demonstrates that deformable registration of head-and-neck CT and MR images is feasible, with overall a significantly higher accuracy than rigid registration.
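The modified Hausdorff distance reported here is commonly computed as the maximum of the two mean directed nearest-neighbour distances (Dubuisson and Jain); a self-contained sketch on a toy contour pair, with an assumed sub-millimetre shift standing in for residual registration error:

```python
import numpy as np
from scipy.spatial.distance import cdist

def modified_hausdorff(A, B):
    # Dubuisson-Jain modified Hausdorff distance between point sets A and B:
    # max of the two mean directed nearest-neighbour distances.
    D = cdist(A, B)                    # pairwise Euclidean distances
    return max(D.min(axis=1).mean(),   # mean distance A -> B
               D.min(axis=0).mean())   # mean distance B -> A

# Toy example: an elliptical ROI contour and a slightly shifted copy (mm).
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
roi_ct = np.c_[30 * np.cos(theta), 20 * np.sin(theta)]
roi_mr = roi_ct + np.array([0.8, -0.5])   # simulated registration error

print(f"modified Hausdorff = {modified_hausdorff(roi_ct, roi_mr):.2f} mm")
```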
NASA Astrophysics Data System (ADS)
Pousthomis, M.; Garnero, C.; Marcelot, C. G.; Blon, T.; Cayez, S.; Cassignol, C.; Du, V. A.; Krispin, M.; Arenal, R.; Soulantica, K.; Viau, G.; Lacroix, L.-M.
2017-02-01
Nanostructured magnets benefiting from efficient exchange coupling between hard and soft grains represent an appealing approach for integrated miniaturized magnetic power sources. Using a bottom-up approach, nanostructured materials were prepared from binary assemblies of bcc FeCo and fcc FePt nanoparticles and compared with pure L10-FePt materials. The use of a bifunctional mercaptobenzoic acid yields homogeneous assemblies of the two types of particles while reducing the amount of organic matter. The 650 °C thermal annealing, required for the L10-FePt phase transition, led to significant interdiffusion and thus drastically decreased the amount of soft phase present in the final composites. Analysis of recoil curves, however, evidenced efficient interphase exchange coupling, which yields better magnetic performance than pure L10-FePt materials, with an energy product above 100 kJ m-3 estimated for a Pt content of only 33%. These results clearly demonstrate the interest of chemically grown nanoparticles for the preparation of high-performance spring magnets, opening promising perspectives for integrated subcentimetric magnets with optimized properties.
Adaptive coded aperture imaging in the infrared: towards a practical implementation
NASA Astrophysics Data System (ADS)
Slinger, Chris W.; Gilholm, Kevin; Gordon, Neil; McNie, Mark; Payne, Doug; Ridley, Kevin; Strens, Malcolm; Todd, Mike; De Villiers, Geoff; Watson, Philip; Wilson, Rebecca; Dyer, Gavin; Eismann, Mike; Meola, Joe; Rogers, Stanley
2008-08-01
An earlier paper [1] discussed the merits of adaptive coded apertures for use as lensless imaging systems in the thermal infrared and visible. It was shown how diffractive (rather than the more conventional geometric) coding could be used, and that 2D intensity measurements from multiple mask patterns could be combined and decoded to yield enhanced imagery. Initial experimental results in the visible band were presented. Unfortunately, radiosity calculations, also presented in that paper, indicated that the signal-to-noise performance of systems using this approach was likely to be compromised, especially in the infrared. This paper will discuss how such limitations can be overcome, and some of the tradeoffs involved. Experimental results showing tracking and imaging performance of these modified, diffractive, adaptive coded aperture systems in the visible and infrared will be presented. The subpixel imaging and tracking performance is compared to that of conventional imaging systems and shown to be superior. System size, weight and cost calculations indicate that the coded aperture approach, employing novel photonic MOEMS micro-shutter architectures, has significant merits for a given level of performance in the MWIR when compared to more conventional imaging approaches.
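A highly simplified sketch of combining and decoding measurements from multiple mask patterns: each pattern k gives measurements y_k = M_k x, and stacking the equations yields one overdetermined least-squares problem. Random binary masks and a single flattened scene patch stand in for the diffractive coding and 2D detector data of the actual system:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 64                 # unknown scene pixels (flattened 8x8 patch)
m = 20                 # detector measurements per mask pattern
K = 4                  # number of adaptive mask patterns

scene = np.zeros(n)
scene[[5, 20, 41]] = [1.0, 0.6, 0.3]      # sparse toy scene

# Each mask k maps the scene to detector intensities y_k = M_k @ x + noise.
# Random 0/1 masks stand in for the real (diffractive) coding.
Ms = rng.integers(0, 2, size=(K, m, n)).astype(float)
ys = [M @ scene + rng.normal(scale=0.01, size=m) for M in Ms]

# Decoding: stack all mask equations and solve one least-squares problem;
# K*m >= n makes the stacked system overdetermined.
A = Ms.reshape(K * m, n)
y = np.concatenate(ys)
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)

print("recovered brightest pixels:", np.argsort(x_hat)[-3:][::-1])
```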
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lester, Brian T.; Scherzinger, William M.
2017-01-19
A new method for the solution of the non-linear equations forming the core of constitutive model integration is proposed. Specifically, the trust-region method developed in the numerical optimization community is successfully modified for use in implicit integration of elastic-plastic models. Although attention here is restricted to rate-independent formulations, the proposed approach holds substantial promise for adoption with models incorporating complex physics, multiple inelastic mechanisms, and/or multiphysics. As a first step, the non-quadratic Hosford yield surface is used as a representative case to investigate computationally challenging constitutive models. The theory and implementation are presented, discussed, and compared to other common integration schemes. Multiple boundary value problems are studied and used to verify the proposed algorithm and demonstrate the capabilities of this approach over more common methodologies. Robustness and speed are then investigated and compared to existing algorithms. Through these efforts, it is shown that the trust-region approach leads to superior performance versus a traditional closest-point projection Newton-Raphson method, with speed and robustness comparable to a line-search-augmented scheme.
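A minimal sketch of trust-region-based constitutive integration, using SciPy's trust-region-reflective least-squares solver on the radial-return equations of a von Mises model with linear hardening; von Mises is chosen for brevity in place of the non-quadratic Hosford surface studied here, and all parameters are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative material parameters: shear modulus, yield stress, hardening.
G, sigma_y0, H = 80e3, 250.0, 1e3          # MPa

def residual(z, s_trial_eq):
    # Unknowns: plastic multiplier dgamma and updated equivalent stress s_eq.
    dgamma, s_eq = z
    r1 = s_eq - (s_trial_eq - 3.0 * G * dgamma)   # radial-return update
    r2 = s_eq - (sigma_y0 + H * dgamma)           # consistency condition
    return np.array([r1, r2])

s_trial_eq = 400.0                          # trial equivalent stress (MPa)
sol = least_squares(residual, x0=[0.0, s_trial_eq],
                    args=(s_trial_eq,), method="trf",  # trust-region solver
                    bounds=([0.0, 0.0], [np.inf, np.inf]))
dgamma, s_eq = sol.x
print(f"dgamma = {dgamma:.4e}, updated eq. stress = {s_eq:.1f} MPa")
```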
NASA Astrophysics Data System (ADS)
Pietropolli Charmet, Andrea; Cornaton, Yann
2018-05-01
This work investigates the theoretical predictions yielded by anharmonic force fields whose cubic and quartic force constants are computed analytically by means of density functional theory (DFT), using the recursive scheme developed by M. Ringholm et al. (J. Comput. Chem. 35 (2014) 622). Different functionals (namely B3LYP, PBE, PBE0 and PW86x) and basis sets were used for calculating the anharmonic vibrational spectra of two halomethanes. The benchmark analysis carried out demonstrates the reliability and overall good performance offered by hybrid approaches, in which harmonic data obtained at the coupled cluster with single and double excitations level of theory, augmented by a perturbational estimate of the effects of connected triple excitations, CCSD(T), are combined with the fully analytic higher-order force constants yielded by DFT functionals. These methods lead to reliable and computationally affordable calculations of anharmonic vibrational spectra, with an accuracy comparable to that of hybrid force fields whose anharmonic contributions are computed at the second-order Møller-Plesset perturbation theory (MP2) level using numerical differentiation, but without the associated issues of computational cost and numerical error.
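In standard VPT2 notation, such a hybrid scheme evaluates the fundamentals from high-level harmonic wavenumbers plus anharmonicity constants from the lower-level force field; a sketch of the usual expression, with superscripts marking the assumed provenance of each term:

```latex
% VPT2 fundamental of mode i in a hybrid CCSD(T)/DFT force field
% (standard formulation; the provenance superscripts are assumptions):
\nu_i = \omega_i^{\mathrm{CCSD(T)}} + 2\,x_{ii}^{\mathrm{DFT}}
      + \frac{1}{2}\sum_{j \neq i} x_{ij}^{\mathrm{DFT}}
```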
Contribution of indirect effects to clustered damage in DNA irradiated with protons.
Pachnerová Brabcová, K; Štěpán, V; Karamitros, M; Karabín, M; Dostálek, P; Incerti, S; Davídková, M; Sihver, L
2015-09-01
Protons are the dominant particles both in galactic cosmic rays and in solar particle events; furthermore, proton irradiation is increasingly used in tumour treatment. It is believed that complex DNA damage is the determining factor for the consequent cellular response to radiation. DNA plasmid pBR322 was irradiated at the U-120M cyclotron with 30 MeV protons and treated with two Escherichia coli base excision repair enzymes. The yields of single-strand breaks (SSBs) and double-strand breaks (DSBs) were analysed using agarose gel electrophoresis. DNA was irradiated in the presence of a hydroxyl radical scavenger (coumarin-3-carboxylic acid) in order to distinguish between direct and indirect damage to the biological target. Pure scavenger solution was used as a probe for measurement of the induced OH· radical yields. The experimental OH· radical yield kinetics was compared with predictions computed by two theoretical models, RADAMOL and Geant4-DNA. Both approaches use Geant4-DNA to describe the physical stage of radiation action, and each then applies a distinct model to describe the pre-chemical and chemical stages.
Siddhu, Muhammad Abdul Hanan; Li, Jianghao; Zhang, Jiafu; Huang, Yan; Wang, Wen; Chen, Chang; Liu, Guangqing
2016-01-01
Effective alteration of recalcitrance properties, such as cellulose crystallinity, the lignin shield, and the interlinking of lignocellulosic biomass, is an ideal way to utilize its full potential for biofuel production. This study examined the effects of three different pretreatments on enhancing the digestibility of corn stover (CS) for methane production. Steam explosion (SE) and thermal potassium hydroxide (KOH, 60°C) treated CS produced maximal methane yields of 217.5 and 243.1 mL/g VS, which were 40.0% and 56.4% higher than that of untreated CS (155.4 mL/g VS), respectively. Co-pretreatment with thermal potassium hydroxide and steam explosion (CPTPS) was the most effective of all treatments, improving methane yield by 88.46% (292.9 mL/g VS) compared with untreated CS. CPTPS also achieved the highest biodegradability, up to 68.90%. Three kinetic models simulated the dynamics of methane production well. Moreover, scanning electron microscopy (SEM), Fourier transform infrared (FTIR), and X-ray diffraction (XRD) analyses showed that CPTPS produced the most pronounced changes in physicochemical properties. Thus, CPTPS might be a promising approach to deconstructing the recalcitrance of the lignocellulosic structure to improve biodegradability for anaerobic digestion (AD). PMID:27200370
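The abstract does not name the three kinetic models; the modified Gompertz equation is one model commonly fitted to cumulative biochemical methane potential curves, and the sketch below fits it to made-up CPTPS-like data (all values illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_gompertz(t, P, Rm, lam):
    # Cumulative methane yield (mL/g VS): P = methane potential,
    # Rm = maximum production rate, lam = lag phase (days).
    return P * np.exp(-np.exp(Rm * np.e / P * (lam - t) + 1.0))

# Hypothetical cumulative yield data for CPTPS-treated corn stover.
t = np.array([0, 2, 5, 8, 12, 16, 20, 25, 30], dtype=float)       # days
y = np.array([0, 15, 80, 160, 220, 260, 280, 290, 293], dtype=float)

popt, _ = curve_fit(modified_gompertz, t, y, p0=[290.0, 25.0, 1.0])
P, Rm, lam = popt
print(f"P = {P:.1f} mL/g VS, Rm = {Rm:.1f} mL/g VS/d, lag = {lam:.2f} d")
```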
NASA Technical Reports Server (NTRS)
Yim, John T.
2017-01-01
A survey of low energy xenon ion impact sputter yields was conducted to provide a more coherent baseline set of sputter yield data and accompanying fits for electric propulsion integration. Data uncertainties are discussed and different available curve fit formulas are assessed for their general suitability. A Bayesian parameter fitting approach is used with a Markov chain Monte Carlo method to provide estimates for the fitting parameters while characterizing the uncertainties for the resulting yield curves.
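A minimal sketch of the Bayesian fitting idea with a hand-rolled random-walk Metropolis sampler and a simple threshold power-law yield curve; the data points, fit form, priors, and uncertainties below are illustrative assumptions, not the survey's actual dataset or fit formulas:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical low-energy Xe+ sputter yield data: energy (eV), yield (atoms/ion).
E = np.array([50., 75., 100., 150., 200., 300., 500.])
Y = np.array([0.02, 0.08, 0.15, 0.32, 0.50, 0.85, 1.40])
sigma = 0.2 * Y                        # assumed ~20% measurement uncertainty

def model(E, a, E_th, p):
    # Simple threshold power law; an illustrative stand-in fit form.
    out = np.zeros_like(E)
    m = E > E_th
    out[m] = a * (E[m] - E_th) ** p
    return out

def log_post(theta):
    a, E_th, p = theta
    if a <= 0 or not (0 < E_th < 50) or not (0.1 < p < 3):  # flat priors
        return -np.inf
    r = (Y - model(E, a, E_th, p)) / sigma
    return -0.5 * np.sum(r ** 2)       # Gaussian log-likelihood

# Random-walk Metropolis sampler (a simple Markov chain Monte Carlo method).
theta = np.array([0.005, 20.0, 1.0])   # starting point
step = np.array([0.001, 2.0, 0.05])    # proposal step sizes
chain, lp = [], log_post(theta)
for _ in range(20000):
    prop = theta + step * rng.normal(size=3)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop      # accept proposal
    chain.append(theta.copy())
chain = np.array(chain[5000:])         # discard burn-in

print("posterior means [a, E_th, p]:", chain.mean(axis=0))
print("posterior stds  [a, E_th, p]:", chain.std(axis=0))
```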
Okuno, Ayako; Hirano, Ko; Asano, Kenji; Takase, Wakana; Masuda, Reiko; Morinaka, Yoichi; Ueguchi-Tanaka, Miyako; Kitano, Hidemi; Matsuoka, Makoto
2014-01-01
Traditional breeding for high-yielding rice has been dependent on the widespread use of fertilizers and the cultivation of gibberellin (GA)-deficient semi-dwarf varieties. The use of semi-dwarf plants facilitates high grain yield because these varieties possess high levels of lodging resistance and can thus support high grain weight. Although this approach has been successful in increasing grain yield, it is desirable to further improve grain production and also to breed for high biomass. In this study, we re-examined the effect of GA on rice lodging resistance and biomass yield using several GA-deficient mutants (e.g. having defects in the biosynthesis or perception of GA) and a high-GA-producing line or mutant. GA-deficient mutants displayed improved bending-type lodging resistance due to their short stature; however, they showed reduced breaking-type lodging resistance and reduced total biomass. In plants producing high amounts of GA, the bending-type lodging resistance was inferior to that of the original cultivars, but the breaking-type lodging resistance was improved due to increased lignin accumulation and/or larger culm diameters. Further, these lines showed increased total biomass. These results show that the use of rice cultivars producing high levels of GA would be a novel approach to creating higher lodging resistance and biomass.
Characterization and classification of South American land cover types using satellite data
NASA Technical Reports Server (NTRS)
Townshend, J. R. G.; Justice, C. O.; Kalb, V.
1987-01-01
Various methods are compared for carrying out land cover classifications of South America using multitemporal Advanced Very High Resolution Radiometer data. Fifty-two images of the normalized difference vegetation index (NDVI) from a 1-year period are used to generate multitemporal data sets. Three main approaches to land cover classification are considered: the use of principal-components-transformed images, the use of a characteristic-curves procedure based on NDVI values plotted against time, and application of the maximum likelihood rule to multitemporal data sets. Comparison of results from training sites indicates that the last approach yields the most accurate results. Despite the reliance on training-site figures for performance assessment, the results are nevertheless extremely encouraging, with accuracies for several cover types exceeding 90 per cent.
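A compact sketch of the third approach, maximum likelihood classification of multitemporal NDVI vectors using per-class Gaussian statistics estimated from training sites; the class curves and covariance below are synthetic stand-ins:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(5)
T = 12                                   # NDVI composites per year (assumed)

# Synthetic per-class mean annual NDVI curves (illustrative cover types).
t = np.arange(T)
means = {
    "forest":   0.75 + 0.05 * np.sin(2 * np.pi * t / T),
    "savanna":  0.45 + 0.15 * np.sin(2 * np.pi * t / T),
    "cropland": 0.30 + 0.25 * np.sin(2 * np.pi * (t - 3) / T),
}
cov = 0.02 ** 2 * np.eye(T)              # assumed common class covariance

def classify(x):
    # Maximum likelihood rule: assign the class with the highest
    # Gaussian likelihood for the multitemporal NDVI vector x.
    scores = {c: multivariate_normal.logpdf(x, m, cov) for c, m in means.items()}
    return max(scores, key=scores.get)

# Simulate a pixel drawn from the savanna class and classify it.
pixel = rng.multivariate_normal(means["savanna"], cov)
print("predicted class:", classify(pixel))
```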