Alecu, I M; Zheng, Jingjing; Zhao, Yan; Truhlar, Donald G
2010-09-14
Optimized scale factors for calculating vibrational harmonic and fundamental frequencies and zero-point energies have been determined for 145 electronic model chemistries, including 119 based on approximate functionals depending on occupied orbitals, 19 based on single-level wave function theory, three based on the neglect-of-diatomic-differential-overlap approximation, two based on doubly hybrid density functional theory, and two based on multicoefficient correlation methods. Forty of the scale factors are obtained from large databases, which are also used to derive two universal scale factor ratios that can be used to interconvert between scale factors optimized for various properties, enabling the derivation of three key scale factors at the effort of optimizing only one of them. A reduced scale factor optimization model is formulated in order to further reduce the cost of optimizing scale factors, and the reduced model is illustrated by using it to obtain 105 additional scale factors. Using root-mean-square errors from the values in the large databases, we find that scaling reduces errors in zero-point energies by a factor of 2.3 and errors in fundamental vibrational frequencies by a factor of 3.0, but it reduces errors in harmonic vibrational frequencies by only a factor of 1.3. It is shown that, upon scaling, the balanced multicoefficient correlation method based on coupled cluster theory with single and double excitations (BMC-CCSD) can lead to very accurate predictions of vibrational frequencies. With a polarized, minimally augmented basis set, the density functionals with zero-point energy scale factors closest to unity are MPWLYP1M (1.009), τHCTHhyb (0.989), BB95 (1.012), BLYP (1.013), BP86 (1.014), B3LYP (0.986), MPW3LYP (0.986), and VSXC (0.986).
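The core of such a scale-factor optimization is a one-parameter least-squares fit, which can be sketched in a few lines; the frequencies below are invented for illustration and are not drawn from the paper's databases.

```python
def optimal_scale_factor(computed, reference):
    """Least-squares scale factor lambda minimizing
    sum_i (lambda * computed_i - reference_i)**2,
    whose closed form is lambda = sum(x*y) / sum(x*x)."""
    num = sum(x * y for x, y in zip(computed, reference))
    den = sum(x * x for x in computed)
    return num / den

def rms_error(computed, reference, lam=1.0):
    """Root-mean-square error of the scaled values against the reference."""
    n = len(computed)
    return (sum((lam * x - y) ** 2
                for x, y in zip(computed, reference)) / n) ** 0.5

# Hypothetical harmonic frequencies (cm^-1): computed vs. benchmark values.
calc = [3050.0, 1650.0, 1200.0, 750.0]
ref = [2980.0, 1610.0, 1175.0, 735.0]
lam = optimal_scale_factor(calc, ref)
print(round(lam, 4))  # a scale factor slightly below 1
print(rms_error(calc, ref, lam) < rms_error(calc, ref))  # scaling reduces RMSE
```

Because the fit is linear in the single parameter, the optimum is exact and no iterative optimizer is needed.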
Xia, Jie; Wu, Daxing; Zhang, Jibiao; Xu, Yuanchao; Xu, Yunxuan
2016-06-01
This study aimed to validate the Chinese version of the Optimism and Pessimism Scale in a sample of 730 adult Chinese individuals. Confirmatory factor analyses confirmed the bidimensionality of the scale with two factors, optimism and pessimism. The total scale and optimism and pessimism factors demonstrated satisfactory reliability and validity. Population-based normative data and mean values for gender, age, and education were determined. Furthermore, we developed a 20-item short form of the Chinese version of the Optimism and Pessimism Scale with structural validity comparable to the full form. In summary, the Chinese version of the Optimism and Pessimism Scale is an appropriate and practical tool for epidemiological research in mainland China. © The Author(s) 2014.
Lima, Viviane Dias; Andia, Irene; Kabakyenga, Jerome; Mbabazi, Pamela; Emenyonu, Nneka; Patterson, Thomas L.; Hogg, Robert S.; Bangsberg, David R.
2013-01-01
The objective of this study was to develop a reliable HAART optimism scale among HIV-positive women in Uganda and to test the scale’s validity against measures of fertility intentions, sexual activity, and unprotected sexual intercourse. We used cross-sectional survey data of 540 women (18–50 years) attending Mbarara University’s HIV clinic in Uganda. Women were asked how much they agreed or disagreed with 23 statements about HAART. Data were subjected to principal components and factor analyses. Subsequently, we tested the association between the scale and fertility intentions and sexual behaviour using the Wilcoxon rank-sum test. Factor analysis yielded three factors, one of which was an eight-item HAART optimism scale with moderately high internal consistency (α = 0.70). Women who reported that they intended to have (more) children had significantly higher HAART optimism scores (median = 13.5 [IQR: 12–16]) than women who did not intend to have (more) children (median = 10.5 [IQR: 8–12]; P < 0.0001). Similarly, women who were sexually active and who reported practicing unprotected sexual intercourse had significantly higher HAART optimism scores than women who were sexually abstinent or who practiced protected sexual intercourse. Our reliable and valid scale, termed the Women’s HAART Optimism Monitoring and EvaluatioN scale (WHOMEN’s scale), may be valuable to broader studies investigating the role of HAART optimism on reproductive intentions and sexual behaviours of HIV-positive women in high HIV prevalence settings. PMID:19387819
Optimal transport and the placenta
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgan, Simon; Xia, Qinglan; Salafia, Carolyn
2010-01-01
The goal of this paper is to investigate the expected effects of (i) placental size, (ii) placental shape and (iii) the position of insertion of the umbilical cord on the work done by the foetal heart in pumping blood across the placenta. We use optimal transport theory and modeling to quantify the expected effects of these factors. Total transport cost and the shape factor contribution to cost are given by the optimal transport model. Total placental transport cost is highly correlated with birth weight, placenta weight, FPR and the metabolic scaling factor beta. The shape factor is also highly correlated with birth weight, and after adjustment for placental weight, is highly correlated with the metabolic scaling factor beta.
ERIC Educational Resources Information Center
Riegel, Lisa A.
2012-01-01
The goal of this research was to explore the construct of academic optimism at the principal level and examine possible explanatory variables for the factors that emerged from the principal academic optimism scale. Academic optimism contains efficacy, trust and academic emphasis (Hoy, Tarter & Woolfolk Hoy, 2006). It has been studied at the…
NASA Astrophysics Data System (ADS)
Yu, Haoyu S.; Fiedler, Lucas J.; Alecu, I. M.; Truhlar, Donald G.
2017-01-01
We present a Python program, FREQ, for calculating optimal scale factors for harmonic vibrational frequencies, fundamental vibrational frequencies, and zero-point vibrational energies from electronic structure calculations. The program utilizes a previously published scale factor optimization model (Alecu et al., 2010) to efficiently obtain all three scale factors from a set of computed vibrational harmonic frequencies. In order to obtain the three scale factors, the user only needs to provide zero-point energies of 15 or 6 selected molecules. If the user has access to the Gaussian 09 or Gaussian 03 program, we provide the option to run the program by entering the keywords for a given method and basis set in Gaussian 09 or Gaussian 03. Four other Python programs, input.py, input6, pbs.py, and pbs6.py, are also provided for generating Gaussian 09 or Gaussian 03 input and PBS files. The program can also be used with data from any other electronic structure package. A manual describing how to use the program is included in the code package.
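The ratio-based shortcut the program exploits amounts to two multiplications once one scale factor is known. In the sketch below, the two universal ratio values are placeholders for illustration only, not the published values.

```python
# Placeholder universal ratios; the actual values come from the Alecu et al.
# (2010) databases and are NOT reproduced here - these numbers are assumed.
R_HARM_OVER_ZPE = 1.014  # assumed value for illustration
R_FUND_OVER_ZPE = 0.974  # assumed value for illustration

def derive_scale_factors(zpe_factor):
    """Given one optimized ZPE scale factor, derive the other two
    via universal scale-factor ratios (one optimization, three factors)."""
    return {
        "zpe": zpe_factor,
        "harmonic": zpe_factor * R_HARM_OVER_ZPE,
        "fundamental": zpe_factor * R_FUND_OVER_ZPE,
    }

# e.g., the B3LYP ZPE factor quoted in the first abstract above.
factors = derive_scale_factors(0.986)
print({k: round(v, 3) for k, v in factors.items()})
```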
Underlying construct of empathy, optimism, and burnout in medical students.
Hojat, Mohammadreza; Vergare, Michael; Isenberg, Gerald; Cohen, Mitchell; Spandorfer, John
2015-01-29
This study was designed to explore the underlying construct of measures of empathy, optimism, and burnout in medical students. Three instruments for measuring empathy (Jefferson Scale of Empathy, JSE); Optimism (the Life Orientation Test-Revised, LOT-R); and burnout (the Maslach Burnout Inventory, MBI, which includes three scales of Emotional Exhaustion, Depersonalization, and Personal Accomplishment) were administered to 265 third-year students at Sidney Kimmel (formerly Jefferson) Medical College at Thomas Jefferson University. Data were subjected to factor analysis to examine relationships among measures of empathy, optimism, and burnout in a multivariate statistical model. Factor analysis (principal component with oblique rotation) resulted in two underlying constructs, each with an eigenvalue greater than one. The first factor involved "positive personality attributes" (factor coefficients greater than .58 for measures of empathy, optimism, and personal accomplishment). The second factor involved "negative personality attributes" (factor coefficients greater than .78 for measures of emotional exhaustion, and depersonalization). Results confirmed that an association exists between empathy in the context of patient care and personality characteristics that are conducive to relationship building, and considered to be "positive personality attributes," as opposed to personality characteristics that are considered as "negative personality attributes" that are detrimental to interpersonal relationships. Implications for the professional development of physicians-in-training and in-practice are discussed.
The importance of personality and parental styles on optimism in adolescents.
Zanon, Cristian; Bastianello, Micheline Roat; Pacico, Juliana Cerentini; Hutz, Claudio Simon
2014-01-01
Some studies have suggested that personality factors are important to optimism development. Others have emphasized that family relations are relevant variables to optimism. This study aimed to evaluate the importance of parenting styles to optimism controlling for the variance accounted for by personality factors. Participants were 344 Brazilian high school students (44% male) with mean age of 16.2 years (SD = 1) who answered personality, optimism, responsiveness and demandingness scales. Hierarchical regression analyses were conducted having personality factors (in the first step) and maternal and paternal parenting styles, and demandingness and responsiveness (in the second step) as predictive variables and optimism as the criterion. Personality factors, especially neuroticism (β = -.34, p < .01), extraversion (β = .26, p < .01) and agreeableness (β = .16, p < .01), accounted for 34% of the optimism variance and insignificant variance was predicted exclusively by parental styles (1%). These findings suggest that personality is more important to optimism development than parental styles.
Coping with occupational stress: the role of optimism and coping flexibility.
Reed, Daniel J
2016-01-01
The current study aimed to determine whether coping flexibility is a reliable and valid construct in a UK sample and subsequently to investigate the association between coping flexibility, optimism, and psychological health, measured by perceived stress and life satisfaction. A UK university undergraduate student sample (N=95) completed an online questionnaire. The study is among the first to examine the validity and reliability of the English version of a scale measuring coping flexibility in a Western population and is also the first to investigate the association between optimism and coping flexibility. The results revealed that the scale had good reliability overall; however, factor analysis revealed no support for the existing two-factor structure of the scale. Coping flexibility and optimism were found to be strongly correlated, and hierarchical regression analyses revealed that the interaction between them predicted a large proportion of the variance in both perceived stress and life satisfaction. In addition, structural equation modeling revealed that optimism completely mediated the relationship between coping flexibility and both perceived stress and life satisfaction. The findings add to the occupational stress literature to further our understanding of how optimism is important in psychological health. Furthermore, given that optimism is a personality trait, and consequently relatively stable, the study also provides preliminary support for the potential of targeting coping flexibility to improve psychological health in Western populations. These findings must be replicated, and further analyses of the English version of the Coping Flexibility Scale are needed.
Assessing pretreatment reactor scaling through empirical analysis
Lischeske, James J.; Crawford, Nathan C.; Kuhn, Erik; ...
2016-10-10
Pretreatment is a critical step in the biochemical conversion of lignocellulosic biomass to fuels and chemicals. Due to the complexity of the physicochemical transformations involved, predictively scaling up technology from bench- to pilot-scale is difficult. This study examines how pretreatment effectiveness under nominally similar reaction conditions is influenced by pretreatment reactor design and scale using four different pretreatment reaction systems ranging from a 3 g batch reactor to a 10 dry-ton/d continuous reactor. The reactor systems examined were an Automated Solvent Extractor (ASE), Steam Explosion Reactor (SER), ZipperClave(R) reactor (ZCR), and Large Continuous Horizontal-Screw Reactor (LHR). To our knowledge, this is the first such study performed on pretreatment reactors across a range of reaction conditions (time and temperature) and at different reactor scales. The comparative pretreatment performance results obtained for each reactor system were used to develop response surface models for total xylose yield after pretreatment and total sugar yield after pretreatment followed by enzymatic hydrolysis. Near- and very-near-optimal regions were defined as the set of conditions that the model identified as producing yields within one and two standard deviations of the optimum yield. Optimal conditions identified in the smallest-scale system (the ASE) were within the near-optimal region of the largest scale reactor system evaluated. A reaction severity factor modeling approach was shown to inadequately describe the optimal conditions in the ASE, incorrectly identifying a large set of sub-optimal conditions (as defined by the RSM) as optimal. The maximum total sugar yields for the ASE and LHR were 95%, while 89% was the optimum observed in the ZipperClave.
The optimum condition identified using the automated and less costly to operate ASE system was within the very-near-optimal space for the total xylose yield of both the ZCR and the LHR, and was within the near-optimal space for total sugar yield for the LHR. This indicates that the ASE is a good tool for cost effectively finding near-optimal conditions for operating pilot-scale systems, which may be used as starting points for further optimization. Additionally, using a severity-factor approach to optimization was found to be inadequate compared to a multivariate optimization method. As a result, the ASE and the LHR were able to enable significantly higher total sugar yields after enzymatic hydrolysis relative to the ZCR, despite having similar optimal conditions and total xylose yields. This underscores the importance of incorporating mechanical disruption into pretreatment reactor designs to achieve high enzymatic digestibilities.
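The severity-factor approach found inadequate here collapses time and temperature into a single number. A minimal sketch of the commonly used Overend-Chornet combined severity, with illustrative (not experimental) conditions:

```python
import math

def log_severity(t_min, temp_c, t_ref=100.0, omega=14.75):
    """Overend-Chornet severity log10(R0) for a pretreatment held
    t_min minutes at temp_c degrees C; t_ref and omega are the
    conventional reference temperature and fitted constant."""
    return math.log10(t_min * math.exp((temp_c - t_ref) / omega))

# Two quite different time/temperature pairs map to nearly the same
# severity value, which is how a single-factor model can blur together
# conditions that a two-variable response surface distinguishes.
a = log_severity(10.0, 180.0)  # short and hot
b = log_severity(40.0, 160.0)  # long and cooler
print(round(a, 2), round(b, 2))
```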
Meta-Heuristics in Short Scale Construction: Ant Colony Optimization and Genetic Algorithm.
Schroeders, Ulrich; Wilhelm, Oliver; Olaru, Gabriel
2016-01-01
The advent of large-scale assessment, together with the more frequent use of longitudinal and multivariate approaches to measurement in psychological, educational, and sociological research, has caused an increased demand for psychometrically sound short scales. Shortening scales economizes on valuable administration time, but might result in inadequate measures because reducing an item set could: a) change the internal structure of the measure, b) result in poorer reliability and measurement precision, c) deliver measures that cannot effectively discriminate between persons on the intended ability spectrum, and d) reduce test-criterion relations. Different approaches to abbreviating measures fare differently with respect to the above-mentioned problems. Therefore, we compare the quality and efficiency of three item selection strategies to derive short scales from an existing long version: a Stepwise COnfirmatory Factor Analytical approach (SCOFA) that maximizes factor loadings and two metaheuristics, specifically an Ant Colony Optimization (ACO) with a tailored user-defined optimization function and a Genetic Algorithm (GA) with an unspecific cost-reduction function. SCOFA-compiled short versions were highly reliable, but had poor validity. In contrast, both metaheuristics outperformed SCOFA and produced efficient and psychometrically sound short versions (unidimensional, reliable, sensitive, and valid). We discuss under which circumstances ACO and GA produce equivalent results and provide recommendations for conditions in which it is advisable to use a metaheuristic with an unspecific out-of-the-box optimization function.
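The objective such metaheuristics search heuristically can be made concrete with a small simulated item pool. Since the pool below has only C(6,3) = 20 candidate short forms, the optimum is found exhaustively rather than by ACO or GA; the data are simulated, not the authors'.

```python
import itertools
import random
import statistics

def cronbach_alpha(data, items):
    """Cronbach's alpha for the columns `items` of a
    respondents-by-items score matrix."""
    k = len(items)
    item_vars = sum(statistics.pvariance([row[i] for row in data])
                    for i in items)
    total_var = statistics.pvariance([sum(row[i] for i in items)
                                      for row in data])
    return k / (k - 1) * (1 - item_vars / total_var)

# Simulated pool: items 0-2 load on a common factor, items 3-5 are pure noise.
rng = random.Random(0)
data = []
for _ in range(200):
    t = rng.gauss(0, 1)
    data.append([t + rng.gauss(0, 0.5) for _ in range(3)]
                + [rng.gauss(0, 1) for _ in range(3)])

# The subset-selection objective that ACO/GA approximate on large pools;
# here the candidate set is tiny, so we can afford the exhaustive optimum.
best = max(itertools.combinations(range(6), 3),
           key=lambda s: cronbach_alpha(data, s))
print(best, round(cronbach_alpha(data, best), 2))
```

The exhaustive search recovers the three factor-loaded items; real scales make this search space astronomically large, which is what motivates the metaheuristics.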
Cork, Randy D.; Detmer, William M.; Friedman, Charles P.
1998-01-01
This paper describes details of four scales of a questionnaire, "Computers in Medical Care," measuring attributes of computer use, self-reported computer knowledge, computer feature demand, and computer optimism of academic physicians. The reliability (i.e., precision, or degree to which the scale's result is reproducible) and validity (i.e., accuracy, or degree to which the scale actually measures what it is supposed to measure) of each scale were examined by analysis of the responses of 771 full-time academic physicians across four departments at five academic medical centers in the United States. The objectives of this paper were to define the psychometric properties of the scales as the basis for a future demonstration study and, pending the results of further validity studies, to provide the questionnaire and scales to the medical informatics community as a tool for measuring the attitudes of health care providers. Methodology: The dimensionality of each scale and the degree of association of each item with the attribute of interest were determined by principal components factor analysis with orthogonal varimax rotation. Weakly associated items (factor loading <.40) were deleted. The reliability of each resultant scale was computed using Cronbach's alpha coefficient. Content validity was addressed during scale construction; construct validity was examined through factor analysis and by correlational analyses. Results: Attributes of computer use, computer knowledge, and computer optimism were unidimensional, with the corresponding scales having reliabilities of .79, .91, and .86, respectively. The computer-feature demand attribute differentiated into two dimensions: the first reflecting demand for high-level functionality with reliability of .81 and the second demand for usability with reliability of .69.
There were significant positive correlations between computer use, computer knowledge, and computer optimism scale scores and respondents' hands-on computer use, computer training, and self-reported computer sophistication. In addition, items posited on the computer knowledge scale to be more difficult generated significantly lower scores. Conclusion: The four scales of the questionnaire appear to measure with adequate reliability five attributes of academic physicians' attitudes toward computers in medical care: computer use, self-reported computer knowledge, demand for computer functionality, demand for computer usability, and computer optimism. Results of initial validity studies are positive, but further validation of the scales is needed. The URL of a downloadable HTML copy of the questionnaire is provided. PMID:9524349
Optimal topologies for maximizing network transmission capacity
NASA Astrophysics Data System (ADS)
Chen, Zhenhao; Wu, Jiajing; Rong, Zhihai; Tse, Chi K.
2018-04-01
It has been widely demonstrated that the structure of a network is a major factor that affects its traffic dynamics. In this work, we try to identify the optimal topologies for maximizing the network transmission capacity, as well as to build a clear relationship between structural features of a network and the transmission performance in terms of traffic delivery. We propose an approach for designing optimal network topologies against traffic congestion by link rewiring and apply them on the Barabási-Albert scale-free, static scale-free and Internet Autonomous System-level networks. Furthermore, we analyze the optimized networks using complex network parameters that characterize the structure of networks, and our simulation results suggest that an optimal network for traffic transmission is more likely to have a core-periphery structure. However, assortative mixing and the rich-club phenomenon may have negative impacts on network performance. Based on the observations of the optimized networks, we propose an efficient method to improve the transmission capacity of large-scale networks.
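One standard way to quantify transmission capacity under shortest-path routing, assumed here as an illustration and not necessarily the exact measure the authors optimize, is the critical packet-generation rate R_c ~ C(N-1)/B_max set by the largest node betweenness B_max:

```python
from collections import deque

def bfs(adj, s):
    """Distances and shortest-path counts from s (Brandes forward phase)."""
    dist, sigma, q = {s: 0}, {s: 1}, deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                sigma[v] = 0
                q.append(v)
            if dist[v] == dist[u] + 1:
                sigma[v] += sigma[u]
    return dist, sigma

def max_betweenness(adj):
    """Largest node betweenness, counting shortest paths through each node."""
    nodes = list(adj)
    bet = {v: 0.0 for v in nodes}
    info = {s: bfs(adj, s) for s in nodes}
    for s in nodes:
        ds, ss = info[s]
        for t in nodes:
            if t == s:
                continue
            dt, st = info[t]
            for v in nodes:
                if v not in (s, t) and ds[v] + dt[v] == ds[t]:
                    bet[v] += ss[v] * st[v] / ss[t]
    return max(bet.values())

def capacity(adj, c=1.0):
    """Critical generation rate R_c ~ c*(N-1)/B_max under shortest-path
    routing with per-node delivery capacity c (a common estimate)."""
    b = max_betweenness(adj)
    return c * (len(adj) - 1) / b if b else float("inf")

# A 5-node star funnels all traffic through its hub; a 5-node ring does not.
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
ring = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(capacity(star) < capacity(ring))  # hub congestion limits the star
```

Rewiring that lowers the maximum betweenness raises this capacity estimate, which is the intuition behind betweenness-aware topology optimization.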
Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei
2016-01-01
Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing thematic information depends on this extraction. On the basis of WorldView-2 high-resolution data, and using an optimal-segmentation-parameter method for object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor parameter and compact factor parameters were computed with the use of the control variables and the combination of the heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762
ERIC Educational Resources Information Center
Tran, Huu-Khoa; Chiou, Juing -Shian; Peng, Shou-Tao
2016-01-01
In this paper, the feasibility of a Genetic Algorithm Optimization (GAO) education software based Fuzzy Logic Controller (GAO-FLC) for simulating the flight motion control of Unmanned Aerial Vehicles (UAVs) is designed. The generated flight trajectories integrate the optimized Scaling Factors (SF) fuzzy controller gains by using GAO algorithm. The…
Hendry, Melissa C; Douglas, Kevin S; Winter, Elizabeth A; Edens, John F
2013-01-01
Much of the risk assessment literature has focused on the predictive validity of risk assessment tools. However, these tools often comprise a list of risk factors that are themselves complex constructs, and focusing on the quality of measurement of individual risk factors may improve the predictive validity of the tools. The present study illustrates this concern using the Antisocial Features and Aggression scales of the Personality Assessment Inventory (Morey, 1991). In a sample of 1,545 prison inmates and offenders undergoing treatment for substance abuse (85% male), we evaluated (a) the factorial validity of the ANT and AGG scales, (b) the utility of original ANT and AGG scales and newly derived ANT and AGG scales for predicting antisocial outcomes (recidivism and institutional infractions), and (c) whether items with a stronger relationship to the underlying constructs (higher factor loadings) were in turn more strongly related to antisocial outcomes. Confirmatory factor analyses (CFAs) indicated that ANT and AGG items were not structured optimally in these data in terms of correspondence to the subscale structure identified in the PAI manual. Exploratory factor analyses were conducted on a random split-half of the sample to derive optimized alternative factor structures, and cross-validated in the second split-half using CFA. Four-factor models emerged for both the ANT and AGG scales, and, as predicted, the size of item factor loadings was associated with the strength with which items were associated with institutional infractions and community recidivism. This suggests that the quality by which a construct is measured is associated with its predictive strength. Implications for risk assessment are discussed. Copyright © 2013 John Wiley & Sons, Ltd.
Yang, Wenhui; Xiong, Ge; Garrido, Luis Eduardo; Zhang, John X; Wang, Meng-Cheng; Wang, Chong
2018-04-16
We systematically examined the factor structure and criterion validity across the full scale and 10 short forms of the Center for Epidemiological Studies Depression Scale (CES-D) with Chinese youth. Participants were 5,434 Chinese adolescents in Grades 7 to 12 who completed the full CES-D; 612 of them further completed a structured diagnostic interview with the major depressive disorder (MDD) module of the Kiddie Schedule for Affective Disorder and Schizophrenia for School-age Children. Using a split-sample approach, a series of 4-, 3-, 2-, and 1-factor models were tested using exploratory structural equation modeling and cross-validated using confirmatory factor analysis; the dimensionality was also evaluated by parallel analysis in conjunction with the scree test and aided by factor mixture analysis. The results indicated that a single-factor model of depression with a wording method factor fitted the data well, and was the optimal structure underlying the scores of the full and shortened CES-D. Additionally, receiver operating characteristic curve analyses for MDD case detection showed that the CES-D full-scale scores accurately detected MDD youth (area under the curve [AUC] = .84). Furthermore, the short-form scores produced comparable AUCs with the full scale (.82 to .85), as well as similar levels of sensitivity and specificity when using optimal cutoffs. These findings suggest that depression among Chinese adolescents can be adequately measured and screened for by a single-factor structure underlying the CES-D scores, and that the short forms provide a viable alternative to the full instrument. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Active subspace: toward scalable low-rank learning.
Liu, Guangcan; Yan, Shuicheng
2012-12-01
We address the scalability issues in low-rank matrix learning problems. Usually these problems resort to solving nuclear norm regularized optimization problems (NNROPs), which often suffer from high computational complexities if based on existing solvers, especially in large-scale settings. Based on the fact that the optimal solution matrix to an NNROP is often low rank, we revisit the classic mechanism of low-rank matrix factorization, based on which we present an active subspace algorithm for efficiently solving NNROPs by transforming large-scale NNROPs into small-scale problems. The transformation is achieved by factorizing the large solution matrix into the product of a small orthonormal matrix (active subspace) and another small matrix. Although such a transformation generally leads to nonconvex problems, we show that a suboptimal solution can be found by the augmented Lagrange alternating direction method. For the robust PCA (RPCA) (Candès, Li, Ma, & Wright, 2009) problem, a typical example of NNROPs, theoretical results verify the suboptimality of the solution produced by our algorithm. For the general NNROPs, we empirically show that our algorithm significantly reduces the computational complexity without loss of optimality.
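The factorization mechanism described above can be illustrated in a few lines of NumPy. This is only a toy sketch of the active-subspace principle: a randomized range finder stands in for the paper's augmented Lagrange alternating direction solver, and all matrix sizes are invented for the example.

```python
import numpy as np

# Sketch: a large low-rank matrix X can be written as X = Q @ J, where Q is a
# small orthonormal "active subspace" and J a small coefficient matrix, so the
# large-scale problem shrinks to a small-scale one (sizes here are arbitrary).
rng = np.random.default_rng(0)
m, n, r = 500, 400, 5                       # ambient sizes and true rank
X_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# Randomized range finder: project onto a few random directions, orthonormalize.
Y = X_true @ rng.standard_normal((n, r))
Q, _ = np.linalg.qr(Y)                      # Q: m x r, orthonormal columns

J = Q.T @ X_true                            # small r x n coefficient matrix
X_rec = Q @ J                               # reconstruction from the factored form

print(np.allclose(X_rec, X_true))           # the subspace captures X exactly here
```

Storing `Q` and `J` costs `(m + n) * r` numbers instead of `m * n`, which is the source of the scalability gain the abstract describes.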
Function Invariant and Parameter Scale-Free Transformation Methods
ERIC Educational Resources Information Center
Bentler, P. M.; Wingard, Joseph A.
1977-01-01
A scale-invariant simple structure function of previously studied function components for principal component analysis and factor analysis is defined. First and second partial derivatives are obtained, and Newton-Raphson iterations are utilized. The resulting solutions are locally optimal and subjectively pleasing. (Author/JKS)
Modeling process-structure-property relationships for additive manufacturing
NASA Astrophysics Data System (ADS)
Yan, Wentao; Lin, Stephen; Kafka, Orion L.; Yu, Cheng; Liu, Zeliang; Lian, Yanping; Wolff, Sarah; Cao, Jian; Wagner, Gregory J.; Liu, Wing Kam
2018-02-01
This paper presents our latest work on comprehensive modeling of process-structure-property relationships for additive manufacturing (AM) materials, including the use of data-mining techniques to close the design-predict-optimize cycle. To illustrate the process-structure relationship, the multi-scale, multi-physics process modeling starts from the micro-scale, to establish a mechanistic heat source model, proceeds to meso-scale models of individual powder particle evolution, and finally reaches the macro-scale model that simulates the fabrication process of a complex product. To link structure and properties, a high-efficiency mechanistic model, self-consistent clustering analysis, is developed to capture a variety of material responses. The model incorporates factors such as voids, phase composition, inclusions, and grain structures, which are the differentiating features of AM metals. Furthermore, we propose data mining as an effective solution for novel rapid design and optimization, motivated by the numerous influencing factors in the AM process. We believe this paper will provide a roadmap to advance fundamental understanding of AM and guide the monitoring and advanced diagnostics of AM processing.
Liu, Jiao; Gong, Da-Xin; Zeng, Yu; Li, Zhen-Hua; Kong, Chui-Ze
2018-01-01
Quality of life and positive psychological variables have become a focus of concern in patients with renal carcinoma. However, the integrative effects of positive psychological variables on the illness have seldom been reported. The aims of this study were to evaluate quality of life and the integrative effects of hope, resilience and optimism on quality of life among Chinese renal carcinoma patients. A cross-sectional study was conducted at the First Hospital of China Medical University. From July 2013 to July 2014, 284 participants completed questionnaires consisting of demographic and clinical characteristics, the EORTC QLQ-C30, the Adult Hope Scale, the Resilience Scale-14 and the Life Orientation Scale-Revised. Pearson's correlation and hierarchical regression analyses were performed to explore the effects of related factors. Hope, resilience and optimism were significantly associated with quality of life. Hierarchical regression analyses indicated that hope, resilience and optimism as a whole accounted for 9.8, 24.4 and 21.9% of the variance in global health status, functioning status and symptom status, respectively. The low level of quality of life of Chinese renal carcinoma patients should receive more attention from Chinese medical institutions. Psychological interventions to increase hope, resilience and optimism may be essential to enhancing the quality of life of Chinese cancer patients.
Positive psychological determinants of treatment adherence among primary care patients.
Nsamenang, Sheri A; Hirsch, Jameson K
2015-07-01
Patient adherence to medical treatment recommendations can affect disease prognosis, and may be beneficially or deleteriously influenced by psychological factors. We examined the relationships between both adaptive and maladaptive psychological factors and treatment adherence in a sample of primary care patients. One hundred and one rural primary care patients completed the Life Orientation Test-Revised, Trait Hope Scale, Future Orientation Scale, NEO-FFI Personality Inventory (measuring positive and negative affect), and Medical Outcomes Study General Adherence Scale. In independent models, positive affect, optimism, hope, and future orientation were beneficially associated with treatment adherence, whereas pessimism and negative affect were negatively related to adherence. In multivariate models, only negative affect, optimism and hope remained significant, and in a comparative model, trait hope was most robustly associated with treatment adherence. Therapeutically, addressing negative emotions and expectancies, while simultaneously bolstering motivational and goal-directed attributes, may improve adherence to treatment regimens.
Selecting a proper design period for heliostat field layout optimization using Campo code
NASA Astrophysics Data System (ADS)
Saghafifar, Mohammad; Gadalla, Mohamed
2016-09-01
In this paper, different approaches are considered for calculating the cosine factor, which is used in the Campo code to expand the heliostat field layout and maximize its annual thermal output. Three heliostat fields containing different numbers of mirrors are taken into consideration. The cosine factor is determined with both instantaneous and time-average approaches. For the instantaneous method, different design days and design hours are selected; for the time-average method, daily, monthly, seasonal, and yearly time-averaged cosine factor determinations are considered. Results indicate that instantaneous methods are more appropriate for small-scale heliostat field optimization; consequently, it is proposed to treat the design period as a second design variable to ensure the best outcome. For medium- and large-scale heliostat fields, selecting an appropriate design period is more important, so it is more reliable to select one of the recommended time-average methods to optimize the field layout. The optimum annual weighted efficiencies for the small, medium, and large heliostat fields, containing 350, 1460, and 3450 mirrors, are 66.14%, 60.87%, and 54.04%, respectively.
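The distinction between an instantaneous and a time-averaged cosine factor can be sketched numerically. The geometry below is a generic heliostat model, not taken from the Campo code: the mirror normal must bisect the sun direction and the heliostat-to-receiver direction, so the cosine efficiency is the cosine of the incidence angle, and the sun positions are synthetic samples standing in for a design period.

```python
import numpy as np

def cosine_factor(sun, heliostat_to_receiver):
    """Cosine efficiency: the mirror normal bisects the sun and receiver
    directions, and the factor is cos(incidence angle) = sun . normal."""
    s = sun / np.linalg.norm(sun)
    t = heliostat_to_receiver / np.linalg.norm(heliostat_to_receiver)
    n = s + t
    n /= np.linalg.norm(n)                  # unit bisector = mirror normal
    return float(s @ n)

# Instantaneous value at one design hour vs. a time average over sampled
# sun directions (both the receiver direction and sun path are made up).
t = np.array([0.0, 0.6, 0.8])               # toward the receiver
suns = [np.array([np.sin(a), 0.2, np.cos(a)]) for a in np.linspace(-1.0, 1.0, 25)]
instantaneous = cosine_factor(suns[12], t)  # mid-path sample, best alignment
time_average = np.mean([cosine_factor(s, t) for s in suns])
print(instantaneous, time_average)
```

Picking a single design hour (the instantaneous approach) rates the field at its best alignment, while the time average folds in the less favourable morning and evening geometry, which is why the two approaches can rank candidate layouts differently.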
SU-F-T-428: An Optimization-Based Commissioning Tool for Finite Size Pencil Beam Dose Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; Tian, Z; Song, T
Purpose: Finite size pencil beam (FSPB) algorithms are commonly used to pre-calculate the beamlet dose distribution for IMRT treatment planning. FSPB commissioning, which usually requires fine tuning of the FSPB kernel parameters, is crucial to dose calculation accuracy and hence plan quality. Yet due to the large number of beamlets, FSPB commissioning can be very tedious. This abstract reports an optimization-based FSPB commissioning tool we have developed in MATLAB to facilitate the commissioning. Methods: A FSPB dose kernel generally contains two types of parameters: the profile parameters determining the dose kernel shape, and 2D scaling factors accounting for the longitudinal and off-axis corrections. The former were fitted to the penumbra of a reference broad beam's dose profile with the Levenberg-Marquardt algorithm. Since the dose distribution of a broad beam is simply a linear superposition of the dose kernels of the individual beamlets, calculated with the fitted profile parameters and scaled using the scaling factors, these factors could be determined by solving an optimization problem that minimizes the discrepancies between the calculated dose of broad beams and the reference dose. Results: We have commissioned a FSPB algorithm for three linac photon beams (6 MV, 15 MV and 6 MV FFF). Doses for four field sizes (6×6 cm², 10×10 cm², 15×15 cm² and 20×20 cm²) were calculated and compared with the reference dose exported from the Eclipse TPS. For depth dose curves, the differences are less than 1% of maximum dose beyond the depth of maximum dose for most cases. For lateral dose profiles, the differences are less than 2% of the central dose in inner-beam regions. The differences in output factors are within 1% for all three beams. Conclusion: We have developed an optimization-based commissioning tool for FSPB algorithms to facilitate the commissioning, providing sufficient beamlet dose calculation accuracy for IMRT optimization.
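The two-step structure of the commissioning problem, a nonlinear fit of the kernel shape followed by a linear fit of the scaling, can be sketched with a 1D toy model. The Gaussian kernel, the field edges at ±1, and the grid search standing in for the Levenberg-Marquardt step are all illustrative assumptions, not the abstract's actual kernel or code.

```python
import numpy as np
from math import erf

x = np.linspace(-3.0, 3.0, 121)
verf = np.vectorize(erf)

def broad_beam(x, sigma, scale):
    # A broad beam is a superposition of pencil-beam kernels; for a Gaussian
    # kernel this integrates to an error-function penumbra at the field edges.
    return scale * 0.5 * (verf((x + 1.0) / sigma) - verf((x - 1.0) / sigma))

reference = broad_beam(x, 0.35, 1.2)        # stands in for the reference dose

# Step 1: fit the profile parameter (sigma) on the normalized penumbra shape.
# A grid search replaces the Levenberg-Marquardt fit used in the abstract.
ref_norm = reference / reference.max()
sigmas = np.linspace(0.1, 1.0, 181)
sse = [np.sum((broad_beam(x, s, 1.0) - ref_norm) ** 2) for s in sigmas]
sigma_fit = float(sigmas[int(np.argmin(sse))])

# Step 2: with the shape fixed, the scaling factor enters linearly, so it is
# recovered by ordinary least squares against the reference dose.
model = broad_beam(x, sigma_fit, 1.0)
scale_fit = float(model @ reference) / float(model @ model)
print(sigma_fit, scale_fit)
```

Separating the fit this way mirrors the tool's design: the shape parameters need a nonlinear solver, while the scaling factors reduce to a (much cheaper) linear least-squares problem.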
Mörtberg, Ewa; Reuterskiöld, Lena; Tillfors, Maria; Furmark, Tomas; Öst, Lars-Göran
2017-06-01
Culturally validated rating scales for social anxiety disorder (SAD) are of significant importance when screening for the disorder, as well as for evaluating treatment efficacy. This study examined the construct validity and additional psychometric properties of two commonly used scales, the Social Phobia Scale (SPS) and the Social Interaction Anxiety Scale (SIAS), in a clinical SAD population (n = 180) and in a normal population (n = 614) in Sweden. Confirmatory factor analyses of previously reported factor solutions were tested but did not reveal acceptable fit. Exploratory factor analyses (EFA) of the joint structure of the scales in the total population yielded a two-factor model (performance anxiety and social interaction anxiety), whereas EFA in the clinical sample revealed a three-factor solution: a social interaction anxiety factor and two performance anxiety factors. The SPS and SIAS showed good to excellent internal consistency, and discriminated well between patients with SAD and a normal population sample. Both scales showed good convergent validity with an established measure of SAD, whereas the discriminant validity of symptoms of social anxiety and depression could not be confirmed. The optimal cut-off scores for the SPS and SIAS were 18 and 22 points, respectively. It is concluded that the factor structure and the additional psychometric properties of the SPS and SIAS support the use of these scales for assessment in a Swedish population.
Visions about Future: A New Scale Assessing Optimism, Pessimism, and Hope in Adolescents
ERIC Educational Resources Information Center
Ginevra, Maria Cristina; Sgaramella, Teresa Maria; Ferrari, Lea; Nota, Laura; Santilli, Sara; Soresi, Salvatore
2017-01-01
This article reports the development and psychometric properties of the Visions About Future (VAF) scale, an instrument assessing hope, optimism, and pessimism. Three studies involving Italian adolescents were conducted. In the first study, 22 items were developed and the factor structure was verified. The second study, involving a second sample…
Capone, Vincenza; Petrillo, Giovanna
2014-06-01
In two studies we constructed and validated the Patient's Communication Perceived Self-efficacy Scale (PCSS), designed to assess patients' beliefs about their capability to successfully manage problematic situations related to communication with the doctor. The 20-item scale was administered to 179 outpatients (study 1). An exploratory factor analysis revealed a three-factor solution. In study 2, the 16-item scale was administered to 890 outpatients. Exploratory and confirmatory factor analyses supported the three-factor solution (Provide and Collect Information, Express Concerns and Doubts, Verify Information), which showed good psychometric properties and was invariant across gender. The PCSS is an easily administered, reliable, and valid test of patients' communication self-efficacy beliefs. It can be applied in the empirical study of factors influencing doctor-patient communication and used in training aimed at strengthening patients' communication skills. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Sub-optimal parenting is associated with schizotypic and anxiety personality traits in adulthood.
Giakoumaki, S G; Roussos, P; Zouraraki, C; Spanoudakis, E; Mavrikaki, M; Tsapakis, E M; Bitsios, P
2013-05-01
Part of the variation in personality characteristics has been attributed to the child-parent interaction, and sub-optimal parenting has been associated with psychiatric morbidity. In the present study, an extensive battery of personality scales (Trait Anxiety Inventory, Behavioural Inhibition/Activation System questionnaire, Eysenck Personality Questionnaire-Revised, Temperament and Character Inventory, Schizotypal Traits Questionnaire, Toronto Alexithymia Scale) and the Parental Bonding Instrument (PBI) were administered to 324 healthy adult males to elucidate the effects of parenting on personality configuration. Personality variables were analysed using Principal Component Analysis (PCA), and the factors "Schizotypy", "Anxiety", "Behavioural activation", "Novelty seeking" and "Reward dependence" were extracted. Associations of the personality factors with PBI "care" and "overprotection" scores were examined with regression analyses. Subjects were divided into "parental style" groups and the personality factors were subjected to categorical analyses. "Schizotypy" and "Anxiety" were significantly predicted by high maternal overprotection and low paternal care. In addition, the Affectionless control group (low care/high overprotection) had higher "Schizotypy" and "Anxiety" than the Optimal Parenting group (high care/low overprotection). These results further validate sub-optimal parenting as an important environmental exposure and extend our understanding of the mechanisms by which it increases risk for psychiatric morbidity. Copyright © 2012 Elsevier Masson SAS. All rights reserved.
Sharif Nia, Hamid; Pahlevan Sharif, Saeed; Boyle, Christopher; Yaghoobzadeh, Ameneh; Tahmasbi, Bahram; Rassool, G Hussein; Taebei, Mozhgan; Soleimani, Mohammad Ali
2018-04-01
This study aimed to determine the factor structure of spiritual well-being among a sample of Iranian veterans. In this methodological study, 211 male veterans of the Iran-Iraq war completed the Paloutzian and Ellison spiritual well-being scale. Maximum likelihood (ML) estimation with oblique rotation was used to assess the domain structure of spiritual well-being. The construct validity of the scale was assessed using confirmatory factor analysis (CFA), convergent validity, and discriminant validity. Reliability was evaluated with Cronbach's alpha, theta (θ), and McDonald's omega (Ω) coefficients, the intra-class correlation coefficient (ICC), and construct reliability (CR). Results of ML and CFA suggested three factors, which were labeled "relationship with God," "belief in fate and destiny," and "life optimism." The ICC, internal consistency coefficients, and CR were >.7 for the factors of the scale. Convergent and discriminant validity did not fulfill the requirements. The Persian version of the spiritual well-being scale demonstrated suitable validity and reliability among veterans of the Iran-Iraq war.
Optimally stopped variational quantum algorithms
NASA Astrophysics Data System (ADS)
Vinci, Walter; Shabani, Alireza
2018-04-01
Quantum processors promise a paradigm shift in high-performance computing that needs to be assessed by accurate benchmarking measures. In this article, we introduce a benchmark for the variational quantum algorithm (VQA), recently proposed as a heuristic algorithm for small-scale quantum processors. In VQA, a classical optimization algorithm guides the processor's quantum dynamics to yield the best solution for a given problem. A complete assessment of the scalability and competitiveness of VQA should take into account both the quality and the time of dynamics optimization. The method of optimal stopping, employed here, provides such an assessment by explicitly including time as a cost factor. Here, we showcase this measure by benchmarking VQA as a solver for some quadratic unconstrained binary optimization problems. Moreover, we show that a better choice for the cost function of the classical routine can significantly improve the performance of the VQA and even improve its scaling properties.
Shao, Xuan; Ran, Li-Yuan; Liu, Chang; Chen, Xiu-Lan; Zhang, Xi-Ying; Qin, Qi-Long; Zhou, Bai-Cheng; Zhang, Yu-Zhong
2015-06-29
The protease myroilysin is the most abundant protease secreted by the marine sediment bacterium Myroides profundi D25. As a novel elastase of the M12 family, myroilysin has high elastin-degrading activity and strong collagen-swelling ability, suggesting promising biotechnological potential. Because myroilysin cannot be expressed in its mature form in Escherichia coli, it is important to improve its production in the wild-type strain D25. We optimized the culture conditions of strain D25 for protease production using single-factor experiments. Under the optimized conditions, the protease activity of strain D25 reached 1137 ± 53.29 U/mL, i.e., 174% of that before optimization (652 ± 23.78 U/mL). We then conducted small-scale fermentations of D25 in a 7.5 L fermentor; the protease activity in these fermentations reached 1546.4 ± 82.65 U/mL after parameter optimization. Based on the small-scale results, we further conducted pilot-scale fermentations of D25 in a 200 L fermentor, in which protease production reached approximately 1100 U/mL. These results indicate that we successfully set up the small- and pilot-scale fermentation processes of strain D25 for myroilysin production, which should be helpful for the industrial production of myroilysin and the development of its biotechnological potential.
Weaver, Christopher
2011-01-01
This study presents a systematic investigation concerning the performance of different rating scales used in the English section of a university entrance examination to assess 1,287 Japanese test takers' ability to write a third-person introduction speech. Although the rating scales did not conform to all of the expectations of the Rasch model, they successfully defined a meaningful continuum of English communicative competence. In some cases, the expectations of the Rasch model needed to be weighed against the specific assessment needs of the university entrance examination. This investigation also found that the degree of compatibility between the number of points allotted to the different rating scales and the various requirements of an introduction speech played a considerable role in determining the extent to which the different rating scales conformed to the expectations of the Rasch model. Compatibility thus becomes an important factor to consider for optimal rating scale performance.
NASA Astrophysics Data System (ADS)
Peng, Yu; Wang, Qinghui; Fan, Min
2017-11-01
When assessing re-vegetation project performance and optimizing land management, identification of the key ecological factors inducing vegetation degradation has crucial implications. Rainfall, temperature, elevation, slope, aspect, land use type, and human disturbance are ecological factors affecting the vegetation index, but the key factors may differ among spatial scales. Using Helin County, Inner Mongolia, China as the study site and combining remote sensing image interpretation, field surveys, and mathematical methods, this study assesses the key ecological factors affecting vegetation degradation at different spatial scales in a semi-arid agro-pastoral ecotone. The results indicate that the key factors vary with spatial scale. Elevation, rainfall, and temperature are identified as crucial at all spatial extents. Elevation, rainfall, and human disturbance are key factors for small-scale quadrats of 300 m × 300 m and 600 m × 600 m; temperature and land use type are key factors for a medium-scale quadrat of 1 km × 1 km; and rainfall, temperature, and land use are key factors for large-scale quadrats of 2 km × 2 km and 5 km × 5 km. For this region, human disturbance is not the key factor for vegetation degradation at any spatial scale. It is therefore necessary to consider spatial scale when identifying the key factors determining vegetation characteristics, and eco-restoration programs at various spatial scales should identify the key influencing factors at their own scale so as to take effective measures. The new understanding obtained in this study may help to explore the forces driving vegetation degradation in degraded regions worldwide.
NASA Astrophysics Data System (ADS)
Wei, Ke; Fan, Xiaoguang; Zhan, Mei; Meng, Miao
2018-03-01
Billet optimization can greatly improve the forming quality of the transitional region in the isothermal local loading forming (ILLF) of large-scale Ti-alloy rib-web components. However, the final quality of the transitional region may be deteriorated by uncontrollable factors, such as the manufacturing tolerance of the preformed billet, fluctuation of the stroke length, and the friction factor. Thus, a dual response surface method (RSM)-based robust optimization of the billet was proposed to address the uncontrollable factors in the transitional region of the ILLF. Given that die underfilling and folding defects are two key factors influencing the forming quality of the transitional region, minimizing the mean and standard deviation of the die underfilling rate while avoiding folding defects was defined as the objective function and constraint condition of the robust optimization. A cross array design was then constructed, and a dual-RSM model was established for the mean and standard deviation of the die underfilling rate by considering the size parameters of the billet and the uncontrollable factors. Subsequently, an optimum solution was derived to achieve the robust optimization of the billet. A case study on the robust optimization was conducted: good results were attained in improving die filling and avoiding folding defects, suggesting that the robust optimization of the billet in the transitional region of the ILLF is efficient and reliable.
Cognitive Abilities Explain Wording Effects in the Rosenberg Self-Esteem Scale.
Gnambs, Timo; Schroeders, Ulrich
2017-12-01
There is consensus that the 10 items of the Rosenberg Self-Esteem Scale (RSES) reflect wording effects resulting from positively and negatively keyed items. The present study examined the effects of cognitive abilities on the factor structure of the RSES with a novel, nonparametric latent variable technique called local structural equation modeling. In a nationally representative German large-scale assessment including 12,437 students, competing measurement models for the RSES were compared: a bifactor model with a common factor and a specific factor for all negatively worded items had optimal fit. Local structural equation models showed that the unidimensionality of the scale increased with higher levels of reading competence and reasoning, while the proportion of variance attributed to the negatively keyed items declined. Wording effects on the factor structure of the RSES thus seem to represent a response-style artifact associated with cognitive abilities.
Singh, Santosh K; Singh, Sanjay K; Tripathi, Vinayak R; Khare, Sunil K; Garg, Satyendra K
2011-12-28
Production of alkaline protease from various bacterial strains using statistical methods is customary nowadays. The present work is the first attempt at production optimization of a solvent-stable thermoalkaline protease by a psychrotrophic Pseudomonas putida isolate using conventional methods, response surface methods, and fermentor-level optimization. The pre-screening medium, amended with optimized (w/v) 1.0% glucose, 2.0% gelatin and 0.5% yeast extract, produced 278 U protease ml(-1) at 72 h incubation. Enzyme production increased to 431 U ml(-1) when Mg2+ (0.01%, w/v) was supplemented. Optimization of physical factors further enhanced protease production to 514 U ml(-1) at pH 9.0, 25°C and 200 rpm within 60 h. The combined effect of the conventionally optimized variables (glucose, yeast extract, MgSO4 and pH), thereafter predicted by response surface methodology, yielded 617 U protease ml(-1) at glucose 1.25% (w/v), yeast extract 0.5% (w/v), MgSO4 0.01% (w/v) and pH 8.8. Bench-scale bioreactor optimization resulted in enhanced production of 882 U protease ml(-1) at 0.8 vvm aeration and 150 rpm agitation in only 48 h of incubation. The optimization of fermentation variables using conventional and statistical approaches, together with aeration/agitation optimization at the fermentor level, resulted in a ~13.5-fold increase (882 U ml(-1)) in protease production compared to un-optimized conditions (65 U ml(-1)). This is the highest level of thermoalkaline protease reported so far for any psychrotrophic bacterium.
Optimal flow for brown trout: Habitat - prey optimization.
Fornaroli, Riccardo; Cabrini, Riccardo; Sartori, Laura; Marazzi, Francesca; Canobbio, Sergio; Mezzanotte, Valeria
2016-10-01
The correct definition of ecosystem needs is essential to guide policy and management strategies aimed at optimizing the increasing use of freshwater by human activities. Commonly, the assessment of the optimal or minimum flow rates needed to preserve ecosystem functionality has been done with habitat-based models that define a relationship between in-stream flow and habitat availability for various fish species. We propose a new approach to the identification of optimal flows using the limiting-factor approach and the evaluation of basic ecological relationships, considering the appropriate spatial scale for different organisms. We developed density-environment relationships for three different life stages of brown trout that show the limiting effects of hydromorphological variables at the habitat scale. In our analyses, we found that the factors limiting trout densities were water velocity, substrate characteristics and refugia availability. For all life stages, the selected models considered two variables simultaneously and implied that higher velocities provided a less suitable habitat, regardless of other physical characteristics and with different patterns. We used these relationships within habitat-based models to select a range of flows that preserves most of the physical habitat for all life stages. We also estimated the effect of varying discharge on macroinvertebrate biomass and used the results to identify an optimal flow maximizing habitat and prey availability. Copyright © 2016 Elsevier B.V. All rights reserved.
Sumiyoshi, Chika; Uetsuki, Miki; Suga, Motomu; Kasai, Kiyoto; Sumiyoshi, Tomiki
2013-12-30
Short forms (SFs) of the Wechsler Intelligence Scale have been developed to enhance its practicality. However, only a few studies have addressed Wechsler Adult Intelligence Scale-Revised (WAIS-R) SFs based on data from patients with schizophrenia. The current study was conducted to develop WAIS-R SFs for these patients based on the intelligence structure and the predictability of the Full IQ (FIQ). Relations to demographic and clinical variables were also examined in selecting plausible subtests. The WAIS-R was administered to 90 Japanese patients with schizophrenia. Exploratory factor analysis (EFA) and multiple regression analysis were conducted to find potential subtests. EFA extracted two dominant factors corresponding to the Verbal IQ and Performance IQ measures. Subtests with higher loadings on those factors were initially nominated. Regression analysis was carried out to reach the model containing all the nominated subtests. The optimality of the potential subtests included in that model was evaluated from the perspectives of representativeness of the intelligence structure, FIQ predictability, and the relation with demographic and clinical variables. Taken together, the dyad of Vocabulary and Block Design was considered the most optimal WAIS-R SF for patients with schizophrenia, reflecting both intelligence structure and FIQ predictability. © 2013 Elsevier Ireland Ltd. All rights reserved.
Sexual Sensation Seeking: A Validated Scale for Spanish Gay, Lesbian and Bisexual People.
Gil-Llario, María Dolores; Morell-Mengual, Vicente; Giménez-García, Cristina; Salmerón-Sánchez, Pedro; Ballester-Arnal, Rafael
2018-06-07
Sexual sensation seeking has been identified as a main predictor of unsafe sex that particularly affects LGB people. This study adapts and validates the Sexual Sensation Seeking Scale for Spanish LGB people. For this purpose, we tested the factor structure in 1237 people, ranging from 17 to 60 years old, of whom 880 self-defined as homosexual and 357 as bisexual. The results support the appropriateness of this scale for Spanish LGB people and identify two factors, explaining 49.91% of the variance: "physical sensations attraction" and "sexual experiences". Our findings reveal optimal levels of internal consistency for the total scale (α = 0.81) and for each factor (α = 0.84 and α = 0.71). Additional analyses demonstrated convergent validity for this scale. Important implications of the validated Sexual Sensation Seeking Scale for Spanish LGB people are discussed with a view to early detection and preventive interventions for HIV and other sexual health problems.
Jing, Liang; Chen, Bing; Wen, Diya; Zheng, Jisi; Zhang, Baiyu
2017-12-01
This study shed light on removing atrazine from pesticide production wastewater using a pilot-scale UV/O₃/ultrasound flow-through system. A significant quadratic polynomial prediction model with an adjusted R² of 0.90 was obtained from a central composite design with response surface methodology. The optimal atrazine removal rate (97.68%) was obtained at 75 W UV power, 10.75 g h⁻¹ O₃ flow rate and 142.5 W ultrasound power. A Monte Carlo simulation aided artificial neural network model was further developed to quantify the importance of O₃ flow rate (40%), UV power (30%) and ultrasound power (30%). Their individual and interaction effects were also discussed in terms of reaction kinetics. UV and ultrasound could both enhance the decomposition of O₃ and promote hydroxyl radical (·OH) formation. Nonetheless, the dose of O₃ was the dominant factor and must be optimized, because excess O₃ can react with ·OH, thereby reducing the rate of atrazine degradation. The presence of other organic compounds in the background matrix appreciably inhibited the degradation of atrazine, while the effects of Cl⁻, CO₃²⁻ and HCO₃⁻ were comparatively negligible. It was concluded that the optimization of system performance using response surface methodology and neural networks would be beneficial for scaling up UV/O₃/ultrasound treatment to the industrial level. Copyright © 2017 Elsevier Ltd. All rights reserved.
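The response-surface step can be sketched as follows. The design is a standard three-factor central composite design in coded units; the "true" coefficients are hypothetical stand-ins, not the paper's fitted model:

```python
import numpy as np
from itertools import product

def model_matrix(X):
    """Quadratic RSM model: intercept, linear, pure quadratic, interactions."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2,
                            x1*x2, x1*x3, x2*x3])

# Three-factor central composite design in coded units:
# 8 factorial points, 6 axial points, 1 center point.
alpha = 1.682                      # rotatable axial distance for 3 factors
factorial = np.array(list(product([-1.0, 1.0], repeat=3)))
axial = np.array([[s * alpha if j == i else 0.0 for j in range(3)]
                  for i in range(3) for s in (-1.0, 1.0)])
X = np.vstack([factorial, axial, np.zeros((1, 3))])

# Hypothetical "true" quadratic response surface (removal rate, %).
beta_true = np.array([95.0, 4.0, 6.0, 2.0, -3.0, -5.0, -1.5, 1.0, 0.5, -0.8])
y = model_matrix(X) @ beta_true    # noise-free simulated responses

# Least-squares fit recovers all ten coefficients from the design.
beta_hat, *_ = np.linalg.lstsq(model_matrix(X), y, rcond=None)
```

With real (noisy) measurements the same fit yields the quadratic prediction model whose adjusted R² the study reports, and the optimum settings follow from maximizing the fitted surface.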
Haddad, Mark; Waqas, Ahmed; Sukhera, Ahmed Bashir; Tarar, Asad Zaman
2017-07-27
Depression is a common mental health problem and a leading contributor to the global burden of disease. The attitudes and beliefs of the public and of health professionals influence social acceptance and affect the esteem and help-seeking of people experiencing mental health problems. The attitudes of clinicians are particularly relevant to their role in accurately recognising depression and providing appropriate support and management. This study examines the characteristics of the revised depression attitude questionnaire (R-DAQ) with doctors working in healthcare settings in Lahore, Pakistan. A cross-sectional survey was conducted in 2015 using the R-DAQ. A convenience sample of 700 medical practitioners based in six hospitals in Lahore was approached to participate in the survey. The R-DAQ structure was examined using parallel analysis of polychoric correlations. Unweighted least squares analysis (ULSA) was used for factor extraction. Model fit was estimated using goodness-of-fit indices and the root mean square of standardized residuals (RMSR), and internal consistency reliability for the overall scale and subscales was assessed using reliability estimates based on Mislevy and Bock (BILOG 3: Item analysis and test scoring with binary logistic models. Mooresville: Scientific Software, 55) and McDonald's omega statistic. Findings using this approach were compared with principal axis factor analysis based on a Pearson correlation matrix. 601 (86%) of the doctors approached consented to participate in the study. Exploratory factor analysis of R-DAQ scale responses demonstrated the same 3-factor structure as in the UK development study, though analyses indicated removal of 7 of the 22 items because of weak loading or poor model fit. The 3-factor solution accounted for 49.8% of the common variance.
Scale reliability and internal consistency were adequate: total scale standardised alpha was 0.694; subscale reliability for professional confidence was 0.732, therapeutic optimism/pessimism was 0.638, and generalist perspective was 0.769. The R-DAQ was developed with a predominantly UK-based sample of health professionals. This study indicates that this scale functions adequately and provides a valid measure of depression attitudes for medical practitioners in Pakistan, with the same factor structure as in the scale development sample. However, optimal scale function necessitated removal of several items, with a 15-item scale enabling the most parsimonious factor solution for this population.
Inamdar, Shrirang Appasaheb; Surwase, Shripad Nagnath; Jadhav, Shekhar Bhagwan; Bapat, Vishwas Anant; Jadhav, Jyoti Prafull
2013-01-01
L-DOPA (3,4-dihydroxyphenyl-L-alanine), a modified amino acid, is an extensively used drug for the treatment of Parkinson's disease. In the present study, optimization of the nutritional parameters influencing L-DOPA production from Mucuna monosperma callus was attempted using response surface methodology (RSM). Optimization of the four factors was carried out using the Box-Behnken design. The optimized levels of the factors predicted by the model were tyrosine 0.894 g l(-1), pH 4.99, ascorbic acid 31.62 mg l(-1) and copper sulphate 23.92 mg l(-1), which resulted in the highest L-DOPA yield of 0.309 g l(-1). The optimization of the medium using RSM resulted in a 3.45-fold increase in the yield of L-DOPA. The ANOVA showed a significant R(2) value (0.9912), model F-value (112.465) and probability (0.0001), with an insignificant lack of fit. The optimized medium was used in a laboratory-scale column reactor for continuous production of L-DOPA. The uninterrupted-flow column exhibited a maximum L-DOPA production rate of 200 mg L(-1) h(-1), which is one of the highest values ever reported using a plant as a biotransformation source. L-DOPA production was confirmed by HPTLC and HPLC analysis. This study demonstrates the synthesis of L-DOPA from Mucuna monosperma callus in a laboratory-scale column reactor.
Fraker, Christopher A; Mendez, Armando J; Inverardi, Luca; Ricordi, Camillo; Stabler, Cherie L
2012-10-01
Nano-scale emulsification has long been utilized by the food and cosmetics industry to maximize material delivery through increased surface area to volume ratios. More recently, these methods have been employed in the area of biomedical research to enhance and control the delivery of desired agents, as in perfluorocarbon emulsions for oxygen delivery. In this work, we evaluate critical factors for the optimization of PFC emulsions for use in cell-based applications. Cytotoxicity screening revealed minimal cytotoxicity of components, with the exception of one perfluorocarbon utilized for emulsion manufacture, perfluorooctylbromide (PFOB), and specific w% limitations of PEG-based surfactants utilized. We optimized the manufacture of stable nano-scale emulsions via evaluation of: component materials, emulsification time and pressure, and resulting particle size and temporal stability. The initial emulsion size was greatly dependent upon the emulsion surfactant tested, with pluronics providing the smallest size. Temporal stability of the nano-scale emulsions was directly related to the perfluorocarbon utilized, with perfluorotributylamine, FC-43, providing a highly stable emulsion, while perfluorodecalin, PFD, coalesced over time. The oxygen mass transfer, or diffusive permeability, of the resulting emulsions was also characterized. Our studies found particle size to be the critical factor affecting oxygen mass transfer, as increased micelle size resulted in reduced oxygen diffusion. Overall, this work demonstrates the importance of accurate characterization of emulsification parameters in order to generate stable, reproducible emulsions with the desired bio-delivery properties. Copyright © 2012 Elsevier B.V. All rights reserved.
Development of WRF-CO2 4DVAR Data Assimilation System
NASA Astrophysics Data System (ADS)
Zheng, T.; French, N. H. F.
2016-12-01
Four-dimensional variational (4DVar) assimilation systems have been widely used for CO2 inverse modeling at the global scale. At the regional scale, however, 4DVar assimilation systems have been lacking. At present, most regional CO2 inverse models use Lagrangian particle backward trajectory tools to compute the influence function in an analytical/synthesis framework. To provide a 4DVar-based alternative, we developed WRF-CO2 4DVAR based on the Weather Research and Forecasting model (WRF), its chemistry extension (WRF-Chem), and its data assimilation system (WRFDA/WRFPLUS). Unlike WRFDA, WRF-CO2 4DVAR does not optimize the meteorological initial condition; instead, it solves for the optimized CO2 surface fluxes (sources/sinks) constrained by atmospheric CO2 observations. Based on WRFPLUS, we developed tangent linear and adjoint code for CO2 emission, advection, vertical mixing in the boundary layer, and convective transport. Furthermore, we implemented an incremental algorithm to solve for optimized CO2 emission scaling factors by iteratively minimizing the cost function in a Bayesian framework. The model sensitivity (of atmospheric CO2 with respect to the emission scaling factor) calculated by the tangent linear and adjoint models agrees well with that calculated by finite differences, indicating the validity of the newly developed code. The effectiveness of WRF-CO2 4DVar for inverse modeling is tested using forward-model-generated pseudo-observation data in two experiments: the first-guess CO2 fluxes have a 50% overestimation in the first case and a 50% underestimation in the second. In both cases, WRF-CO2 4DVar reduces the cost function to less than 10^-4 of its initial value in fewer than 20 iterations and successfully recovers the true values of the emission scaling factors. We expect future applications of WRF-CO2 4DVar with satellite observations will provide insights for CO2 regional inverse modeling, including the impacts of model transport error in vertical mixing.
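The Bayesian cost-function minimization can be illustrated with a toy linear problem. The operator H, the dimensions and the error statistics below are hypothetical; the real system uses WRF transport and iterative adjoint-based minimization rather than this closed-form solve:

```python
import numpy as np

# Toy linear inversion mimicking the 4DVar setup:
#   J(s) = (s - s_b)^T B^-1 (s - s_b) + (H s - y)^T R^-1 (H s - y)
# H is a hypothetical linear "transport" operator mapping regional emission
# scaling factors s to atmospheric CO2 observations; all numbers are illustrative.
rng = np.random.default_rng(0)
n_regions, n_obs = 4, 40
H = rng.uniform(0.1, 1.0, size=(n_obs, n_regions))

s_true = np.ones(n_regions)              # true scaling factors
s_b = 1.5 * s_true                       # first guess with 50% overestimation
y = H @ s_true                           # pseudo-observations (noise-free)

B_inv = np.eye(n_regions) / 0.5**2       # inverse prior (background) covariance
R_inv = np.eye(n_obs) / 0.1**2           # inverse observation-error covariance

# Setting the gradient of the quadratic cost to zero gives the minimizer:
A = B_inv + H.T @ R_inv @ H
s_hat = np.linalg.solve(A, B_inv @ s_b + H.T @ R_inv @ y)
```

With noise-free pseudo-observations the recovered factors sit essentially at the truth, mirroring the twin experiments described above; the full system reaches the same minimum iteratively using adjoint-computed gradients.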
Psychological Resources and Self-rated Health Status of Fifty-year-old Women
2015-01-01
Objectives The aim of the study is to expand knowledge about predictors of self-rated health and mental health in fifty-year-old women. The study explores links between self-rated health/mental health and optimism, self-esteem, acceptance of changes in physical appearance and some sociodemographic factors. Methods Participants in this study were 209 women aged 50 to 59. Single-item measures of self-rated health and mental health were used. Self-esteem was measured with the Rosenberg Self-Esteem Scale and optimism with the OPEB questionnaire; acceptance of changes in physical appearance was rated by respondents on a seven-point scale. Participants were also asked about weight loss attempts, the amount of leisure time, and going on vacation during the last year. Results Predictors of self-rated mental health in women aged 50 to 59 were acceptance of changes in physical appearance, self-esteem and optimism. Predictors of self-rated health were optimism and acceptance of changes in physical appearance. Conclusion Optimism and acceptance of changes in physical appearance seem to be important factors that may affect the subjective physical and mental health of women in their 50s. The role of leisure time and vacation in fostering subjective health requires further investigation. PMID:26793678
NASA Technical Reports Server (NTRS)
Krishnamurthy, Thiagarajan
2010-01-01
Equivalent plate analysis is often used to replace computationally expensive finite element analysis in the initial or conceptual design stages of aircraft wing structures. The equivalent plate model can also be used to design a wind tunnel model that matches the stiffness characteristics of the wing box of a full-scale aircraft wing while satisfying strength-based requirements. An equivalent plate analysis technique is presented to predict the static and dynamic response of an aircraft wing with or without damage. First, a geometric scale factor and a dynamic pressure scale factor are defined to relate the stiffness, load and deformation of the equivalent plate to the aircraft wing. A procedure using an optimization technique is presented to create scaled equivalent plate models from the full-scale aircraft wing using the geometric and dynamic pressure scale factors. The scaled models are constructed by matching the stiffness of the scaled equivalent plate with the scaled aircraft wing stiffness. It is demonstrated that the scaled equivalent plate model can be used to predict the deformation of the aircraft wing accurately. Once the full equivalent plate geometry is obtained, any other scaled equivalent plate geometry can be obtained using the geometric scale factor. Next, an average frequency scale factor is defined as the average ratio of the frequencies of the aircraft wing to the frequencies of the full-scale equivalent plate. The average frequency scale factor combined with the geometric scale factor is used to predict the frequency response of the aircraft wing from the scaled equivalent plate analysis. A procedure is outlined to estimate the frequency response and the flutter speed of an aircraft wing from the equivalent plate analysis using the frequency scale factor and geometric scale factor. The equivalent plate analysis is demonstrated using one aircraft wing without damage and another with damage.
Both of the problems show that the scaled equivalent plate analysis can be successfully used to predict the frequencies and flutter speed of a typical aircraft wing.
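The average frequency scale factor defined above is simply the mean ratio of matched modal frequencies. A sketch with hypothetical modal frequencies (not values from the study):

```python
import numpy as np

# Hypothetical modal frequencies (Hz) for the aircraft wing and the full-scale
# equivalent plate -- illustrative values only.
f_wing = np.array([5.2, 21.8, 33.1, 58.4])
f_plate = np.array([5.0, 20.0, 31.0, 55.0])

# Average frequency scale factor: mean ratio of wing to plate frequencies.
s_freq = np.mean(f_wing / f_plate)

# Wing frequencies predicted from the equivalent-plate modal analysis.
f_predicted = s_freq * f_plate
```

Combined with the geometric scale factor, the same multiplicative correction carries the plate's frequency response (and hence flutter speed estimates) over to the wing.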
Rahman, Roshanida A; Molla, Abul Hossain; Barghash, Hind F A; Fakhru'l-Razi, Ahmadun
2016-01-01
The liquid-state bioconversion (LSB) technique has great potential for application in the bioremediation of sewage sludge. The purpose of this study is to determine the optimum conditions for the LSB treatment of sewage sludge by mixed fungal (Aspergillus niger and Penicillium corylophilum) inoculation in a pilot-scale bioreactor. The optimization of process factors was investigated using response surface methodology based on a Box-Behnken design, considering hydraulic retention time (HRT) and substrate influent concentration (S0) across nine responses, fitted to a regression model. The optimum region was successfully depicted by the optimized conditions, which were identified as the best fit for the multiple responses. The results from process verification were in close agreement with the predictions. Considering five runs under different HRT conditions (low, medium and high: 3.62, 6.13 and 8.27 days, respectively) over the range of S0 values (highest 12.56 and lowest 7.85 g L(-1)), the lowest HRT was found to be the best option because it required fewer days of treatment than the others at an influent concentration of around 10 g L(-1). Therefore, optimum process factors of 3.62 days for HRT and 10.12 g L(-1) for S0 were identified as the best fit for the LSB process, and its performance deviated by less than 5% from the predicted values in most cases. The optimized results point toward commercial-scale biological treatment of wastewater for safe and environment-friendly disposal in the near future.
Chenthamarakshan, Aiswarya; Parambayil, Nayana; Miziriya, Nafeesathul; Soumya, P S; Lakshmi, M S Kiran; Ramgopal, Anala; Dileep, Anuja; Nambisan, Padma
2017-02-13
Fungal laccase has profound applications in different fields of biotechnology due to its broad specificity and high redox potential. Any successful application of the enzyme requires large-scale production, and as laccase production is highly dependent on medium components and culture conditions, their optimization is essential for efficient production. Production of laccase by the fungal strain Marasmiellus palmivorus LA1 under solid-state fermentation was optimized by the Taguchi design of experiments (DOE) methodology. An orthogonal array (L8) was designed using Qualitek-4 software to study the interactions and relative influence of the seven selected factors, complemented by a one-factor-at-a-time approach. The optimum conditions formulated were temperature (28 °C), pH (5), galactose (0.8% w/v), cupric sulphate (3 mM), inoculum concentration (number of mycelial agar pieces) (6) and substrate length (0.05 m). An overall yield increase of 17.6-fold was obtained after optimization. Statistical optimization led to the elimination of an insignificant medium component, ammonium dihydrogen phosphate, from the process and contributed a 1.06-fold increase in enzyme production. A final laccase activity of 667.4 ± 13 IU/mL paves the way for the application of this strain in industrial settings. This study optimized lignin-degrading laccase production by Marasmiellus palmivorus LA1; the laccase can be used for further applications at different scales of production once the properties of the enzyme are analyzed. The study also confirms the usefulness of the Taguchi method for production optimization.
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.; Keller, Dennis J.
2002-01-01
The purpose of this study on micro-scale secondary flow control (MSFC) is to examine the aerodynamic behavior of micro-vane effectors through their factor (i.e., design variable) interactions and to demonstrate how these statistical interactions, when brought together in an optimal manner, determine design robustness. The term micro-scale indicates that the vane effectors are small in comparison to the local boundary layer height. Robustness in this situation means that it is possible to design a fixed (i.e., open-loop) MSFC installation that operates well over the range of mission variables and is only marginally different from an adaptive (i.e., closed-loop) installation design, which would require a control system. The inherent robustness of MSFC micro-vane effector installation designs arises from their natural aerodynamic characteristics and the manner in which these characteristics are brought together in an optimal manner through a structured response surface methodology design process.
Haanstra, Tsjitske M.; Tilbury, Claire; Kamper, Steven J.; Tordoir, Rutger L.; Vliet Vlieland, Thea P. M.; Nelissen, Rob G. H. H.; Cuijpers, Pim; de Vet, Henrica C. W.; Dekker, Joost; Knol, Dirk L.; Ostelo, Raymond W.
2015-01-01
Objectives The constructs optimism, pessimism, hope, treatment credibility and treatment expectancy are associated with outcomes of medical treatment. While these constructs are grounded in different theoretical models, they nonetheless show some conceptual overlap. The purpose of this study was to examine whether currently available measurement instruments for these constructs capture the conceptual differences between them within a treatment setting. Methods Patients undergoing Total Hip and Total Knee Arthroplasty (THA and TKA) (total N = 361; 182 THA; 179 TKA) completed the Life Orientation Test-Revised for optimism and pessimism, the Hope Scale, and the Credibility Expectancy Questionnaire for treatment credibility and treatment expectancy. Confirmatory factor analysis was used to examine whether the instruments measure distinct constructs. Four theory-driven models with one, two, four and five latent factors were evaluated using multiple fit indices and Δχ2 tests, followed by some post hoc models. Results The theory-driven confirmatory factor analysis showed that a five-factor model in which all constructs loaded on separate factors yielded the most satisfactory fit. Post hoc, a bifactor model in which (besides the five separate factors) a general factor is hypothesized to account for the commonality of the items showed a significantly better fit than the five-factor model. All specific factors, except the hope factor, were shown to explain a substantial amount of variance beyond the general factor. Conclusion Based on our primary analyses we conclude that optimism, pessimism, hope, treatment credibility and treatment expectancy are distinguishable in THA and TKA patients. Post hoc, we determined that all constructs, except hope, showed substantial specific variance, while also sharing some general variance. PMID:26214176
Optimization of Selective Laser Melting by Evaluation Method of Multiple Quality Characteristics
NASA Astrophysics Data System (ADS)
Khaimovich, A. I.; Stepanenko, I. S.; Smelov, V. G.
2018-01-01
This article describes the application of the Taguchi method to the selective laser melting of a combustion-chamber sector, using numerical and physical experiments to achieve minimum thermal deformation. The aim was to produce a quality part with a minimum number of numerical experiments. For the study, the following optimization parameters (independent factors) were chosen: the laser beam power and velocity, plus two factors compensating for the effect of residual thermal stresses: the scale factor of the preliminary correction of the part geometry and the number of additional reinforcing elements. We used an orthogonal plan of 9 experiments with factors varied at three levels (L9). As quality criteria, the distortion values for 9 zones of the combustion chamber and the maximum strength of the chamber material were chosen. Since the quality parameters are multidirectional, grey relational analysis was used to solve the optimization problem for multiple quality parameters. As a result, combustion chamber segments of the gas turbine engine were manufactured according to the parameters obtained.
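The grey relational step can be sketched as follows: each run's responses are normalized ("larger-the-better" or "smaller-the-better"), converted to grey relational coefficients, and averaged into a grade used for ranking. The two quality columns and their values below are hypothetical stand-ins for the paper's nine distortion zones and material strength:

```python
import numpy as np

def grey_relational_grades(responses, larger_is_better, zeta=0.5):
    """Grey relational grade per experimental run.

    responses: (n_runs, n_criteria) measured quality values.
    larger_is_better: one bool per criterion (True: maximize, e.g. strength;
                      False: minimize, e.g. distortion).
    zeta: distinguishing coefficient, conventionally 0.5.
    """
    X = np.asarray(responses, dtype=float)
    norm = np.empty_like(X)
    for j, larger in enumerate(larger_is_better):
        lo, hi = X[:, j].min(), X[:, j].max()
        norm[:, j] = (X[:, j] - lo) / (hi - lo) if larger else (hi - X[:, j]) / (hi - lo)
    delta = 1.0 - norm                                    # deviation from the ideal run
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)                             # average over criteria

# Hypothetical L9-style results: column 1 = distortion (minimize),
# column 2 = material strength (maximize). Values are illustrative only.
runs = [[0.30, 410], [0.22, 430], [0.35, 395],
        [0.18, 440], [0.25, 420], [0.40, 385],
        [0.20, 450], [0.28, 415], [0.33, 400]]
grades = grey_relational_grades(runs, larger_is_better=[False, True])
best_run = int(np.argmax(grades))  # run 7 of 9 (index 6) balances both criteria
```

Collapsing the multidirectional criteria into a single grade is what lets the standard single-response Taguchi ranking proceed.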
Christensen, Bruce K; Girard, Todd A; Bagby, R Michael
2007-06-01
An eight-subtest short form (SF8) of the Wechsler Adult Intelligence Scale, Third Edition (WAIS-III), maintaining equal representation of each index factor, was developed for use with psychiatric populations. Data were collected from a mixed inpatient/outpatient sample (99 men and 101 women) referred for neuropsychological assessment. Psychometric analyses revealed an optimal SF8 comprising Vocabulary, Similarities, Arithmetic, Digit Span, Picture Completion, Matrix Reasoning, Digit Symbol Coding, and Symbol Search, scored by linear scaling. Expanding on previous short forms, the current SF8 maximizes the breadth of information and reduces administration time while maintaining the original WAIS-III factor structure. (c) 2007 APA, all rights reserved
Jefford, Elaine; Jomeen, Julie; Martin, Colin R
2016-04-28
The ability to act on and justify clinical decisions as autonomous, accountable midwifery practitioners is encompassed within many international regulatory frameworks, yet decision-making within midwifery is poorly defined. Decision-making theories from medicine and nursing may have something to offer, but fail to take into consideration midwifery context and philosophy and the decisional autonomy of women. Using an underpinning qualitative methodology, a decision-making framework was developed which identified Good Clinical Reasoning and Good Midwifery Practice as two conditions necessary to facilitate optimal midwifery decision-making during second-stage labour. This study aims to confirm the robustness of the framework and describes the development of Enhancing Decision-making Assessment in Midwifery (EDAM) as a measurement tool through testing of its factor structure, validity and reliability. A cross-sectional design was used for instrument development, and a 2 (country: Australia/UK) x 2 (decision-making: optimal/sub-optimal) between-subjects design for instrument evaluation using exploratory and confirmatory factor analysis, internal consistency and known-groups validity. Two 'expert' maternity panels, based in Australia and the UK and comprising 42 participants, assessed 16 vignettes of real midwifery care episodes using the empirically derived 26-item framework. Each item was answered on a five-point Likert scale according to the level of agreement with which the participant felt the item was present in each vignette. Participants were then asked to rate the overall decision-making (optimal/sub-optimal). After factor analysis the framework was reduced to a 19-item EDAM measure, confirmed as two distinct scales of 'Clinical Reasoning' (CR) and 'Midwifery Practice' (MP). The CR scale comprised two subscales, 'the clinical reasoning process' and 'integration and intervention'.
The MP scale also comprised two subscales, 'women's relationship with the midwife' and 'general midwifery practice'. EDAM would generally appear to be a robust, valid and reliable psychometric instrument for measuring midwifery decision-making, which performs consistently across differing international contexts. The 'women's relationship with the midwife' subscale marginally failed to meet the threshold for good instrument reliability, which may be due to its brevity. Further research using larger samples and a wider international context to confirm the veracity of the instrument's measurement properties and its wider global utility would be advantageous.
Factor Stability of Primary Scales of the General Organization Questionnaire
1980-10-01
leadership, climate, and processes function optimally. The Leadership and Organizational Effectiveness Work Unit researches personal, small-group...the Litwin and Stringer (1968) Organizational Climate Questionnaire found a factor structure that was different from the a priori structure...Keywords: General Organization Questionnaire (GOQ); organizational climate; organizational effectiveness
Regional-scale calculation of the LS factor using parallel processing
NASA Astrophysics Data System (ADS)
Liu, Kai; Tang, Guoan; Jiang, Ling; Zhu, A.-Xing; Yang, Jianyi; Song, Xiaodong
2015-05-01
With increasing data resolution and the growing application of the USLE over large areas, the existing serial implementation of algorithms for computing the LS factor is becoming a bottleneck. In this paper, a parallel processing model based on the message passing interface (MPI) is presented for the calculation of the LS factor, so that massive datasets at a regional scale can be processed efficiently. The parallel model contains algorithms for calculating flow direction, flow accumulation, drainage network, slope, slope length and the LS factor. According to the existence of data dependence, the algorithms are divided into local algorithms and global algorithms. Parallel strategies are designed according to the characteristics of the algorithms, including a decomposition method that maintains the integrity of the results, an optimized workflow that reduces the time spent exporting unnecessary intermediate data, and a buffer-communication-computation strategy that improves communication efficiency. Experiments on a multi-node system show that the proposed parallel model allows efficient calculation of the LS factor at a regional scale with a massive dataset.
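The per-cell computation being parallelized can be illustrated with one common USLE/RUSLE formulation (the Wischmeier-Smith L factor with the McCool et al. S factor); the exact equations used in the paper's workflow may differ:

```python
import math

def ls_factor(slope_length_m, slope_rad):
    """LS for one grid cell: Wischmeier-Smith L factor with the McCool et al.
    S factor. This is one common formulation -- the paper's may differ."""
    beta = (math.sin(slope_rad) / 0.0896) / (3.0 * math.sin(slope_rad)**0.8 + 0.56)
    m = beta / (1.0 + beta)                    # slope-length exponent
    L = (slope_length_m / 22.13) ** m          # 22.13 m = unit-plot length
    if math.tan(slope_rad) < 0.09:             # slope gradient below 9%
        S = 10.8 * math.sin(slope_rad) + 0.03
    else:
        S = 16.8 * math.sin(slope_rad) - 0.50
    return L * S

# Example cell: 100 m slope length on a 10-degree slope.
ls = ls_factor(100.0, math.radians(10.0))
```

The slope length feeding this formula depends on upstream flow accumulation, which is exactly the global data dependence that forces the buffer-communication-computation strategy described above.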
Solving Fuzzy Optimization Problem Using Hybrid LS-SA Method
NASA Astrophysics Data System (ADS)
Vasant, Pandian
2011-06-01
Fuzzy optimization has been one of the most prominent topics within the broad area of computational intelligence. It is especially relevant in the field of fuzzy non-linear programming, and its applications and practical realizations can be seen in many real-world problems. In this paper a large-scale non-linear fuzzy programming problem is solved by hybrid optimization techniques combining Line Search (LS), Simulated Annealing (SA) and Pattern Search (PS). An industrial production planning problem with a cubic objective function, 8 decision variables and 29 constraints has been solved successfully using the LS-SA-PS hybrid optimization technique. The computational results for the objective function with respect to the vagueness factor and level of satisfaction are provided in the form of 2D and 3D plots. The outcome is very promising and strongly suggests that the hybrid LS-SA-PS algorithm is efficient and productive in solving large-scale non-linear fuzzy programming problems.
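A minimal simulated-annealing loop (the SA component of the LS-SA-PS hybrid) can be sketched on a toy one-dimensional multimodal function; the 8-variable, 29-constraint industrial problem in the paper is not reproduced here:

```python
import math
import random

def simulated_annealing(f, x0, lo, hi, t0=1.0, cooling=0.995, steps=5000, seed=42):
    """Minimize f on [lo, hi] with a basic SA loop."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(steps):
        cand = min(hi, max(lo, x + rng.gauss(0.0, 0.1)))   # local perturbation
        fc = f(cand)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling                                        # geometric cooling
    return best_x, best_f

# Toy multimodal objective (1-D Rastrigin), standing in for the paper's
# cubic industrial objective.
def rastrigin(x):
    return x * x + 10.0 - 10.0 * math.cos(2.0 * math.pi * x)

best_x, best_f = simulated_annealing(rastrigin, x0=3.0, lo=-5.0, hi=5.0)
```

In the hybrid scheme, SA's ability to escape local minima like the wells of this function is combined with line search and pattern search for fast local refinement.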
NASA Astrophysics Data System (ADS)
Kuz`michev, V. S.; Filinov, E. P.; Ostapyuk, Ya A.
2018-01-01
This article describes how the thrust level influences the turbojet architecture (the types of turbomachines that provide the maximum efficiency) and its working process parameters (turbine inlet temperature (TIT) and overall pressure ratio (OPR)). Functional gas-dynamic and strength constraints were included; the total mass of fuel and engine required for the mission, and the specific fuel consumption (SFC), were considered as optimization criteria. Radial and axial turbines and compressors were considered. The results show that as the engine thrust decreases, the optimal values of the working process parameters decrease too, and the regions of compromise shrink. Optimal engine architectures and values of the working process parameters are suggested for turbojets with thrust varying from 100 N to 100 kN. The results show that for thrust below 25 kN the engine scale factor should be taken into account, as the low flow rates begin to substantially influence the efficiency of the engine elements.
Scaling Optimization of the SIESTA MHD Code
NASA Astrophysics Data System (ADS)
Seal, Sudip; Hirshman, Steven; Perumalla, Kalyan
2013-10-01
SIESTA is a parallel three-dimensional plasma equilibrium code capable of resolving magnetic islands at high spatial resolutions for toroidal plasmas. Originally designed to exploit small-scale parallelism, SIESTA has now been scaled to execute efficiently over several thousands of processors P. This scaling improvement was accomplished with minimal intrusion to the execution flow of the original version. First, the efficiency of the iterative solutions was improved by integrating the parallel tridiagonal block solver code BCYCLIC. Krylov-space generation in GMRES was then accelerated using a customized parallel matrix-vector multiplication algorithm. Novel parallel Hessian generation algorithms were integrated and memory access latencies were dramatically reduced through loop nest optimizations and data layout rearrangement. These optimizations sped up equilibria calculations by factors of 30-50. It is possible to compute solutions with granularity N/P near unity on extremely fine radial meshes (N > 1024 points). Grid separation in SIESTA, which manifests itself primarily in the resonant components of the pressure far from rational surfaces, is strongly suppressed by finer meshes. Large problem sizes of up to 300 K simultaneous non-linear coupled equations have been solved on the NERSC supercomputers. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.
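BCYCLIC parallelizes block-tridiagonal solves; its serial scalar analogue is the Thomas algorithm, sketched below purely as the baseline such solvers replace (this is not the SIESTA code):

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Serial Thomas algorithm for a tridiagonal system.
    a: sub-diagonal (a[0] unused), b: main diagonal, c: super-diagonal
    (c[-1] unused), d: right-hand side. BCYCLIC parallelizes the block
    analogue of this forward-elimination / back-substitution recurrence."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]            # modified pivot
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Diagonally dominant example system.
a = np.array([0.0, 1.0, 1.0, 1.0, 1.0])
b = np.full(5, 4.0)
c = np.array([1.0, 1.0, 1.0, 1.0, 0.0])
d = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x = thomas_solve(a, b, c, d)
```

The strictly sequential dependence of each row on the previous one is what makes this recurrence hard to parallelize directly, motivating cyclic-reduction approaches like BCYCLIC.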
Optimal configurations of spatial scale for grid cell firing under noise and uncertainty
Towse, Benjamin W.; Barry, Caswell; Bush, Daniel; Burgess, Neil
2014-01-01
We examined the accuracy with which the location of an agent moving within an environment could be decoded from the simulated firing of systems of grid cells. Grid cells were modelled with Poisson spiking dynamics and organized into multiple ‘modules’ of cells, with firing patterns of similar spatial scale within modules and a wide range of spatial scales across modules. The number of grid cells per module, the spatial scaling factor between modules and the size of the environment were varied. Errors in decoded location can take two forms: small errors of precision and larger errors resulting from ambiguity in decoding periodic firing patterns. With enough cells per module (e.g. eight modules of 100 cells each) grid systems are highly robust to ambiguity errors, even over ranges much larger than the largest grid scale (e.g. over a 500 m range when the maximum grid scale is 264 cm). Results did not depend strongly on the precise organization of scales across modules (geometric, co-prime or random). However, independent spatial noise across modules, which would occur if modules receive independent spatial inputs and might increase with spatial uncertainty, dramatically degrades the performance of the grid system. This effect of spatial uncertainty can be mitigated by uniform expansion of grid scales. Thus, in the realistic regimes simulated here, the optimal overall scale for a grid system represents a trade-off between minimizing spatial uncertainty (requiring large scales) and maximizing precision (requiring small scales). Within this view, the temporary expansion of grid scales observed in novel environments may be an optimal response to increased spatial uncertainty induced by the unfamiliarity of the available spatial cues. PMID:24366144
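As a much simplified, one-dimensional illustration of this decoding problem, the sketch below simulates Poisson spiking from several modules of cosine-tuned cells and decodes position by maximum likelihood over a 500 cm range. The module scales, cell counts, firing rates, and time window are invented for the example and are not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D toy grid system: each module tiles space with a periodic (cosine)
# rate map; modules differ in spatial scale, cells within a module in phase.
scales = np.array([30.0, 42.0, 59.0, 82.0])   # hypothetical module scales (cm)
n_cells = 20                                   # cells per module (phase offsets)
peak_rate, dt = 10.0, 0.5                      # peak rate (Hz), window (s)

positions = np.linspace(0.0, 500.0, 2001)      # candidate locations (cm)

def rates(x):
    """Firing rate of every cell at position(s) x, shape (..., modules, cells)."""
    phases = np.linspace(0.0, 1.0, n_cells, endpoint=False)
    arg = np.asarray(x)[..., None, None] / scales[:, None] - phases
    return peak_rate * 0.5 * (1.0 + np.cos(2.0 * np.pi * arg))

true_x = 137.0
counts = rng.poisson(rates(true_x) * dt)       # observed spike counts

# Poisson log-likelihood of the counts at each candidate position, pooled
# across all modules; the argmax is the decoded location.
lam_grid = rates(positions) * dt               # (positions, modules, cells)
loglik = (counts * np.log(lam_grid + 1e-12) - lam_grid).sum(axis=(1, 2))
x_hat = positions[np.argmax(loglik)]
print(f"true {true_x:.1f} cm, decoded {x_hat:.1f} cm")
```

With incommensurate module scales the pooled likelihood has a single sharp peak, so ambiguity errors are rare even though every individual module is periodic.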
Modulation of a methane Bunsen flame by upstream perturbations
NASA Astrophysics Data System (ADS)
de Souza, T. Cardoso; Bastiaans, R. J. M.; De Goey, L. P. H.; Geurts, B. J.
2017-04-01
In this paper the effects of an upstream spatially periodic modulation acting on a turbulent Bunsen flame are investigated using direct numerical simulations of the Navier-Stokes equations coupled with the flamelet generated manifold (FGM) method to parameterise the chemistry. The premixed Bunsen flame is spatially agitated with a set of coherent large-scale structures of specific wave-number, K. The response of the premixed flame to the external modulation is characterised in terms of time-averaged properties, e.g. the average flame height ⟨H⟩ and the flame surface wrinkling ⟨W⟩. Results show that the flame response is notably selective to the size of the length scales used for agitation. For example, both flame quantities ⟨H⟩ and ⟨W⟩ present an optimal response, in comparison with an unmodulated flame, when the modulation scale is set to relatively low wave-numbers, 4π/L ≲ K ≲ 6π/L, where L is a characteristic scale. At the agitation scales where the optimal response is observed, the average flame height, ⟨H⟩, takes a clearly defined minimal value while the surface wrinkling, ⟨W⟩, presents an increase by more than a factor of 2 in comparison with the unmodulated reference case. Combined, these two response quantities indicate that there is an optimal scale for flame agitation and intensification of combustion rates in turbulent Bunsen flames.
Fault diagnosis of rolling element bearing using a new optimal scale morphology analysis method.
Yan, Xiaoan; Jia, Minping; Zhang, Wan; Zhu, Lin
2018-02-01
Periodic transient impulses are key indicators of rolling element bearing defects. Efficient acquisition of the impact impulses associated with the defects is of much concern for the precise detection of bearing defects. However, transient features of rolling element bearings are generally immersed in stochastic noise and harmonic interference. Therefore, in this paper, a new optimal scale morphology analysis method, named adaptive multiscale combination morphological filter-hat transform (AMCMFH), is proposed for rolling element bearing fault diagnosis, which can both reduce stochastic noise and preserve signal details. In this method, firstly, an adaptive selection strategy based on the feature energy factor (FEF) is introduced to determine the optimal structuring element (SE) scale of the multiscale combination morphological filter-hat transform (MCMFH). Subsequently, the MCMFH with the optimal SE scale is applied to obtain the impulse components from the bearing vibration signal. Finally, the bearing fault type is confirmed by extracting the defect frequency from the envelope spectrum of the impulse components. The validity of the proposed method is verified through simulated analysis and bearing vibration data derived from a laboratory bench. Results indicate that the proposed method has a good capability to recognize localized faults on rolling element bearings from vibration signals. The study supplies a novel technique for the detection of faulty bearings. Copyright © 2018. Published by Elsevier Ltd.
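The scale-selection idea can be sketched generically. The code below applies a combination morphological filter (average of opening-closing and closing-opening) at several structuring-element scales to a synthetic impulse train buried in harmonic interference and noise, and picks the scale whose residual is most impulsive. Kurtosis is used as a stand-in for the paper's feature energy factor, whose exact definition is not reproduced here; the signal and SE scales are invented:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Flat-SE grey-scale morphology on a 1-D signal (odd k keeps length fixed).
def erode(x, k):
    return sliding_window_view(np.pad(x, k // 2, mode="edge"), k).min(axis=1)

def dilate(x, k):
    return sliding_window_view(np.pad(x, k // 2, mode="edge"), k).max(axis=1)

def open_(x, k):  return dilate(erode(x, k), k)
def close_(x, k): return erode(dilate(x, k), k)

rng = np.random.default_rng(3)
n = 2000
t = np.arange(n)
sig = 0.3 * np.sin(2 * np.pi * t / 50) + 0.2 * rng.standard_normal(n)
sig[::200] += 2.0                      # periodic defect-like impulses

def impulse_score(x):
    """Kurtosis: large for impulsive signals (stand-in for the FEF)."""
    x = x - x.mean()
    return (x ** 4).mean() / (x ** 2).mean() ** 2

scores = {}
for k in (3, 5, 7, 9, 11, 15):
    cmf = 0.5 * (open_(close_(sig, k), k) + close_(open_(sig, k), k))
    residual = sig - cmf               # top-hat-like detail signal
    scores[k] = impulse_score(residual)
k_best = max(scores, key=scores.get)
print("selected SE scale:", k_best)
```

The residual at the selected scale retains the impulses while the smooth harmonic component is removed by the filter, which is the property the envelope-spectrum step then exploits.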
Stalmach, Małgorzata; Jodkowska, Maria; Tabak, Izabela; Oblacińska, Anna
2013-01-01
To examine the level of optimism in 13-year-olds and its relationship with self-reported health and family psychosocial and economic factors. Data from adolescents aged 13 years (n=605) and their parents, identified in the third stage of a prospective cohort study in 2008, were analysed. The level of optimism was examined using the short Polish version of the Wagnild and Young Resilience Scale. The relationships between family socio-economic factors, family functioning (parenting practices, satisfaction with family contacts), and optimism were examined. A multivariate logistic regression model was used to estimate the probability of a high level of optimism among 13-year-old girls and boys. Girls had a significantly higher level of optimism. Girls and boys with a positive attitude to life rated their health significantly better than their peers with a negative attitude. Univariate analyses showed that the level of optimism was significantly associated with the father's education level among girls and with the parents' professional status among boys. Family affluence reported by the children, positive parenting, and satisfaction with family contacts were significantly associated with attitude to life in both girls and boys. The level of optimism among boys was also related to the level of discipline exercised by the mother and the level of control exercised by both parents. For girls, multiple regression analyses showed that the father's positive parenting was a predictor of a high level of optimism [OR=0.45; CI(OR): 0.23-0.85; p=0.014]. In boys, the mother's positive parenting [OR=0.39; CI(OR): 0.19-0.82; p=0.013] and appropriate paternal control [OR=0.33; CI(OR): 0.13-0.84; p=0.020] were significant predictors of optimism. The very high self-reported health of the majority of young people with a positive attitude to life shows that optimism is a strong predictor of subjective health. Positive parenting practices and a good level of parental control have a significant impact on optimism in teenagers.
Dynamic Factorization in Large-Scale Optimization
1993-03-12
variable production charges, distribution via multiple modes, taxes, duties and duty drawback, and inventory charges. See Harrison, Arntzen, and Brown... "Decomposition," presented at CORS/TIMS/ORSA meeting, Vancouver, British Columbia, Canada, May. Harrison, T. P., Arntzen, B. C., and Brown, G. G., 1992.
Optimal Binarization of Gray-Scaled Digital Images via Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A. (Inventor); Klinko, Steven J. (Inventor)
2007-01-01
A technique for finding an optimal threshold for binarization of a gray scale image employs fuzzy reasoning. A triangular membership function is employed which is dependent on the degree to which the pixels in the image belong to either the foreground class or the background class. Use of a simplified linear fuzzy entropy factor function facilitates short execution times and use of membership values between 0.0 and 1.0 for improved accuracy. To improve accuracy further, the membership function employs lower and upper bound gray level limits that can vary from image to image and are selected to be equal to the minimum and the maximum gray levels, respectively, that are present in the image to be converted. To identify the optimal binarization threshold, an iterative process is employed in which different possible thresholds are tested and the one providing the minimum fuzzy entropy measure is selected.
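The iterative threshold search can be sketched as follows. The membership here is a simplified linear fall-off from each class's mean grey level, normalised by the grey-level range actually present in the image (its min and max, as the abstract describes); this is only an approximation of the patented membership function, and the test image is synthetic:

```python
import numpy as np

def fuzzy_binarize(img):
    """Pick the threshold minimizing a fuzzy-entropy measure (simplified sketch).

    A pixel's membership in its assigned class falls off linearly with its
    distance from the class mean, normalised by the image's dynamic range
    (lower/upper bounds = min and max grey levels present in the image).
    """
    g = img.astype(float).ravel()
    span = max(g.max() - g.min(), 1e-9)
    best_t, best_h = None, np.inf
    for t in np.unique(g)[:-1]:              # candidate thresholds
        fg, bg = g[g > t], g[g <= t]
        mu = np.empty_like(g)
        mu[g > t] = 1.0 - np.abs(fg - fg.mean()) / span
        mu[g <= t] = 1.0 - np.abs(bg - bg.mean()) / span
        mu = np.clip(mu, 1e-9, 1.0 - 1e-9)
        # Shannon-type fuzzy entropy: low when the partition is crisp.
        h = -(mu * np.log(mu) + (1 - mu) * np.log(1 - mu)).mean()
        if h < best_h:
            best_t, best_h = t, h
    return best_t

# Bimodal toy "image": dark background near 50, bright foreground near 200.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(50, 10, 500), rng.normal(200, 10, 500)])
t = fuzzy_binarize(img)
print("threshold:", t)
```

The entropy is smallest when both classes sit tightly around their means, so the minimum lands in the valley between the two modes.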
Decentralized Optimal Dispatch of Photovoltaic Inverters in Residential Distribution Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Dhople, Sairaj V.; Johnson, Brian B.
Summary form only given. Decentralized methods for computing optimal real and reactive power setpoints for residential photovoltaic (PV) inverters are developed in this paper. It is known that conventional PV inverter controllers, which are designed to extract maximum power at unity power factor, cannot address secondary performance objectives such as voltage regulation and network loss minimization. Optimal power flow techniques can be utilized to select which inverters will provide ancillary services, and to compute their optimal real and reactive power setpoints according to well-defined performance criteria and economic objectives. Leveraging advances in sparsity-promoting regularization techniques and semidefinite relaxation, this paper shows how such problems can be solved with reduced computational burden and optimality guarantees. To enable large-scale implementation, a novel algorithmic framework is introduced - based on the so-called alternating direction method of multipliers - by which optimal power flow-type problems in this setting can be systematically decomposed into sub-problems that can be solved in a decentralized fashion by the utility and customer-owned PV systems with limited exchanges of information. Since the computational burden is shared among multiple devices and the requirement of all-to-all communication can be circumvented, the proposed optimization approach scales favorably to large distribution networks.
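As an illustration of the alternating direction method of multipliers (ADMM) used for the decomposition, the sketch below solves a toy consensus least-squares problem across three agents, each holding private data and exchanging only the consensus variable. The data, agent count, and penalty parameter are invented and unrelated to the actual OPF formulation:

```python
import numpy as np

rng = np.random.default_rng(7)
# Three agents, each with private data (A_i, b_i); together they solve the
# global problem min_x sum_i ||A_i x - b_i||^2 without pooling their data.
A = [rng.standard_normal((20, 3)) for _ in range(3)]
b = [Ai @ np.array([1.0, -2.0, 0.5]) + 0.05 * rng.standard_normal(20) for Ai in A]

rho = 1.0                                   # ADMM penalty parameter (assumed)
x = [np.zeros(3) for _ in range(3)]         # local copies
u = [np.zeros(3) for _ in range(3)]         # scaled dual variables
z = np.zeros(3)                             # consensus variable
for _ in range(200):
    # Local x-updates: each agent solves a small regularized least squares
    # (these would run in parallel on the agents).
    x = [np.linalg.solve(Ai.T @ Ai + rho * np.eye(3),
                         Ai.T @ bi + rho * (z - ui))
         for Ai, bi, ui in zip(A, b, u)]
    # Coordination step: only x_i + u_i is exchanged, then averaged.
    z = np.mean([xi + ui for xi, ui in zip(x, u)], axis=0)
    u = [ui + xi - z for ui, xi in zip(u, x)]

# Centralized solution for comparison.
x_direct = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print("consensus:", z, "direct:", x_direct)
```

The consensus iterate converges to the centralized solution, which is the property that lets the OPF-type problem be split between the utility and the customer-owned PV systems.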
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Almeida, Valmor F; Ye, Xianggui; Cui, Shengting
2013-01-01
A comprehensive molecular dynamics simulation study of n-alkanes using the Optimized Potential for Liquid Simulation-All Atoms (OPLS-AA) force field at ambient conditions has been performed. Our results indicate that while simulations with the OPLS-AA force field accurately predict the liquid state mass density for n-alkanes with carbon number equal to or less than 10, for n-alkanes with carbon number equal to or exceeding 12, the OPLS-AA force field with the standard scaling factor for the 1-4 intramolecular Van der Waals and electrostatic interactions gives rise to a quasi-crystalline structure. We found that accurate predictions of the liquid state properties are obtained by successively reducing the aforementioned scaling factor for each increase of the carbon number beyond n-dodecane. To better understand the effects of reducing the scaling factor, we analyzed the variation of the torsion potential profile with the scaling factor, and the corresponding impact on the gauche-trans conformer distribution, heat of vaporization, melting point, and self-diffusion coefficient for n-dodecane. This relatively simple procedure thus allows for more accurate predictions of the thermo-physical properties of longer n-alkanes.
Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed
NASA Astrophysics Data System (ADS)
Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi
2010-05-01
To estimate forest stand-scale water use, we assessed how sample sizes affect confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined potential errors associated with sample sizes in E, AS_stand, and JS by using the original AS_tree and Fd data sets. Consequently, we defined optimal sample sizes of 10 and 15 for AS_stand and JS estimates, respectively, in the 20 × 20 m plot. Sample sizes greater than the optimal sample sizes did not decrease potential errors. The optimal sample sizes for JS changed according to plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample sizes for AS_stand did not. Likewise, the optimal sample sizes for JS did not change under different vapor pressure deficit conditions. In terms of E estimates, these results suggest that the tree-to-tree variations in Fd vary among different plots, and that plot size to capture tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
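A Monte Carlo assessment of sampling error of this kind can be sketched as follows. The lognormal spread of Fd values, the resampling scheme (bootstrap-style, with replacement, for simplicity), and the 95th-percentile error criterion are assumptions for illustration, not the study's data or protocol:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical plot: sap flux densities (Fd) for all 58 trees; the lognormal
# spread stands in for the measured tree-to-tree variation.
fd_all = rng.lognormal(mean=3.0, sigma=0.4, size=58)
true_mean = fd_all.mean()

def potential_error(n, reps=5000):
    """95th-percentile relative error of mean Fd when only n trees are sampled."""
    idx = rng.integers(0, fd_all.size, size=(reps, n))   # with replacement
    est = fd_all[idx].mean(axis=1)
    return np.percentile(np.abs(est - true_mean) / true_mean, 95)

errors = {n: potential_error(n) for n in (5, 10, 15, 20, 30, 58)}
for n, e in errors.items():
    print(f"n={n:2d}  95th-percentile relative error = {e:.3f}")
```

Plotting the error against n shows the diminishing-returns curve from which an "optimal" sample size (beyond which extra trees barely reduce the potential error) can be read off.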
Cognitive Vulnerabilities and Depression in Young Adults: An ROC Curves Analysis.
Balsamo, Michela; Imperatori, Claudio; Sergi, Maria Rita; Belvederi Murri, Martino; Continisio, Massimo; Tamburello, Antonino; Innamorati, Marco; Saggino, Aristide
2013-01-01
Objectives and Methods. The aim of the present study was to evaluate, by means of receiver operating characteristic (ROC) curves, whether cognitive vulnerabilities (CV), as measured by three well-known instruments (the Beck Hopelessness Scale, BHS; the Life Orientation Test-Revised, LOT-R; and the Attitudes Toward Self-Revised, ATS-R), independently discriminate between subjects with different severities of depression. Participants were 467 young adults (336 females and 131 males), recruited from the general population. The subjects were also administered the Beck Depression Inventory-II (BDI-II). Results. Four first-order (BHS Optimism/Low Standard; BHS Pessimism; Generalized Self-Criticism; and LOT Optimism) and two higher-order factors (Pessimism/Negative Attitudes Toward Self, Optimism) were extracted using Principal Axis Factoring analysis. Although all first-order and second-order factors were able to discriminate individuals with different depression severities, the Pessimism factor had the best performance in discriminating individuals with moderate to severe depression from those with lower depression severity. Conclusion. In the screening of young adults at risk of depression, clinicians have to pay particular attention to the expression of pessimism about the future.
Dynamic Factorization in Large-Scale Optimization
1994-01-01
and variable production charges, distribution via multiple modes, taxes, duties and duty drawback, and inventory charges. See Harrison, Arntzen and... "Capital allocation and project selection via decomposition," presented at CORS/TIMS/ORSA meeting, Vancouver, B.C. (1989). T.P. Harrison, B.C. Arntzen and
A modified priority list-based MILP method for solving large-scale unit commitment problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ke, Xinda; Lu, Ning; Wu, Di
This paper studies the typical pattern of unit commitment (UC) results in terms of generator cost and capacity. A method is then proposed that combines a modified priority list technique with mixed integer linear programming (MILP) for the UC problem. The proposed method consists of two steps. At the first step, a portion of the generators are predetermined to be online or offline within a look-ahead period (e.g., a week), based on the demand curve and generator priority order. For the generators whose on/off status is predetermined, at the second step, the corresponding binary variables are removed from the UC MILP problem over the operational planning horizon (e.g., 24 hours). With a number of binary variables removed, the resulting problem can be solved much faster using off-the-shelf MILP solvers based on the branch-and-bound algorithm. In the modified priority list method, scale factors are designed to adjust the tradeoff between solution speed and level of optimality. It is found that the proposed method can significantly speed up the UC problem with minor compromise in optimality by selecting appropriate scale factors.
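The first, binary-fixing step can be sketched as below. The fleet, the demand envelope, and the role played by the scale factor (here a simple margin on peak demand) are illustrative assumptions, not the paper's formulation:

```python
from dataclasses import dataclass

@dataclass
class Unit:
    name: str
    capacity: float      # MW
    avg_cost: float      # $/MWh at full load (proxy for priority order)

# Hypothetical fleet and a one-week demand envelope (illustrative numbers).
units = [
    Unit("coal1", 400, 20.0), Unit("coal2", 350, 22.0),
    Unit("ccgt1", 300, 35.0), Unit("ccgt2", 250, 38.0),
    Unit("gt1",   100, 70.0), Unit("gt2",    80, 75.0),
]
min_demand, peak_demand = 600.0, 1200.0
margin = 1.1   # "scale factor" trading optimality for speed (assumed role)

def presolve_status(units, min_d, peak_d, margin):
    """Fix cheap base-load units ON and expensive never-needed units OFF.

    Units are stacked in priority (cost) order: capacity needed even at the
    demand trough is fixed on; capacity above the scaled peak can never be
    needed and is fixed off. Everything in between keeps its binary variable
    for the MILP over the planning horizon.
    """
    order = sorted(units, key=lambda u: u.avg_cost)
    status, cum = {}, 0.0
    for u in order:
        if cum + u.capacity <= min_d:
            status[u.name] = "on"        # binary removed, fixed to 1
        elif cum >= peak_d * margin:
            status[u.name] = "off"       # binary removed, fixed to 0
        else:
            status[u.name] = "free"      # stays in the MILP
        cum += u.capacity
    return status

status = presolve_status(units, min_demand, peak_demand, margin)
print(status)
```

Loosening the margin fixes more units and speeds up the branch-and-bound search, at the risk of excluding the true optimum, which is exactly the speed/optimality trade-off the abstract attributes to the scale factors.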
Zawada, James F; Yin, Gang; Steiner, Alexander R; Yang, Junhao; Naresh, Alpana; Roy, Sushmita M; Gold, Daniel S; Heinsohn, Henry G; Murray, Christopher J
2011-01-01
Engineering robust protein production and purification of correctly folded biotherapeutic proteins in cell-based systems is often challenging due to the requirements for maintaining complex cellular networks for cell viability and the need to develop associated downstream processes that reproducibly yield biopharmaceutical products with high product quality. Here, we present an alternative Escherichia coli-based open cell-free synthesis (OCFS) system that is optimized for predictable high-yield protein synthesis and folding at any scale with straightforward downstream purification processes. We describe how the linear scalability of OCFS allows rapid process optimization of parameters affecting extract activation, gene sequence optimization, and redox folding conditions for disulfide bond formation at microliter scales. Efficient and predictable high-level protein production can then be achieved using batch processes in standard bioreactors. We show how a fully bioactive protein produced by OCFS from optimized frozen extract can be purified directly using a streamlined purification process that yields a biologically active cytokine, human granulocyte-macrophage colony-stimulating factor, produced at titers of 700 mg/L in 10 h. These results represent a milestone for in vitro protein synthesis, with potential for the cGMP production of disulfide-bonded biotherapeutic proteins. Biotechnol. Bioeng. 2011; 108:1570–1578. © 2011 Wiley Periodicals, Inc. PMID:21337337
On Efficient Multigrid Methods for Materials Processing Flows with Small Particles
NASA Technical Reports Server (NTRS)
Thomas, James (Technical Monitor); Diskin, Boris; Harik, VasylMichael
2004-01-01
Multiscale modeling of materials requires simulations of multiple levels of structural hierarchy. The computational efficiency of numerical methods becomes a critical factor for simulating large physical systems with highly disparate length scales. Multigrid methods are known for their superior efficiency in representing/resolving different levels of physical detail. The efficiency is achieved by interactively employing different discretizations on different scales (grids). To assist optimization of manufacturing conditions for materials processing with numerous particles (e.g., dispersion of particles, controlling flow viscosity and clusters), a new multigrid algorithm has been developed for multiscale modeling of flows with small particles that have various length scales. The optimal efficiency of the algorithm is crucial for accurate predictions of the effect of processing conditions (e.g., pressure and velocity gradients) on the local flow fields that control the formation of various microstructures or clusters.
Ndao, Adama; Sellamuthu, Balasubramanian; Gnepe, Jean R; Tyagi, Rajeshwar D; Valero, Jose R
2017-09-02
Pilot-scale Bacillus thuringiensis based biopesticide production (2000 L bioreactor) was conducted using starch industry wastewater (SIW) as a raw material, with optimized operational parameters obtained in 15 L and 150 L fermenters. In the pilot-scale fermentation process, the oxygen transfer rate is a major limiting factor for high product yield. Thus, the volumetric mass transfer coefficient (KLa) remains a tool to determine the oxygen transfer capacity [oxygen utilization rate (OUR) and oxygen transfer rate (OTR)] needed to obtain a better bacterial growth rate and entomotoxicity in new bioreactor process optimization and scale-up. The results demonstrated that the oxygen transfer rate in the 2000 L bioreactor was better than in the 15 L and 150 L fermenters. The better oxygen transfer in the 2000 L bioreactor augmented the bacterial growth [total cell (TC) and viable spore count (SC)] and delta-endotoxin yield. A stable biopesticide formulation was prepared for field use, and its entomotoxicity was evaluated. These results corroborate the feasibility of industrial-scale operation of biopesticide production using starch industry wastewater as raw material.
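KLa is commonly estimated by the dynamic gassing-out method: after re-aeration, dissolved oxygen follows C(t) = C* - (C* - C0)·exp(-KLa·t), so KLa is the negative slope of ln(C* - C) versus time. The sketch below fits it from a synthetic dissolved-oxygen trace; all numbers are invented, and this is a generic method, not necessarily the one used in the study:

```python
import numpy as np

# Synthetic dynamic gassing-out trace for a stirred vessel.
c_star, c0, kla_true = 7.5, 1.0, 0.02      # saturation and initial DO (mg/L), KLa (1/s)
t = np.arange(0.0, 120.0, 5.0)             # sampling times (s)
c = c_star - (c_star - c0) * np.exp(-kla_true * t)

# Linear fit of ln(C* - C) vs t; KLa is the negative slope.
slope, _ = np.polyfit(t, np.log(c_star - c), 1)
kla_est = -slope
otr_max = kla_est * c_star                 # OTR ceiling at zero dissolved O2
print(f"KLa = {kla_est:.4f} 1/s, max OTR = {otr_max:.3f} mg/(L*s)")
```

Comparing KLa measured this way across the 15 L, 150 L, and 2000 L vessels is how the oxygen transfer capacities of the scales can be ranked.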
Geospatial Optimization of Siting Large-Scale Solar Projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macknick, Jordan; Quinby, Ted; Caulfield, Emmet
2014-03-01
Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
NASA Technical Reports Server (NTRS)
1995-01-01
The design of a High-Speed Civil Transport (HSCT) air-breathing propulsion system for multimission, variable-cycle operations was successfully optimized through a soft coupling of the engine performance analyzer NASA Engine Performance Program (NEPP) to a multidisciplinary optimization tool, COMETBOARDS, that was developed at the NASA Lewis Research Center. The design optimization of this engine was cast as a nonlinear optimization problem, with engine thrust as the merit function and the bypass ratios, r-values of fans, fuel flow, and other factors as important active design variables. Constraints were specified on factors including the maximum speed of the compressors, the positive surge margins for the compressors with specified safety factors, the discharge temperature, the pressure ratios, and the mixer extreme Mach number. Solving the problem by using the most reliable optimization algorithm available in COMETBOARDS would provide feasible optimum results only for a portion of the aircraft flight regime because of the large number of mission points (defined by altitudes, Mach numbers, flow rates, and other factors), diverse constraint types, and overall poor conditioning of the design space. Only the cascade optimization strategy of COMETBOARDS, which was devised especially for difficult multidisciplinary applications, could successfully solve a number of engine design problems for their flight regimes. Furthermore, the cascade strategy converged to the same global optimum solution even when it was initiated from different design points. Multiple optimizers in a specified sequence, pseudorandom damping, and reduction of the design space distortion via a global scaling scheme are some of the key features of the cascade strategy. A COMETBOARDS solution for an HSCT engine concept (Mach-2.4 mixed-flow turbofan), along with its configuration, is shown; the optimum thrust is normalized with respect to NEPP results. COMETBOARDS added value to the design optimization of the HSCT engine.
Kozica, Samantha L; Teede, Helena J; Harrison, Cheryce L; Klein, Ruth; Lombard, Catherine B
2016-01-01
The prevalence of obesity in rural and remote areas is elevated in comparison to urban populations, highlighting the need for interventions targeting obesity prevention in these settings. Implementing evidence-based obesity prevention programs is challenging. This study aimed to investigate factors influencing the implementation of obesity prevention programs, including adoption, program delivery, community uptake, and continuation, specifically within rural settings. Nested within a large-scale randomized controlled trial, a qualitative exploratory approach was adopted, with purposive sampling techniques utilized, to recruit stakeholders from 41 small rural towns in Australia. In-depth semistructured interviews were conducted with clinical health professionals, health service managers, and local government employees. Open coding was completed independently by 2 investigators and thematic analysis undertaken. In-depth interviews revealed that obesity prevention programs were valued by the rural workforce. Program implementation is influenced by interrelated factors across: (1) contextual factors and (2) organizational capacity. Key recommendations to manage the challenges of implementing evidence-based programs focused on reducing program delivery costs, aided by the provision of a suite of implementation and evaluation resources. Informing the scale-up of future prevention programs, stakeholders highlighted the need to build local rural capacity through developing supportive university partnerships, generating local program ownership and promoting active feedback to all program partners. We demonstrate that the rural workforce places a high value on obesity prevention programs. Our results inform the future scale-up of obesity prevention programs, providing an improved understanding of strategies to optimize implementation of evidence-based prevention programs. © 2015 National Rural Health Association.
Optimization of Supersonic Transport Trajectories
NASA Technical Reports Server (NTRS)
Ardema, Mark D.; Windhorst, Robert; Phillips, James
1998-01-01
This paper develops a near-optimal guidance law for generating minimum fuel, time, or cost fixed-range trajectories for supersonic transport aircraft. The approach uses a choice of new state variables along with singular perturbation techniques to time-scale decouple the dynamic equations into multiple equations of single order (second order for the fast dynamics). Application of the maximum principle to each of the decoupled equations, as opposed to application to the original coupled equations, avoids the two point boundary value problem and transforms the problem from one of a functional optimization to one of multiple function optimizations. It is shown that such an approach produces well known aircraft performance results such as maximizing the Breguet factor for minimum fuel consumption and the energy climb path. Furthermore, the new state variables produce a consistent calculation of flight path angle along the trajectory, eliminating one of the deficiencies in the traditional energy state approximation. In addition, jumps in the energy climb path are smoothed out by integration of the original dynamic equations at constant load factor. Numerical results performed for a supersonic transport design show that a pushover dive followed by a pullout at nominal load factors are sufficient maneuvers to smooth the jump.
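The Breguet factor referred to above comes from the classic range equation. For cruise at constant speed V, lift-to-drag ratio L/D, and thrust-specific fuel consumption c, with initial and final weights W_i and W_f, the range is

```latex
R = \frac{V}{c}\,\frac{L}{D}\,\ln\frac{W_i}{W_f}
```

so minimizing fuel burned over a fixed range amounts to flying where the factor V(L/D)/c is largest, which is the condition the time-scale-decoupled outer solution recovers.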
Veronese, Guido; Castiglioni, Marco; Tombolani, Marco; Said, Mahmud
2012-09-01
This study aimed to explore optimism, perceived happiness and life satisfaction in a group of Palestinian children living in urban districts, rural areas and a refugee camp in the West Bank, as well as in a city in Israel. Three self-report instruments were administered to a convenience sample of school-age children (n = 226; 8-12 years old): the Youth Life Orientation Test (YLOT), the Subjective Happiness Scale (SHS) and the Face Scale (FS). The scores were analyzed using ANOVAs and correlation tests (Pearson's r). Gender and age differences were explored. Optimism, life satisfaction and perceived happiness characterize the entire group of Palestinian children in general. Very little difference was found as a function of gender. Palestinian children seem to enjoy a satisfactory quality of life with regard to optimism, satisfaction and perceived happiness. We hypothesize that these factors may reinforce resilience and positive adjustment to trauma in children. The implications for clinical psychology are discussed. © 2011 The Authors. Scandinavian Journal of Caring Sciences © 2011 Nordic College of Caring Science.
Population-based validation of a German version of the Brief Resilience Scale
Wenzel, Mario; Stieglitz, Rolf-Dieter; Kunzler, Angela; Bagusat, Christiana; Helmreich, Isabella; Gerlicher, Anna; Kampa, Miriam; Kubiak, Thomas; Kalisch, Raffael; Lieb, Klaus; Tüscher, Oliver
2018-01-01
Smith and colleagues developed the Brief Resilience Scale (BRS) to assess the individual ability to recover from stress despite significant adversity. This study aimed to validate the German version of the BRS. We used data from a population-based (sample 1: n = 1,481) and a representative (sample 2: n = 1,128) sample of participants from the German general population (age ≥ 18) to assess reliability and validity. Confirmatory factor analyses (CFA) were conducted to compare one- and two-factorial models from previous studies with a method-factor model which especially accounts for the wording of the items. Reliability was analyzed. Convergent validity was measured by correlating BRS scores with mental health measures, coping, social support, and optimism. Reliability was good (α = .85, ω = .85 for both samples). The method-factor model showed excellent model fit (sample 1: χ2/df = 7.544; RMSEA = .07; CFI = .99; SRMR = .02; sample 2: χ2/df = 1.166; RMSEA = .01; CFI = 1.00; SRMR = .01) which was significantly better than the one-factor model (Δχ2(4) = 172.71, p < .001) or the two-factor model (Δχ2(3) = 31.16, p < .001). The BRS was positively correlated with well-being, social support, optimism, and the coping strategies active coping, positive reframing, acceptance, and humor. It was negatively correlated with somatic symptoms, anxiety and insomnia, social dysfunction, depression, and the coping strategies religion, denial, venting, substance use, and self-blame. To conclude, our results provide evidence for the reliability and validity of the German adaptation of the BRS as well as the unidimensional structure of the scale once method effects are accounted for. PMID:29438435
Skoien, Wade; Page, Katie; Parsonage, William; Ashover, Sarah; Milburn, Tanya; Cullen, Louise
2016-10-12
The translation of healthcare research into practice is typically challenging and limited in effectiveness. The Theoretical Domains Framework (TDF) identifies 12 domains of behaviour determinants which can be used to understand the principles of behavioural change, a key factor influencing implementation. The Accelerated Chest pain Risk Evaluation (ACRE) project has successfully translated research into practice, by implementing an intervention to improve the assessment of low to intermediate risk patients presenting to emergency departments (EDs) with chest pain. The aims of this paper are to describe use of the TDF to determine which factors successfully influenced implementation and to describe use of the TDF as a tool to evaluate implementation efforts and which domains are most relevant to successful implementation. A 30-item questionnaire targeting clinicians was developed using the TDF as a guide. Questions encompassed ten of the domains of the TDF: Knowledge; Skills; Social/professional role and identity; Beliefs about capabilities; Optimism; Beliefs about consequences; Intentions; Memory, attention and decision processes; Environmental context and resources; and Social influences. Sixty-three of 176 stakeholders (36 %) responded to the questionnaire. Responses for all scales showed that respondents were highly favourable to all aspects of the implementation. Scales with the highest mean responses were Intentions, Knowledge, and Optimism, suggesting that initial education and awareness strategies around the ACRE project were effective. Scales with the lowest mean responses were Environmental context and resources, and Social influences, perhaps highlighting that implementation planning could have benefitted from further consideration of the factors underlying these scales. The ACRE project was successful, and therefore, a perfect case study for understanding factors which drive implementation success. 
The overwhelmingly positive responses suggest that the programme was successful and that each of these domains was likely important for the implementation. However, the lack of variance in the responses prevented us from concluding which factors were most influential in driving the success of the implementation. The TDF offers a useful framework for conceptualising and evaluating factors that impact implementation success. However, its broad scope makes it necessary to tailor the framework to the evaluation of specific projects.
Chen, Yu-Cheng; Tsai, Perng-Jy; Mou, Jin-Luh
2008-07-15
This study is the first to use the Taguchi experimental design to identify the optimal operating condition for reducing polychlorinated dibenzo-p-dioxin and dibenzofuran (PCDD/F) formation during the iron ore sintering process. Four operating parameters, including the water content (Wc; range = 6.0-7.0 wt%), suction pressure (Ps; range = 1000-1400 mmH2O), bed height (Hb; range = 500-600 mm), and type of hearth layer (sinter, hematite, or limonite), were selected for conducting experiments in a pilot scale sinter pot to simulate various sintering operating conditions of a real-scale sinter plant. We found that the resultant optimal combination (Wc = 6.5 wt%, Hb = 500 mm, Ps = 1000 mmH2O, and hearth layer = hematite) could decrease the emission factor of total PCDD/Fs (total EF(PCDD/Fs)) by up to 62.8% by reference to the current operating condition of the real-scale sinter plant (Wc = 6.5 wt%, Hb = 550 mm, Ps = 1200 mmH2O, and hearth layer = sinter). Through ANOVA, we found that Wc was the most significant parameter in determining total EF(PCDD/Fs) (accounting for 74.7% of the total contribution of the four selected parameters). The resultant optimal combination also slightly enhanced both sinter productivity and sinter strength (30.3 t/m2/day and 72.4%, respectively) by reference to those obtained from the reference operating condition (29.9 t/m2/day and 72.2%, respectively). The above results further confirm the applicability of the obtained optimal combination for real-scale sinter production without compromising sinter productivity or sinter strength.
Chou, Ann F; Graber, Christopher J; Zhang, Yue; Jones, Makoto; Goetz, Matthew Bidwell; Madaras-Kelly, Karl; Samore, Matthew; Glassman, Peter A
2018-06-04
Inappropriate antibiotic use poses a serious threat to patient safety. Antimicrobial stewardship programmes (ASPs) may optimize antimicrobial use and improve patient outcomes, but their implementation remains an organizational challenge. Using the Promoting Action on Research Implementation in Health Services (PARiHS) framework, this study aimed to identify organizational factors that may facilitate ASP design, development and implementation. Among 130 Veterans Affairs facilities that offered acute care, we classified organizational variables supporting antimicrobial stewardship activities into three PARiHS domains: evidence to encompass sources of knowledge; contexts to translate evidence into practice; and facilitation to enhance the implementation process. We conducted a series of exploratory factor analyses to identify conceptually linked factor scales. Cronbach's alphas were calculated. Variables with large uniqueness values were left as single factors. We identified 32 factors, including six constructs derived from factor analyses under the three PARiHS domains. In the evidence domain, four factors described guidelines and clinical pathways. The context domain was broken into three main categories: (i) receptive context (15 factors describing resources, affiliations/networks, formalized policies/practices, decision-making, receptiveness to change); (ii) team functioning (1 factor); and (iii) evaluation/feedback (5 factors). Within facilitation, two factors described facilitator roles and tasks and five captured skills and training. We mapped survey data onto PARiHS domains to identify factors that may be adapted to facilitate ASP uptake. Our model encompasses mostly mutable factors whose relationships with performance outcomes may be explored to optimize antimicrobial use. Our framework also provides an analytical model for determining whether leveraging existing organizational processes can potentially optimize ASP performance.
Psychometric properties of Connor-Davidson Resilience Scale in a Spanish sample of entrepreneurs.
Manzano-García, Guadalupe; Ayala Calvo, Juan Carlos
2013-01-01
The literature regarding entrepreneurship suggests that the resilience of entrepreneurs may help to explain entrepreneurial success, but there is no resilience measure widely accepted by researchers. This study analyzes the psychometric properties of the Connor-Davidson Resilience Scale (CD-RISC) in a sample of Spanish entrepreneurs. A telephone survey research method was used. The participants were entrepreneurs operating in the business services sector. Interviewers telephoned a total of 900 entrepreneurs, of whom 783 provided usable questionnaires. The CD-RISC was used as the data collection instrument. We used principal component analysis and confirmatory factor analysis to determine the factor structure of the CD-RISC. Confirmatory factor analysis failed to verify the original five-factor structure of the CD-RISC, whereas principal component analysis yielded a three-factor structure of resilience (hardiness, resourcefulness, and optimism). In this research, 47.48% of the total variance was accounted for by the three factors, and the obtained factor structure was verified through confirmatory factor analysis. The CD-RISC has been shown to be a reliable and valid tool for measuring entrepreneurs' resilience.
Elipe, Paz; Mora-Merchán, Joaquín A; Nacimiento, Lydia
2017-08-01
Cyberbullying is a phenomenon with important adverse consequences for victims. The emotional impact of this phenomenon has been well established. However, there is to date no instrument with good psychometric properties for assessing that impact. The objective of this study was to develop and test the psychometric properties of an instrument to assess the emotional impact of cyberbullying: the "Cybervictimization Emotional Impact Scale, CVEIS." The sample included 1,016 Compulsory Secondary Education students (52.9 percent female) aged between 12 and 18 (M = 13.86, SD = 1.33) from three schools in southern Spain. The study used confirmatory factor analyses to test the structure of the questionnaire and the robustness of the scale. Internal consistency was also tested. The results supported the suitability of a three-factor model: active, depressed, and annoyed. This model showed an optimal fit, which was better than that of its competing models. It also demonstrated strong invariance across cybervictims and non-cybervictims and across gender. The internal consistency of each factor, and of the total scale, was also appropriate. The article concludes by discussing research and practical implications of the scale.
Grain Yield Observations Constrain Cropland CO2 Fluxes Over Europe
NASA Astrophysics Data System (ADS)
Combe, M.; de Wit, A. J. W.; Vilà-Guerau de Arellano, J.; van der Molen, M. K.; Magliulo, V.; Peters, W.
2017-12-01
Carbon exchange over croplands plays an important role in the European carbon cycle over daily to seasonal time scales. A better description of this exchange in terrestrial biosphere models—most of which currently treat crops as unmanaged grasslands—is needed to improve atmospheric CO2 simulations. In the framework we present here, we model gross European cropland CO2 fluxes with a crop growth model constrained by grain yield observations. Our approach follows a two-step procedure. In the first step, we calculate day-to-day crop carbon fluxes and pools with the WOrld FOod STudies (WOFOST) model. A scaling factor of crop growth is optimized regionally by minimizing the final grain carbon pool difference to crop yield observations from the Statistical Office of the European Union. In a second step, we re-run our WOFOST model for the full European 25 × 25 km gridded domain using the optimized scaling factors. We combine our optimized crop CO2 fluxes with a simple soil respiration model to obtain the net cropland CO2 exchange. We assess our model's ability to represent cropland CO2 exchange using 40 years of observations at seven European FluxNet sites and compare it with carbon fluxes produced by a typical terrestrial biosphere model. We conclude that our new model framework provides a more realistic and strongly observation-driven estimate of carbon exchange over European croplands. Its products will be made available to the scientific community through the ICOS Carbon Portal and serve as a new cropland component in the CarbonTracker Europe inverse model.
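The regional calibration in the framework's first step reduces, in its simplest form, to a one-parameter least-squares fit of a multiplicative crop-growth scaling factor against observed yields. A minimal sketch with invented numbers (the simulated and observed values below are illustrative, not WOFOST or Eurostat data):

```python
import numpy as np

# Hypothetical simulated final grain-carbon pools (unscaled model run)
# and corresponding yield observations for several regions; units are
# illustrative only.
simulated = np.array([4.2, 3.8, 5.1, 4.6])
observed  = np.array([3.9, 3.5, 4.9, 4.2])

# One multiplicative scaling factor s; the least-squares solution of
# min_s sum((s * simulated - observed)^2) is closed-form (set the
# derivative of the quadratic misfit to zero):
s = (simulated @ observed) / (simulated @ simulated)

# Step two of the framework would re-run the crop model with this
# factor; here we simply rescale the first-pass output.
rescaled = s * simulated
```

The closed form only applies to this single-factor linear case; a full crop model would require an iterative minimizer, but the calibration idea is the same.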
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
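The kernel estimator's central tuning problem, choosing the scaling factor (bandwidth) from the sample alone, can be illustrated with a Gaussian kernel. This sketch uses Silverman's rule of thumb as an automatic stand-in for the paper's own selection algorithm, which is not reproduced here:

```python
import numpy as np

def silverman_bandwidth(sample):
    """Rule-of-thumb scaling factor for a Gaussian kernel:
    h = 1.06 * sigma * n^(-1/5). An automatic, data-driven choice,
    standing in for the selection algorithm discussed in the abstract."""
    n = len(sample)
    return 1.06 * np.std(sample) * n ** (-1 / 5)

def kernel_density(sample, x, h):
    """Gaussian kernel density estimate evaluated at points x."""
    u = (x[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(sample) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, size=500)   # toy data, standard normal
h = silverman_bandwidth(sample)
grid = np.linspace(-4.0, 4.0, 81)
fhat = kernel_density(sample, grid, h)
```

Too small an h produces a spiky estimate, too large an h oversmooths; the rule above balances the two for roughly Gaussian data.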
Development of weighting value for ecodrainage implementation assessment criteria
NASA Astrophysics Data System (ADS)
Andajani, S.; Hidayat, D. P. A.; Yuwono, B. E.
2018-01-01
This research aims to generate a weighting value for each factor and to identify the most influential factors in the implementation of the ecodrain concept, using loading factors and Cronbach's alpha. Drainage problems, especially in urban areas, are becoming more complex and need to be addressed as soon as possible. Flood and drought problems cannot be solved by the conventional drainage paradigm (draining runoff as quickly as possible to the nearest drainage area). A new drainage paradigm based on an environmental approach, called "ecodrain", can address both flood and drought problems. For optimal results, ecodrain should be applied from the smallest (domestic) scale up to the largest (city) scale. It is therefore necessary to assess drainage conditions based on an environmental approach. This research operationalizes the ecodrain concept through guidelines consisting of parameters and assessment criteria, generating 2 variables, 7 indicators, and 63 key factors from previous research and related regulations. The research concludes that the most influential indicator for the technical management variable is the storage system, while for the non-technical management variable it is the government's role.
Knowledge-Based Methods To Train and Optimize Virtual Screening Ensembles
2016-01-01
Ensemble docking can be a successful virtual screening technique that addresses the innate conformational heterogeneity of macromolecular drug targets. Yet, lacking a method to identify a subset of conformational states that effectively segregates active and inactive small molecules, ensemble docking may result in the recommendation of a large number of false positives. Here, three knowledge-based methods that construct structural ensembles for virtual screening are presented. Each method selects ensembles by optimizing an objective function calculated using the receiver operating characteristic (ROC) curve: either the area under the ROC curve (AUC) or a ROC enrichment factor (EF). As the number of receptor conformations, N, becomes large, the methods differ in their asymptotic scaling. Given a set of small molecules with known activities and a collection of target conformations, the most resource intense method is guaranteed to find the optimal ensemble but scales as O(2^N). A recursive approximation to the optimal solution scales as O(N^2), and a more severe approximation leads to a faster method that scales linearly, O(N). The techniques are generally applicable to any system, and we demonstrate their effectiveness on the androgen nuclear hormone receptor (AR), cyclin-dependent kinase 2 (CDK2), and the peroxisome proliferator-activated receptor δ (PPAR-δ) drug targets. Conformations that consisted of a crystal structure and molecular dynamics simulation cluster centroids were used to form AR and CDK2 ensembles. Multiple available crystal structures were used to form PPAR-δ ensembles. For each target, we show that the three methods perform similarly to one another on both the training and test sets. PMID:27097522
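A greedy forward selection in this spirit, scoring each molecule by its best docking score over the chosen conformations and adding the conformation that most increases the ROC AUC, can be sketched as follows. The lower-is-better scoring convention and the tiny score matrix are assumptions for illustration, not the paper's benchmarks or its exact recursive algorithm:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the rank-sum (Mann-Whitney) statistic.
    Lower docking score = better, so actives should receive low ranks."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return 1.0 - (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def greedy_ensemble(score_matrix, labels, k):
    """score_matrix[i, j]: docking score of molecule i vs conformation j.
    Greedily pick k conformations maximizing ensemble AUC, where a
    molecule's ensemble score is its best (minimum) score over the picks.
    A simple forward selection, not guaranteed to find the optimum."""
    chosen = []
    for _ in range(k):
        best_j, best_auc = None, -1.0
        for j in range(score_matrix.shape[1]):
            if j in chosen:
                continue
            ens = score_matrix[:, chosen + [j]].min(axis=1)
            auc = roc_auc(ens, labels)
            if auc > best_auc:
                best_j, best_auc = j, auc
        chosen.append(best_j)
    return chosen, best_auc

labels = np.array([1, 1, 0, 0])                       # 1 = active, 0 = decoy
scores = np.array([[0.1, 0.9], [0.2, 0.8],
                   [0.9, 0.1], [0.8, 0.2]])           # hypothetical scores
chosen, auc = greedy_ensemble(scores, labels, k=1)
```

Here conformation 0 separates actives from decoys perfectly, so the greedy pass selects it and reports AUC = 1.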
Synthesis of robust nonlinear autopilots using differential game theory
NASA Technical Reports Server (NTRS)
Menon, P. K. A.
1991-01-01
A synthesis technique for handling unmodeled disturbances in nonlinear control law synthesis was advanced using differential game theory. Two types of modeling inaccuracies can be included in the formulation. The first is a bias-type error, while the second is the scale-factor-type error in the control variables. The disturbances were assumed to satisfy an integral inequality constraint. Additionally, it was assumed that they act in such a way as to maximize a quadratic performance index. Expressions for optimal control and worst-case disturbance were then obtained using optimal control theory.
Scaling Laws of Microactuators and Potential Applications of Electroactive Polymers in MEMS
NASA Technical Reports Server (NTRS)
Liu, Chang; Bar-Cohen, Y.
1999-01-01
Aside from the scale factor that distinguishes the various species, biological muscle fundamentally changes little between species, indicating a highly optimized system. Electroactive polymer actuators offer the closest resemblance to biological muscles; however, despite their large actuation displacement, these materials fall short with regard to actuation force. As improved materials emerge, it is becoming necessary to address key issues such as the need for effective electromechanical modeling and guiding parameters for scaling the actuators. In this paper, we review the scaling laws for four major actuation mechanisms that are of relevance to micro electromechanical systems: electrostatic actuation, magnetic actuation, thermal bimetallic actuation, and piezoelectric actuation.
Prince, M; Acosta, D; Ferri, C P; Guerra, M; Huang, Y; Jacob, K S; Llibre Rodriguez, J J; Salas, A; Sosa, A L; Williams, J D; Hall, K S
2011-01-01
Objective: Brief screening tools for dementia for use by non-specialists in primary care have yet to be validated in non-western settings where cultural factors and limited education may complicate the task. We aimed to derive a brief version of cognitive and informant scales from the Community Screening Instrument for Dementia (CSI-D) and to carry out initial assessments of their likely validity. Methods: We applied Mokken analysis to CSI-D cognitive and informant scale data from 15 022 participants in representative population-based surveys in Latin America, India and China, to identify a subset of items from each that conformed optimally to item response theory scaling principles. The validity coefficients of the resulting brief scales (area under ROC curve, optimal cutpoint, sensitivity, specificity and Youden's index) were estimated from data collected in a previous cross-cultural validation of the full CSI-D. Results: Seven cognitive items (Loevinger H coefficient 0.64) and six informant items (Loevinger H coefficient 0.69) were selected with excellent hierarchical scaling properties. For the brief cognitive scale, AUROC varied between 0.88 and 0.97, for the brief informant scale between 0.92 and 1.00, and for the combined algorithm between 0.94 and 1.00. Optimal cutpoints did not vary between regions. Youden's index for the combined algorithm varied between 0.78 and 1.00 by region. Conclusion: A brief version of the full CSI-D appears to share the favourable culture- and education-fair screening properties of the full assessment, despite considerable abbreviation. The feasibility and validity of the brief version still needs to be established in routine primary care. Copyright © 2010 John Wiley & Sons, Ltd. PMID:21845592
NASA Astrophysics Data System (ADS)
Anisimov, D. N.; Dang, Thai Son; Banerjee, Santo; Mai, The Anh
2017-07-01
In this paper, an intelligent system using a fuzzy-PD controller based on relation models is developed for a two-wheeled self-balancing robot. The scaling factors of the fuzzy-PD controller are optimized by a cross-entropy optimization method. A linear quadratic regulator is designed for comparison with the fuzzy-PD controller in terms of control quality. The controllers are ported to and run on an STM32F4 Discovery Kit under a real-time operating system. The experimental results indicate that the proposed fuzzy-PD controller runs correctly on the embedded system and achieves the desired performance in terms of fast response, good balance, and stability.
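The cross-entropy optimization of controller scaling factors can be sketched generically: sample candidate factor pairs from a Gaussian, keep the elite fraction, and refit the Gaussian to the elites. The cost function below is a hypothetical surrogate with invented target values (the paper evaluates candidates on the actual robot, not a closed-form cost):

```python
import numpy as np

def cost(params):
    """Hypothetical closed-loop cost surrogate: penalizes deviation from a
    presumed-good pair of scaling factors. Stands in for running the
    balancing-robot simulation/hardware, which is not available here."""
    kp, kd = params
    return (kp - 2.5) ** 2 + (kd - 0.8) ** 2

def cross_entropy_min(cost_fn, mean, std, n_samples=50, n_elite=10,
                      iters=30, seed=0):
    """Generic cross-entropy method for minimization: sample candidates
    from a Gaussian, keep the n_elite best, refit mean and std."""
    rng = np.random.default_rng(seed)
    mean, std = np.array(mean, float), np.array(std, float)
    for _ in range(iters):
        pop = rng.normal(mean, std, size=(n_samples, len(mean)))
        elite = pop[np.argsort([cost_fn(p) for p in pop])[:n_elite]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean

best = cross_entropy_min(cost, mean=[1.0, 1.0], std=[2.0, 2.0])
```

The sampling distribution contracts around the elites each iteration, so the method needs no gradients, which suits cost functions defined only by running an experiment.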
Multi-level Monte Carlo Methods for Efficient Simulation of Coulomb Collisions
NASA Astrophysics Data System (ADS)
Ricketson, Lee
2013-10-01
We discuss the use of multi-level Monte Carlo (MLMC) schemes--originally introduced by Giles for financial applications--for the efficient simulation of Coulomb collisions in the Fokker-Planck limit. The scheme is based on a Langevin treatment of collisions, and reduces the computational cost of achieving an RMS error scaling as ε from O(ε^-3)--for standard Langevin methods and binary collision algorithms--to the theoretically optimal scaling O(ε^-2) for the Milstein discretization, and to O(ε^-2 (log ε)^2) with the simpler Euler-Maruyama discretization. In practice, this speeds up simulation by factors of up to 100. We summarize standard MLMC schemes, describe some tricks for achieving the optimal scaling, present results from a test problem, and discuss the method's range of applicability. This work was performed under the auspices of the U.S. DOE by the University of California, Los Angeles, under grant DE-FG02-05ER25710, and by LLNL under contract DE-AC52-07NA27344.
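The telescoping structure of an MLMC estimator can be sketched on a toy SDE, here geometric Brownian motion with an Euler-Maruyama discretization rather than the Fokker-Planck collision operator itself. The essential point is that the fine and coarse paths on each level are driven by the same Brownian increments, so their difference has small variance:

```python
import numpy as np

def euler_level_pair(level, n_paths, rng, T=1.0, mu=0.05, sigma=0.2, x0=1.0):
    """Coupled Euler-Maruyama estimates of E[X_T] on level l (2^l steps)
    and level l-1 (2^(l-1) steps), sharing the SAME Brownian increments."""
    n_f = 2 ** level
    dt_f = T / n_f
    dw = rng.normal(0.0, np.sqrt(dt_f), size=(n_paths, n_f))
    xf = np.full(n_paths, x0)
    for k in range(n_f):                      # fine path
        xf = xf + mu * xf * dt_f + sigma * xf * dw[:, k]
    if level == 0:
        return xf.mean(), 0.0
    dw_c = dw.reshape(n_paths, n_f // 2, 2).sum(axis=2)  # paired increments
    xc = np.full(n_paths, x0)
    dt_c = 2 * dt_f
    for k in range(n_f // 2):                 # coarse path, same noise
        xc = xc + mu * xc * dt_c + sigma * xc * dw_c[:, k]
    return xf.mean(), xc.mean()

def mlmc_estimate(max_level, n_paths, seed=0):
    """Telescoping sum: E[P_L] ~ E[P_0] + sum_l E[P_l - P_{l-1}]."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for level in range(max_level + 1):
        fine, coarse = euler_level_pair(level, n_paths, rng)
        total += fine - coarse
    return total

est = mlmc_estimate(max_level=5, n_paths=20000)
```

For this process E[X_T] = exp(mu*T), so the estimate can be checked directly; a production scheme would also choose n_paths per level from the measured level variances, which is where the cost savings come from.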
Mao, Shasha; Xiong, Lin; Jiao, Licheng; Feng, Tian; Yeung, Sai-Kit
2017-05-01
Riemannian optimization has been widely used to deal with the fixed low-rank matrix completion problem, and the Riemannian metric is a crucial factor in obtaining the search direction in Riemannian optimization. This paper proposes a new Riemannian metric that simultaneously considers the Riemannian geometry structure and the scaling information, and that is smoothly varying and invariant along the equivalence class. The proposed metric can effectively trade off the Riemannian geometry structure against the scaling information. Essentially, it can be viewed as a generalization of some existing metrics. Based on the proposed Riemannian metric, we also design a Riemannian nonlinear conjugate gradient algorithm, which can efficiently solve the fixed low-rank matrix completion problem. Experiments on fixed low-rank matrix completion, collaborative filtering, and image and video recovery illustrate that the proposed method is superior to the state-of-the-art methods in convergence efficiency and numerical performance.
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Abumeri, Galib H.
2000-01-01
Aircraft engines are assemblies of dynamically interacting components. Engine updates to keep present aircraft flying safely and engines for new aircraft are progressively required to operate in more demanding technological and environmental requirements. Designs to effectively meet those requirements are necessarily collections of multi-scale, multi-level, multi-disciplinary analysis and optimization methods and probabilistic methods are necessary to quantify respective uncertainties. These types of methods are the only ones that can formally evaluate advanced composite designs which satisfy those progressively demanding requirements while assuring minimum cost, maximum reliability and maximum durability. Recent research activities at NASA Glenn Research Center have focused on developing multi-scale, multi-level, multidisciplinary analysis and optimization methods. Multi-scale refers to formal methods which describe complex material behavior metal or composite; multi-level refers to integration of participating disciplines to describe a structural response at the scale of interest; multidisciplinary refers to open-ended for various existing and yet to be developed discipline constructs required to formally predict/describe a structural response in engine operating environments. For example, these include but are not limited to: multi-factor models for material behavior, multi-scale composite mechanics, general purpose structural analysis, progressive structural fracture for evaluating durability and integrity, noise and acoustic fatigue, emission requirements, hot fluid mechanics, heat-transfer and probabilistic simulations. Many of these, as well as others, are encompassed in an integrated computer code identified as Engine Structures Technology Benefits Estimator (EST/BEST) or Multi-faceted/Engine Structures Optimization (MP/ESTOP). 
The discipline modules integrated in MP/ESTOP include: engine cycle (thermodynamics), engine weights, internal fluid mechanics, cost, mission, and coupled structural/thermal analysis, along with various composite property simulators and probabilistic methods to evaluate uncertainty effects (scatter ranges) in all the design parameters. The objective of the proposed paper is to briefly describe a multi-faceted design analysis and optimization capability for coupled multi-discipline engine structures optimization. Results are presented for engine and aircraft type metrics to illustrate the versatility of that capability. Results are also presented for reliability, noise, and fatigue to illustrate its inclusiveness. For example, replacing metal rotors with composites reduces engine weight by 20 percent, reduces noise by 15 percent, and improves reliability by an order of magnitude. Composite designs exist that increase fatigue life by at least two orders of magnitude compared to state-of-the-art metals.
Data-driven sensor placement from coherent fluid structures
NASA Astrophysics Data System (ADS)
Manohar, Krithika; Kaiser, Eurika; Brunton, Bingni W.; Kutz, J. Nathan; Brunton, Steven L.
2017-11-01
Optimal sensor placement is a central challenge in the prediction, estimation and control of fluid flows. We reinterpret sensor placement as optimizing discrete samples of coherent fluid structures for full state reconstruction. This permits a drastic reduction in the number of sensors required for faithful reconstruction, since complex fluid interactions can often be described by a small number of coherent structures. Our work optimizes point sensors using the pivoted matrix QR factorization to sample coherent structures directly computed from flow data. We apply this sampling technique in conjunction with various data-driven modal identification methods, including the proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD). In contrast to POD-based sensors, DMD demonstrably enables the optimization of sensors for prediction in systems exhibiting multiple scales of dynamics. Finally, reconstruction accuracy from pivot sensors is shown to be competitive with sensors obtained using traditional computationally prohibitive optimization methods.
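The pivoted-QR sensor selection can be sketched as a greedy column-pivoting routine: repeatedly choose the candidate location with the largest residual norm, then project out its direction. The mode matrix below is a toy example, not flow data, and the hand-rolled pivoting stands in for a library call such as scipy.linalg.qr(..., pivoting=True):

```python
import numpy as np

def qr_pivot_sensors(modes, n_sensors):
    """Greedy column-pivoted QR on the transposed mode matrix.
    Rows of `modes` are candidate sensor locations, columns are modes
    (e.g. POD or DMD modes). At each step, pick the location whose
    column has the largest residual norm, then deflate that direction,
    mirroring the pivoted matrix QR factorization in the abstract."""
    A = modes.T.copy().astype(float)   # rows: modes, cols: candidate sensors
    chosen = []
    for _ in range(n_sensors):
        norms = np.linalg.norm(A, axis=0)
        norms[chosen] = -1.0           # exclude already-selected locations
        j = int(np.argmax(norms))
        chosen.append(j)
        q = A[:, j] / np.linalg.norm(A[:, j])
        A = A - np.outer(q, q @ A)     # project out the chosen direction
    return chosen

# Toy example: two spatial modes sampled on a 6-point grid.
modes = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.1, 0.1],
                  [0.0, 0.0],
                  [0.2, 0.0],
                  [0.0, 0.3]])
sensors = qr_pivot_sensors(modes, 2)
```

With two independent modes, the routine picks the two grid points that each mode dominates, which is exactly the "sample the coherent structures" intuition.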
Li, Zhongwei; Xin, Yuezhen; Wang, Xun; Sun, Beibei; Xia, Shengyu; Li, Hui
2016-01-01
Phellinus is a fungus known as a component of anticancer drugs. To find optimized culture conditions for Phellinus production in the laboratory, numerous single-factor experiments were conducted and a large amount of experimental data was generated. In this work, we use the data collected from these experiments for regression analysis to obtain a mathematical model for predicting Phellinus production. Subsequently, a gene-set based genetic algorithm is developed to optimize the values of the culture-condition parameters, including inoculum size, pH, initial liquid volume, temperature, seed age, fermentation time, and rotation speed. The optimized parameter values agree with biological experimental results, indicating that our method has good predictive power for culture-condition optimization. PMID:27610365
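A real-coded genetic algorithm of the kind described can be sketched over a toy two-parameter yield surface. The surrogate fitness function, its peak location, and all GA settings below are invented for illustration; they are not the paper's fitted regression model or its gene-set encoding:

```python
import numpy as np

def genetic_optimize(fitness, bounds, pop_size=40, generations=60,
                     mut_rate=0.1, seed=0):
    """Minimal real-coded GA: tournament selection, uniform crossover,
    Gaussian mutation clipped to bounds. Maximizes `fitness`."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(generations):
        fit = np.array([fitness(ind) for ind in pop])
        a, b = rng.integers(pop_size, size=(2, pop_size))
        parents = pop[np.where(fit[a] > fit[b], a, b)]   # tournament
        mask = rng.random(pop.shape) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        mutate = rng.random(pop.shape) < mut_rate
        children = children + mutate * rng.normal(0, 0.1 * (hi - lo), pop.shape)
        pop = np.clip(children, lo, hi)
    fit = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(fit)]

# Toy "yield" surface peaking at pH 6.0 and 26 degrees C (invented).
toy_yield = lambda p: -((p[0] - 6.0) ** 2 + 0.1 * (p[1] - 26.0) ** 2)
best = genetic_optimize(toy_yield, bounds=[(4.0, 8.0), (20.0, 32.0)])
```

In the study's setting the fitness would be the regression model of Phellinus production evaluated at each candidate parameter vector.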
Maximizing algebraic connectivity in air transportation networks
NASA Astrophysics Data System (ADS)
Wei, Peng
In air transportation networks, the robustness of a network with regard to node and link failures is a key design factor. An experiment based on a real air transportation network is performed to show that algebraic connectivity is a good measure of network robustness. Three optimization problems of algebraic connectivity maximization are then formulated in order to find the most robust network design under different constraints. The algebraic connectivity maximization problem with flight route addition or deletion is formulated first. Three methods to optimize and analyze the network's algebraic connectivity are proposed. The Modified Greedy Perturbation algorithm (MGP) provides a sub-optimal solution in a fast iterative manner. The Weighted Tabu Search (WTS) is designed to offer a near-optimal solution with a longer running time. Relaxed semi-definite programming (SDP) is used to set a performance upper bound, and three rounding techniques are discussed to find feasible solutions. The simulation results present the trade-offs among the three methods. Case studies on the air transportation networks of Virgin America and Southwest Airlines show that the developed methods can be applied to real-world large-scale networks. The algebraic connectivity maximization problem is extended by adding a leg number constraint, which accounts for travelers' tolerance for total connecting stops. Binary Semi-Definite Programming (BSDP) with the cutting plane method provides the optimal solution. The tabu search and 2-opt search heuristics find the optimal solution in small-scale networks and near-optimal solutions in large-scale networks. The third algebraic connectivity maximization problem, with an operating cost constraint, is then formulated. When the total operating cost budget is given, the number of edges to be added is not fixed. Each edge weight needs to be calculated instead of being pre-determined.
It is illustrated that edge addition and weight assignment cannot be studied separately for the problem with an operating cost constraint. Therefore, a relaxed SDP method with golden section search is developed to solve both at the same time. Cluster decomposition is utilized to solve large-scale networks.
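Algebraic connectivity, the robustness measure used throughout, is the second-smallest eigenvalue of the graph Laplacian and is straightforward to compute for a given route network. The four-airport networks below are toy examples, not the airline networks from the study:

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A for an
    undirected (possibly weighted) adjacency matrix. Positive iff the
    network is connected; larger values indicate a more robust topology."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    return np.sort(np.linalg.eigvalsh(lap))[1]

# Toy route networks on four airports: a path vs. a cycle.
path  = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], float)
cycle = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], float)
```

Closing the path into a cycle adds one route and raises the algebraic connectivity from 2 - sqrt(2) to 2, which is the kind of edge-addition gain the maximization problems formalize.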
Abou Samra, Haifa; McGrath, Jacqueline M; Estes, Tracy
2013-06-01
No instrument exists that measures student perceptions of the faculty role. Such a measure is necessary to evaluate the efficacy of interventions aimed at attracting students to the faculty career path. We developed the Nurse Educator Scale (NES). The initial scale items were generated using the social cognitive career theory (SCCT) constructs and were reviewed by an expert panel to ensure content validity. Exploratory factor analysis was used. The optimized 25-item, 7-point Likert scale has a Cronbach's alpha reliability coefficient of 0.85 and explains 42% of the total variance. The underlying factor structure supported three defining characteristics congruent with SCCT: outcome expectations (alpha = 0.79), relevant knowledge (alpha = 0.67), and social influence (alpha = 0.80). A stand-alone item measuring goal setting was also supported. The NES provides a valid and reliable measure of students' intentions and motivations to pursue a future career as a nurse educator or scientist. Copyright 2013, SLACK Incorporated.
Optimizing Noble Gas-Water Interactions via Monte Carlo Simulations.
Warr, Oliver; Ballentine, Chris J; Mu, Junju; Masters, Andrew
2015-11-12
In this work we present optimized noble gas-water Lennard-Jones 6-12 pair potentials for each noble gas. Given the significantly different atomic nature of water and the noble gases, the standard Lorentz-Berthelot mixing rules produce inaccurate unlike molecular interactions between these two species. Consequently, we find simulated Henry's coefficients deviate significantly from their experimental counterparts for the investigated thermodynamic range (293-353 K at 1 and 10 atm), due to a poor unlike potential well term (εij). Where εij is too high or low, so too is the strength of the resultant noble gas-water interaction. This observed inadequacy in using the Lorentz-Berthelot mixing rules is countered in this work by scaling εij for helium, neon, argon, and krypton by factors of 0.91, 0.8, 1.1, and 1.05, respectively, to reach a much improved agreement with experimental Henry's coefficients. Due to the highly sensitive nature of the xenon εij term, coupled with the reasonable agreement of the initial values, no scaling factor is applied for this noble gas. These resulting optimized pair potentials also accurately predict partitioning within a CO2-H2O binary phase system as well as diffusion coefficients in ambient water. This further supports the quality of these interaction potentials. Consequently, they can now form a well-grounded basis for the future molecular modeling of multiphase geological systems.
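The Lorentz-Berthelot rules with an empirical well-depth scaling factor, as applied in the abstract, are compact to state in code. The water and argon Lennard-Jones parameters below are illustrative literature-style values (e.g. an SPC/E-like oxygen site), not the potentials fitted in the study:

```python
import math

def lorentz_berthelot(sigma_i, eps_i, sigma_j, eps_j, eps_scale=1.0):
    """Standard Lorentz-Berthelot combining rules for unlike LJ pairs:
    arithmetic mean for sigma, geometric mean for epsilon. eps_scale is
    the empirical well-depth correction factor applied in the abstract
    (e.g. 1.1 for argon-water; 1.0 recovers the unscaled rules)."""
    sigma_ij = 0.5 * (sigma_i + sigma_j)
    eps_ij = eps_scale * math.sqrt(eps_i * eps_j)
    return sigma_ij, eps_ij

# Illustrative parameters only: sigma in nm, epsilon in kJ/mol.
water = (0.3166, 0.650)   # SPC/E-like oxygen site, for illustration
argon = (0.3405, 0.996)
sigma_ar_w, eps_ar_w = lorentz_berthelot(*argon, *water, eps_scale=1.1)
```

Scaling only epsilon leaves the contact distance untouched while deepening (or shallowing) the unlike well, which is why it directly shifts simulated Henry's coefficients toward experiment.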
Scale effects and a method for similarity evaluation in micro electrical discharge machining
NASA Astrophysics Data System (ADS)
Liu, Qingyu; Zhang, Qinhe; Wang, Kan; Zhu, Guang; Fu, Xiuzhuo; Zhang, Jianhua
2016-08-01
Electrical discharge machining (EDM) is a promising non-traditional micro machining technology that offers a vast array of applications in the manufacturing industry. However, scale effects occur when machining at the micro-scale, which can make it difficult to predict and optimize the machining performance of micro EDM. A new concept of "scale effects" in micro EDM is proposed; these scale effects reveal the differences in machining performance between micro EDM and conventional macro EDM. Similarity theory is presented to evaluate the scale effects in micro EDM. Single-factor experiments are conducted and the experimental results are analyzed by discussing the similarity difference and similarity precision. The results show that the outputs affected by scale effects in micro EDM do not change linearly with the discharge parameters. The values of similarity precision of machining time increase significantly when scaling down the capacitance or open-circuit voltage. This indicates that the lower the scale of the discharge parameter, the greater the deviation of the non-geometrical similarity degree from the geometrical similarity degree, meaning that a micro EDM system with lower discharge energy experiences stronger scale effects. The largest similarity difference is 5.34, while the largest similarity precision can be as high as 114.03. It is suggested that similarity precision is more effective than similarity difference in reflecting the scale effects and their fluctuation. Consequently, similarity theory is suitable for evaluating the scale effects in micro EDM. This research offers engineering value for optimizing the machining parameters and improving the machining performance of micro EDM.
Hickman, Ronald L.; Pinto, Melissa D.; Lee, Eunsuk; Daly, Barbara J.
2015-01-01
The Decision Regret Scale (DRS) is a five-item instrument that captures an individual’s regret associated with a healthcare decision. Cross-sectional data were collected from 109 cardiac patients who decided to receive an internal cardioverter defibrillator (ICD). Exploratory and confirmatory factor analyses, assessment of internal consistency reliability (α = .86), and discriminant validity established the DRS as a reliable and valid measure of decision regret in ICD recipients. The DRS, a psychometrically sound instrument, has relevance for clinicians and researchers vested in optimizing the decisional outcomes of ICD recipients. Future research is needed to examine the reliability and validity of the DRS in a larger and more diverse sample of ICD recipients. PMID:22679707
Pas, Elise T; Bradshaw, Catherine P
2012-10-01
Although there is an established literature supporting the efficacy of a variety of prevention programs, there has been less empirical work on the translation of such research to everyday practice or when scaled up state-wide. There is a considerable need for more research on factors that enhance implementation of programs and optimize outcomes, particularly in school settings. The current paper examines how the implementation fidelity of an increasingly popular and widely disseminated prevention model called School-wide Positive Behavioral Interventions and Supports (SW-PBIS) relates to student outcomes within the context of a state-wide scale-up effort. Data come from a scale-up effort of SW-PBIS in Maryland; the sample included 421 elementary and middle schools trained in SW-PBIS. SW-PBIS fidelity, as measured by one of three fidelity measures, was found to be associated with higher math achievement, higher reading achievement, and lower truancy. School contextual factors were related to implementation levels and outcomes. Implications for scale-up efforts of behavioral and mental health interventions and measurement considerations are discussed.
Núñez, D; Arias, V; Vogel, E; Gómez, L
2015-07-01
Psychotic-like experiences (PLEs) are prevalent in the general population and are associated with poor mental health and a higher risk of psychiatric disorders. The Community Assessment of Psychic Experiences-Positive (CAPE-P15) scale is a self-screening questionnaire to address subclinical positive psychotic symptoms (PPEs) in community contexts. Although its psychometric properties seem to be adequate to screen PLEs, further research is needed to evaluate certain validity aspects, particularly its internal structure and its functioning in different populations. Our aim was to uncover the optimal factor structure of the CAPE-P15 scale in adolescents aged 13 to 18 years using factor analysis methods suited to categorical variables. A sample of 727 students from six secondary public schools and 245 university students completed the CAPE-P15. The dimensionality of the CAPE-P15 was tested through exploratory structural equation models (ESEMs). Based on the ESEM results, we conducted a confirmatory factor analysis (CFA) to contrast two factorial structures that potentially underlie the symptoms described by the scale: a) three correlated factors and b) a hierarchical model composed of a general PLE factor plus three specific factors (persecutory ideation, bizarre experiences, and perceptual abnormalities). The underlying structure of PLEs assessed by the CAPE-P15 is consistent with both multidimensional and hierarchical solutions; however, the latter shows the best fit. Our findings reveal the existence of a strong general factor underlying scale scores. Compared with the specific factors, the general factor explains most of the common variance observed in subjects' responses. The findings suggest that the factor structure of subthreshold psychotic experiences addressed by the CAPE-P15 can be adequately represented by a general factor and three separable specific traits, supporting the hypothesis that there might be a common source underlying PLEs.
Copyright © 2015 Elsevier B.V. All rights reserved.
Optimal Length Scale for a Turbulent Dynamo.
Sadek, Mira; Alexakis, Alexandros; Fauve, Stephan
2016-02-19
We demonstrate that there is an optimal forcing length scale for low Prandtl number dynamo flows that can significantly reduce the required energy injection rate. The investigation is based on simulations of the induction equation in a periodic box of size 2πL. The flows considered are the laminar and turbulent ABC flows forced at different forcing wave numbers k_{f}, where the turbulent case is simulated using a subgrid turbulence model. At the smallest allowed forcing wave number k_{f}=k_{min}=1/L, the laminar critical magnetic Reynolds number Rm_{c}^{lam} is more than an order of magnitude smaller than the turbulent critical magnetic Reynolds number Rm_{c}^{turb} due to the hindering effect of turbulent fluctuations. We show that this hindering effect is almost suppressed when the forcing wave number k_{f} is increased above an optimal wave number k_{f}L≃4 for which Rm_{c}^{turb} is minimized. At this optimal wave number, Rm_{c}^{turb} is smaller by more than a factor of 10 than in the case forced at k_{f}=1. This leads to a reduction of the energy injection rate by 3 orders of magnitude compared to the case where the system is forced at the largest scales, and thus provides a new strategy for the design of a fully turbulent experimental dynamo.
Measurement properties of the WOMAC LK 3.1 pain scale.
Stratford, P W; Kennedy, D M; Woodhouse, L J; Spadoni, G F
2007-03-01
The Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) is applied extensively to patients with osteoarthritis of the hip or knee. Previous work has challenged the validity of its physical function scale; however, an extensive evaluation of its pain scale has not been reported. Our purpose was to estimate internal consistency, factorial validity, test-retest reliability, and the standard error of measurement (SEM) of the WOMAC LK 3.1 pain scale. Four hundred and seventy-four patients with osteoarthritis of the hip or knee awaiting arthroplasty were administered the WOMAC. Estimates of internal consistency (coefficient alpha), factorial validity (confirmatory factor analysis), and the SEM based on internal consistency (SEM(IC)) were obtained. Test-retest reliability [Type 2,1 intraclass correlation coefficients (ICC)] and a corresponding SEM(TRT) were estimated on a subsample of 36 patients. Our estimates were: internal consistency alpha=0.84; SEM(IC)=1.48; Type 2,1 ICC=0.77; SEM(TRT)=1.69. Confirmatory factor analysis failed to support a single-factor structure of the pain scale with uncorrelated error terms. Two comparable models provided excellent fit: (1) a model with correlated error terms between the walking and stairs items, and between the night and sit items (chi2=0.18, P=0.98); (2) a two-factor model with the walking and stairs items loading on one factor, the night and sit items loading on a second factor, and the standing item loading on both factors (chi2=0.18, P=0.98). Our examination of the factorial structure of the WOMAC pain scale failed to support a single factor, and internal consistency analysis yielded a coefficient less than optimal for individual patient use. An alternative to summing the five item responses for individual patient application would be to interpret item responses separately or to sum only those items which display homogeneity.
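The SEM figures quoted above follow from the standard relation SEM = SD·sqrt(1 − reliability). As a consistency check (our inference; the abstract does not state the scale SD), alpha = 0.84 with SEM(IC) = 1.48 implies a scale SD of about 1.48 / sqrt(1 − 0.84) = 3.7:

```python
import math

def sem(sd, reliability):
    """Standard error of measurement from scale SD and a reliability coefficient."""
    return sd * math.sqrt(1.0 - reliability)

# Implied SD of ~3.7 reproduces the reported internal-consistency SEM:
print(round(sem(3.7, 0.84), 2))  # 1.48
```

The test-retest SEM(TRT) = 1.69 was computed on a 36-patient subsample whose SD may differ, so it is not expected to follow from the same SD value.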
Ainuddin, Husna A; Loh, Siew Yim; Chinna, Karuthan; Low, Wah Yun; Roslani, April Camilla
2015-06-01
Adolescence is a potential period for growth and optimal functioning, but developmental issues like the transition from childhood to adulthood create stress and affect the adolescent's quality of life (QOL). However, there is a lack of research tools for measuring adolescents' QOL in Malaysia. The aim of the study was to determine the validity and reliability of the self-report Malay version of the pediatric QOL (PedsQL™) 4.0 Generic Core Scales in assessing the QOL of Malaysian adolescents. A cross-sectional study design using the 23-item self-report Malay version of the PedsQL 4.0 Generic Core Scales was administered to a convenience cluster sample (n = 297 adolescents) from a secondary school. Internal consistency reliability had Cronbach's α values ranging from .70 to .89. Principal axis factor analysis yielded a six-factor structure. In conclusion, the self-report Malay version of the pediatric QOL 4.0 Generic Core Scales is a reliable and valid tool to measure the QOL of multiethnic Malaysian adolescents. © The Author(s) 2013.
Localio, A. Russell; Platt, Alec B.; Brensinger, Colleen M.; Christie, Jason D.; Gross, Robert; Parker, Catherine S.; Price, Maureen; Metlay, Joshua P.; Cohen, Abigail; Newcomb, Craig W.; Strom, Brian L.; Kimmel, Stephen E.
2010-01-01
Background Warfarin is an anticoagulant effective in preventing stroke, but it has a narrow therapeutic range requiring optimal adherence to achieve the most favorable effects. Purpose The goal of this study was to examine specific patient factors that might help explain warfarin non-adherence at outpatient anticoagulation clinics. Method In a prospective cohort study of 156 adults, we utilized logistic regression analyses to examine the relationship between the five Treatment Prognostics scales from the Millon Behavioral Medicine Diagnostic (MBMD), as well as three additional MBMD scales (Depression, Future Pessimism, and Social Isolation), and daily warfarin non-adherence assessed using electronic medication event monitoring system caps over a median of 139 days. Results Four of the five Treatment Prognostic scales and greater social isolation were associated with warfarin non-adherence. When controlling for pertinent demographic and medical variables, the Information Discomfort scale remained significantly associated with warfarin non-adherence over time. Conclusion Although several factors were related to warfarin non-adherence, patients reporting a lack of receptivity to details regarding their medical illness seemed most at risk for warfarin non-adherence. This information might aid in the development of interventions to enhance warfarin adherence and perhaps reduce adverse medical events. PMID:19579066
Scaled position-force tracking for wireless teleoperation of miniaturized surgical robotic system.
Guo, Jing; Liu, Chao; Poignet, Philippe
2014-01-01
Miniaturized surgical robotic systems present a promising trend for reducing invasiveness during operation. However, cables used for power and communication may affect their performance. In this paper we chose Zigbee wireless communication as a means to replace communication cables for a miniaturized surgical robot. Nevertheless, time delay caused by wireless communication presents a new challenge to the performance and stability of the teleoperation system. We propose a bilateral wireless teleoperation architecture that takes into consideration the effect of position-force scaling between operator and slave. Optimal position-force tracking performance is obtained, and the overall system is shown to be passive when a simple condition on the scaling factors is satisfied. Simulation studies verify the efficiency of the proposed scaled wireless teleoperation scheme.
NASA Astrophysics Data System (ADS)
Hidayati, N.; Widyaningsih, T. D.
2018-03-01
Chicken feet, a by-product of the chicken industry amounting to approximately 65,894 tons/year, are commonly used in broths. This by-product can potentially be produced in an instant form as an anti-inflammatory functional food on an industrial scale. Therefore, it is necessary to optimize the critical parameters of the drying process. The aim of this study was to determine the optimum temperature and time for drying instant powdered chicken feet broth on a pilot plant scale, to compare products from the laboratory and pilot plant scales, and to assess the financial feasibility of the business plan. The pilot plant scale optimization was designed with Response Surface Methodology-Central Composite Design. The optimized factors were the powdered broth’s drying temperature (55°C, 60°C, 65°C) and time (10 minutes, 11 minutes, 12 minutes), with water and chondroitin sulphate content as the observed responses. The optimum condition obtained was a drying temperature of 60.85°C for 10.05 minutes, resulting in 1.90 ± 0.02% moisture content, 32.48 ± 0.28% protein content, 12.05 ± 0.80% fat content, 28.92 ± 0.09% ash content, 24.64 ± 0.52% carbohydrate content, 1.26 ± 0.05% glucosamine content, 0.99 ± 0.23% chondroitin sulphate content, 50.87 ± 1.00% solubility, 8.59 ± 0.19% water vapour absorption, 0.37% free fatty acid, 13.66 ± 4.49% peroxide number, lightness of 60.33 ± 1.24, yellowness of 3.83 ± 0.26, and redness of 21.77 ± 0.42. Financial analysis concluded that this business project is feasible.
Kim, Tae Kyung; Kim, Hyung Wook; Kim, Su Jin; Ha, Jong Kun; Jang, Hyung Ha; Hong, Young Mi; Park, Su Bum; Choi, Cheol Woong; Kang, Dae Hwan
2014-01-01
Background/Aims The quality of bowel preparation (QBP) is an important factor in performing a successful colonoscopy. Several factors influencing QBP have been reported; however, some factors, such as the optimal preparation-to-colonoscopy time interval, remain controversial. This study aimed to determine the factors influencing QBP and the optimal time interval for full-dose polyethylene glycol (PEG) preparation. Methods A total of 165 patients who underwent colonoscopy from June 2012 to August 2012 were prospectively evaluated. The QBP was assessed using the Ottawa Bowel Preparation Scale (Ottawa) score, and several factors influencing the QBP were analyzed. Results Colonoscopies with a time interval of 5 to 6 hours had the best Ottawa score in all parts of the colon. Patients with time intervals of 6 hours or less had better QBP than those with time intervals of more than 6 hours (p=0.046). In the multivariate analysis, the time interval (odds ratio, 1.897; 95% confidence interval, 1.006 to 3.577; p=0.048) was the only significant contributor to a satisfactory bowel preparation. Conclusions The optimal time interval was 5 to 6 hours for the full-dose PEG method, and the time interval was the only significant contributor to a satisfactory bowel preparation. PMID:25368750
Toward the optimization of normalized graph Laplacian.
Xie, Bo; Wang, Meng; Tao, Dacheng
2011-04-01
Normalized graph Laplacian has been widely used in many practical machine learning algorithms, e.g., spectral clustering and semisupervised learning. However, all of them use the Euclidean distance to construct the graph Laplacian, which does not necessarily reflect the inherent distribution of the data. In this brief, we propose a method to directly optimize the normalized graph Laplacian by using pairwise constraints. The learned graph is consistent with equivalence and nonequivalence pairwise relationships, and thus it can better represent similarity between samples. Meanwhile, our approach, unlike metric learning, automatically determines the scale factor during the optimization. The learned normalized Laplacian matrix can be directly applied in spectral clustering and semisupervised learning algorithms. Comprehensive experiments demonstrate the effectiveness of the proposed approach.
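The object being optimized, the symmetric normalized graph Laplacian L = I − D^(−1/2) W D^(−1/2), can be constructed for a toy weighted graph as follows. The weights are illustrative; the brief's pairwise-constraint optimization itself is not reproduced here.

```python
import numpy as np

# Symmetric normalized graph Laplacian for a small weighted graph.
W = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 0.2],
              [0.5, 0.2, 0.0]])   # symmetric affinity matrix, zero diagonal
d = W.sum(axis=1)                 # node degrees
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(3) - D_inv_sqrt @ W @ D_inv_sqrt

# Eigenvalues of the normalized Laplacian lie in [0, 2]; for a connected
# graph the smallest is 0 -- the spectrum spectral clustering works with.
evals = np.linalg.eigvalsh(L)
```

In spectral clustering, the eigenvectors belonging to the smallest eigenvalues of L are used as the low-dimensional embedding of the samples.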
Titze, Melanie I; Schaaf, Otmar; Hofmann, Marco H; Sanderson, Michael P; Zahn, Stephan K; Quant, Jens; Lehr, Thorsten
2017-03-01
BI 893923 is a novel IGF1R/INSR inhibitor with promising anti-tumor efficacy. Dose-limiting hyperglycemia has been observed for other IGF1R/INSR inhibitors in clinical trials. To balance anti-tumor efficacy against the risk of hyperglycemia and to determine the therapeutic window, we aimed to develop a translational pharmacokinetic/pharmacodynamic model for BI 893923 that translates pharmacokinetics and pharmacodynamics from animals to humans via an allometrically scaled semi-mechanistic model. Model development was based on a previously published PK/PD model for BI 893923 in mice (Titze et al., Cancer Chemother Pharmacol 77:1303-1314, 13). PK and blood glucose parameters were scaled by allometric principles using body weight as a scaling factor along with an estimation of the parameter exponents. Biomarker and tumor growth parameters were extrapolated from mouse to human using the body weight ratio as a scaling factor. The allometric PK/PD model successfully described BI 893923 pharmacokinetics and blood glucose across mouse, rat, dog, minipig, and monkey. BI 893923 human exposure as well as blood glucose and tumor growth were predicted and compared for different dosing scenarios. A comprehensive risk-benefit analysis was conducted by determining the net clinical benefit for each schedule. An oral dose of 2750 mg BI 893923 divided into three evenly distributed doses was identified as the optimal human dosing regimen, predicting a tumor growth inhibition of 90.4% without associated hyperglycemia. Our model supported human therapeutic dose estimation by rationalizing the optimal efficacious dosing regimen with minimal undesired effects. This modeling approach may be useful for PK/PD scaling of other IGF1R/INSR inhibitors.
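Allometric scaling with body weight as the scaling factor, as used here for the PK parameters, takes the form P_target = P_source · (BW_target/BW_source)^b. The sketch below uses hypothetical values: the exponent 0.75 is a common default for clearance, not the exponent estimated in the study, and the clearances and body weights are illustrative.

```python
# Allometric scaling of a PK parameter across species by body weight.
def allometric_scale(p_animal, bw_animal_kg, bw_human_kg=70.0, exponent=0.75):
    """Scale a parameter from an animal to a human by body-weight ratio."""
    return p_animal * (bw_human_kg / bw_animal_kg) ** exponent

cl_mouse = 0.1  # hypothetical clearance in a 0.02 kg mouse (L/h)
cl_human = allometric_scale(cl_mouse, 0.02)
```

In the paper the exponents themselves were estimated from multi-species data rather than fixed a priori, which is why the fitted model describes mouse through monkey simultaneously.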
Application configuration selection for energy-efficient execution on multicore systems
Wang, Shinan; Luo, Bing; Shi, Weisong; ...
2015-09-21
Balanced performance and energy consumption are incorporated in the design of modern computer systems. Several run-time factors, such as concurrency levels, thread mapping strategies, and dynamic voltage and frequency scaling (DVFS), should be considered in order to achieve optimal energy efficiency for a workload. Selecting appropriate run-time factors, however, is one of the most challenging tasks because the run-time factors are architecture-specific and workload-specific. While most existing works concentrate on either static analysis of the workload or run-time prediction results, we present a hybrid two-step method that utilizes concurrency levels and DVFS settings to achieve the energy-efficient configuration for a workload. The experimental results based on a Xeon E5620 server with the NPB and PARSEC benchmark suites show that the model is able to predict the energy-efficient configuration accurately. On average, an additional 10% EDP (Energy Delay Product) saving is obtained by using run-time DVFS for the entire system. An off-line optimal solution is used for comparison with the proposed scheme. The experimental results show that the average extra EDP saved by the optimal solution is within 5% on selected parallel benchmarks.
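The EDP metric used above is simply energy multiplied by delay, with lower values indicating a better configuration. A minimal sketch of selecting a run-time configuration by EDP, with hypothetical measurements:

```python
# Energy Delay Product (EDP): lower is better. Measurements are illustrative.
def edp(energy_joules, delay_seconds):
    return energy_joules * delay_seconds

configs = {
    # (threads, DVFS GHz): (energy J, runtime s) -- hypothetical numbers
    (4, 2.4): (120.0, 10.0),
    (8, 2.4): (150.0, 6.0),
    (8, 1.6): (110.0, 9.0),
}
best = min(configs, key=lambda c: edp(*configs[c]))
```

Note that the lowest-energy configuration (8 threads at 1.6 GHz here) is not necessarily the EDP winner, which is why EDP is preferred when performance and energy must be balanced.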
Development of Advanced Methods of Structural and Trajectory Analysis for Transport Aircraft
NASA Technical Reports Server (NTRS)
Ardema, Mark D.; Windhorst, Robert; Phillips, James
1998-01-01
This paper develops a near-optimal guidance law for generating minimum fuel, time, or cost fixed-range trajectories for supersonic transport aircraft. The approach uses a choice of new state variables along with singular perturbation techniques to time-scale decouple the dynamic equations into multiple equations of single order (second order for the fast dynamics). Application of the maximum principle to each of the decoupled equations, as opposed to application to the original coupled equations, avoids the two-point boundary value problem and transforms the problem from a single functional optimization into multiple function optimizations. It is shown that such an approach produces well-known aircraft performance results such as minimizing the Breguet factor for minimum fuel consumption and the energy climb path. Furthermore, the new state variables produce a consistent calculation of flight path angle along the trajectory, eliminating one of the deficiencies of the traditional energy state approximation. In addition, jumps in the energy climb path are smoothed out by integration of the original dynamic equations at constant load factor. Numerical results for a supersonic transport design show that a pushover dive followed by a pullout at nominal load factors are sufficient maneuvers to smooth the jump.
Yang, Min; Sun, Peide; Wang, Ruyi; Han, Jingyi; Wang, Jianqiao; Song, Yingqi; Cai, Jing; Tang, Xiudi
2013-09-01
An optimal operating condition for ammonia removal at low temperature, based on the fully coupled activated sludge model (FCASM), was determined for a full-scale oxidation ditch process wastewater treatment plant (WWTP). The FCASM-based mechanistic model was calibrated and validated with the data measured on site. Several important kinetic parameters of the modified model were tested through respirometry experiments. The validated model was used to evaluate the relationship between ammonia removal and operating parameters, such as temperature (T), dissolved oxygen (DO), solids retention time (SRT), and hydraulic retention time of the oxidation ditch (HRT). The simulated results showed that low temperature has a negative effect on ammonia removal. Through orthogonal simulation tests of the last three factors, combined with analysis of variance, the optimal operating values of DO, SRT, and HRT for the WWTP at low temperature were found to be 3.5 mg L(-1), 15 d, and 14 h, respectively. Copyright © 2013 Elsevier Ltd. All rights reserved.
Innovative model-based flow rate optimization for vanadium redox flow batteries
NASA Astrophysics Data System (ADS)
König, S.; Suriyah, M. R.; Leibfried, T.
2016-11-01
In this paper, an innovative approach is presented to optimize the flow rate of a 6-kW vanadium redox flow battery with realistic stack dimensions. Efficiency is derived using a multi-physics battery model and a newly proposed instantaneous efficiency determination technique. An optimization algorithm is applied to identify optimal flow rates for operation points defined by state-of-charge (SoC) and current. The proposed method is evaluated against the conventional approach of applying Faraday's first law of electrolysis, scaled by the so-called flow factor. To make a fair comparison, the flow factor is also optimized by simulating cycles with different charging/discharging currents. The results show that the efficiency is increased by up to 1.2 percentage points; in addition, the discharge capacity is increased by up to 1.0 kWh, or 5.4%. Detailed loss analysis is carried out for the cycles with maximum and minimum charging/discharging currents. It is shown that the proposed method minimizes the sum of losses caused by concentration over-potential, pumping, and diffusion. Furthermore, for the deployed Nafion 115 membrane, it is observed that diffusion losses increase with stack SoC. Therefore, to decrease stack SoC and lower diffusion losses, a higher flow rate during charging than during discharging is reasonable.
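The conventional baseline compared against here, Faraday's first law scaled by the flow factor, is commonly written as Q = ff · N · I / (F · c · SoC_avail). The sketch below uses that textbook form with illustrative numbers; it is an assumption for illustration, not the paper's exact stack model.

```python
# Stoichiometric electrolyte flow rate for a flow battery, scaled by the
# flow factor (ff). Values below are illustrative, not from the paper.
F = 96485.0  # Faraday constant, C/mol

def stoich_flow_rate(current_a, n_cells, conc_mol_m3, soc_avail, flow_factor):
    """Flow rate in m^3/s. soc_avail is the convertible electrolyte
    fraction (SoC when discharging, 1 - SoC when charging)."""
    return flow_factor * n_cells * current_a / (F * conc_mol_m3 * soc_avail)

# 100 A, 20-cell stack, 1600 mol/m^3 vanadium, discharge at SoC = 0.5, ff = 7.5
q = stoich_flow_rate(100.0, 20, 1600.0, 0.5, 7.5)
```

Because soc_avail shrinks toward the end of charge or discharge, this rule already demands higher flow at extreme SoC; the paper's contribution is to replace the single flow factor with a flow rate optimized per (SoC, current) operating point.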
Tanning Addiction: Conceptualisation, Assessment, and Correlates.
Andreassen, C S; Pallesen, S; Torsheim, T; Demetrovics, Z; Griffiths, M D
2018-02-25
Research into problematic tanning (or 'tanning addiction') has markedly increased over the past few years. Although several excessive tanning instruments exist, most of these are psychometrically poor, not theoretically anchored, and have mainly been used on small samples. Against this background, a new tanning addiction scale was developed based on a specific theoretical approach utilising core addiction criteria. A scale comprising seven items (i.e. salience/craving, mood modification, tolerance, withdrawal, conflict, relapse/loss of control, and problems) was administered online to a cross-sectional convenience sample of 23,537 adults (M age = 35.8 years, SD = 13.3), together with an assessment of demographic factors, the five-factor model of personality, and symptoms of obsessive-compulsive disorder, anxiety and depression. A confirmatory factor analysis showed that a one-factor model had an optimal fit with the data collected (RMSEA=.050 [90% CI=.047-.053], CFI=.99, TLI=.99). High factor loadings (.781-.905, all p<.001) and a coefficient omega indicator of reliability (ω=.941 [95% CI=.939-.944]) were also found using the new scale. In a multiple linear regression analysis, tanning addiction was positively associated with being female, not being in a relationship, extroversion, neuroticism, anxiety and obsessive-compulsiveness. It was also found that educational level, intellect/openness and depression were inversely associated with tanning addiction. The new scale, the Bergen Tanning Addiction Scale (BTAS), showed good psychometric properties, and is the first scale to fully conceptualise tanning addiction within a contemporary addiction framework. Given this, the BTAS may potentially assist future clinical practice in providing appropriate patient care, prevention and disease management. This article is protected by copyright. All rights reserved.
Dornburg, Alex; Su, Zhuo; Townsend, Jeffrey P
2018-06-25
With the rise of genome-scale datasets there has been a call for increased data scrutiny and careful selection of loci appropriate for attempting the resolution of a phylogenetic problem. Such loci are desired to maximize phylogenetic information content while minimizing the risk of homoplasy. Theory posits the existence of characters that evolve under such an optimum rate, and efforts to determine optimal rates of inference have been a cornerstone of phylogenetic experimental design for over two decades. However, both theoretical and empirical investigations of optimal rates have varied dramatically in their conclusions: spanning no relationship to a tight relationship between the rate of change and phylogenetic utility. Here we synthesize these apparently contradictory views, demonstrating both empirical and theoretical conditions under which each is correct. We find that optimal rates of characters, not genes, are generally robust to most experimental design decisions. Moreover, consideration of site rate heterogeneity within a given locus is critical to accurate predictions of utility. Factors such as taxon sampling or the targeted number of characters providing support for a topology are additionally critical to predictions of phylogenetic utility based on the rate of character change. Further, optimality of rates and predictions of phylogenetic utility are not equivalent, demonstrating the need for further development of a comprehensive theory of phylogenetic experimental design.
Islam, R S; Tisi, D; Levy, M S; Lye, G J
2007-01-01
A major bottleneck in drug discovery is the production of soluble human recombinant protein in sufficient quantities for analysis. This problem is compounded by the complex relationship between protein yield and the large number of variables which affect it. Here, we describe a generic framework for the rapid identification and optimization of factors affecting soluble protein yield in microwell plate fermentations as a prelude to the predictive and reliable scaleup of optimized culture conditions. Recombinant expression of firefly luciferase in Escherichia coli was used as a model system. Two rounds of statistical design of experiments (DoE) were employed to first screen (D-optimal design) and then optimize (central composite face design) the yield of soluble protein. Biological variables from the initial screening experiments included medium type and growth and induction conditions. To provide insight into the impact of the engineering environment on cell growth and expression, plate geometry, shaking speed, and liquid fill volume were included as factors since these strongly influence oxygen transfer into the wells. Compared to standard reference conditions, both the screening and optimization designs gave up to 3-fold increases in the soluble protein yield, i.e., a 9-fold increase overall. In general the highest protein yields were obtained when cells were induced at a relatively low biomass concentration and then allowed to grow slowly up to a high final biomass concentration, >8 g.L-1. Consideration and analysis of the model results showed 6 of the original 10 variables to be important at the screening stage and 3 after optimization. The latter included the microwell plate shaking speeds pre- and postinduction, indicating the importance of oxygen transfer into the microwells and identifying this as a critical parameter for subsequent scale translation studies. 
The optimization process, also known as response surface methodology (RSM), predicted there to be a distinct optimum set of conditions for protein expression which could be verified experimentally. This work provides a generic approach to protein expression optimization in which both biological and engineering variables are investigated from the initial screening stage. The application of DoE reduces the total number of experiments needed to be performed, while experimentation at the microwell scale increases experimental throughput and reduces cost.
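The screening stage described above starts from a designed table of factor-level combinations. A minimal two-level full-factorial sketch over three of the named variables follows; the levels are hypothetical, and the study used a D-optimal subset over ten variables rather than a full factorial.

```python
from itertools import product

# Two-level full-factorial screening design over three factors
# (levels are hypothetical illustrations of the variables named above).
factors = {
    "induction_od": (0.2, 0.8),    # biomass concentration at induction
    "fill_volume_ul": (200, 800),  # liquid fill volume per microwell
    "shaking_rpm": (300, 1000),    # plate shaking speed
}
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
# 2^3 = 8 runs; D-optimal or fractional designs select fewer runs
# while keeping the main effects estimable.
```

After the screen identifies the significant factors, a central composite design around their best levels supports the quadratic response-surface model used for the final optimization.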
Rodrigue, James R; Guenther, Robert; Kaplan, Bruce; Mandelbrot, Didier A; Pavlakis, Martha; Howard, Richard J
2008-05-15
We report on the initial development and validation of the Living Donation Expectancies Questionnaire (LDEQ), designed to measure the expectations of living kidney donor candidates. Potential living donors (n=443) at two transplant centers were administered the LDEQ and other questionnaires, and their medical records were reviewed. Factor analysis provides support for six LDEQ scales: Interpersonal Benefit, Personal Growth, Spiritual Growth, Quid Pro Quo, Health Consequences, and Miscellaneous Consequences. All but one scale showed good internal consistency. Expected benefits of donation were associated with higher optimism and lower mental health; expected consequences of donation were associated with lower optimism and lower physical and mental health. More potential donors with relative or absolute contraindications had high Interpersonal Benefit (P<0.0001), Personal Growth (P<0.01), Quid Pro Quo (P<0.0001), and Health Consequences (P<0.0001) expectations. The LDEQ has promise in evaluating donor candidates' expectations.
Design-of-experiments to Reduce Life-cycle Costs in Combat Aircraft Inlets
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.; Baust, Henry D.; Agrell, Johan
2003-01-01
It is the purpose of this study to demonstrate the viability and economy of Design-of-Experiments (DOE) to arrive at micro-secondary flow control installation designs that achieve optimal inlet performance for different mission strategies. These statistical design concepts were used to investigate the properties of "low unit strength" micro-effector installations. "Low unit strength" micro-effectors are micro-vanes, set at a very low angle of incidence, with very long chord lengths. They are designed to influence the near-wall inlet flow over an extended streamwise distance. In this study, however, the long chord lengths were replicated by a series of short chord length effectors arranged in series over multiple bands of effectors. In order to properly evaluate the performance differences between the single-band extended chord length installation designs and the segmented multiband short chord length designs, both sets of installations must be optimal. Critical to achieving optimal micro-secondary flow control installation designs is an understanding of the factor interactions that occur between the multiple bands of micro-scale vane effectors. These factor interactions are best understood and brought together in an optimal manner through a structured DOE process, or more specifically Response Surface Methods (RSM).
Chou, Kee-Lee
2009-04-01
Although it is a well-known fact that migration is a risk factor contributing to psychopathology, little is known about how pre-migration factors may lead to depression among migrants. The present study examined the relationship between poorly planned migration and depressive symptoms, and evaluated the moderating roles of optimism, sense of control, and social support in the relationship between pre-migration planning and depression among new immigrants from Mainland China to Hong Kong. A representative sample of 449 migrants aged 18 and above were interviewed in 2007 using a face-to-face format. The 20-item Center for Epidemiological Studies of Depression (CES-D) scale was used to measure depressive symptoms, and a series of questions regarding socio-demographic characteristics (age, gender, marital status, education, and household income), optimism, sense of control, and social support were also included. A total of 26.5% of our sample scored 16 or above on the CES-D scale, which indicated a clinically significant case of depression. Poor migration planning was significantly related to CES-D scores after adjusting for all socio-demographic variables and three psycho-social factors. In addition, optimism, sense of control, and social support were also significantly related to the CES-D score. It was also found that social support reduced the harmful impact of poor migration planning on depressive symptoms. New immigrants to Hong Kong from Mainland China are at risk for depressive symptoms, especially those who are not well prepared for migration; therefore, prevention measures, particularly strengthening their social support in Hong Kong, should be considered seriously by policy makers.
ERIC Educational Resources Information Center
Geldhof, John; Little, Todd D.; Hawley, Patricia H.
2012-01-01
In this paper we present domain-specific measures of academic and social self-regulation in young adults. We base our scales on Baltes and colleagues' Selection, Optimization, and Compensation (SOC) model, and establish the factor structure of our new measures using data collected from a sample of 152 college students. We then compare the…
ERIC Educational Resources Information Center
Mi, Fangqiong
2010-01-01
A growing number of residency programs are instituting curricula to include the component of evidence-based medicine (EBM) principles and process. However, these curricula may not be able to achieve the optimal learning outcomes, perhaps because various contextual factors are often overlooked when EBM training is being designed, developed, and…
Particle swarm optimization algorithm based low cost magnetometer calibration
NASA Astrophysics Data System (ADS)
Ali, A. S.; Siddharth, S.; Syed, Z.; El-Sheimy, N.
2011-12-01
Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes, and a microprocessor that provides inertial digital data from which position and orientation are obtained by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the absolute user heading based on Earth's magnetic field. Unfortunately, the magnetic field measurements obtained with low-cost sensors are corrupted by several errors, including manufacturing defects and external electromagnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high-accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO)-based calibration algorithm is presented to estimate the bias and scale-factor values of a low-cost magnetometer. The main advantage of this technique is its use of artificial intelligence, which does not require any error modeling or awareness of the nonlinearity. The bias and scale-factor errors estimated by the proposed algorithm improve the heading accuracy, and the results are statistically significant. The approach can also help in the development of Pedestrian Navigation Devices (PNDs) when combined with INS and GPS/Wi-Fi, especially in indoor environments.
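The PSO-based bias and scale-factor estimation described above can be sketched on synthetic data. This is a minimal illustration, not the paper's implementation: it assumes a simple per-axis sensor model (raw = field/scale + bias), a cost built from the known field magnitude, and made-up PSO constants and field values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic raw magnetometer data: true field vectors of constant magnitude
# (a hypothetical 50 uT Earth field), corrupted by per-axis bias and scale.
B = 50.0
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
true_bias = np.array([3.0, -5.0, 1.5])
true_scale = np.array([1.1, 0.9, 1.05])
raw = (B * dirs) / true_scale + true_bias  # assumed sensor model

def cost(params):
    """Mean squared deviation of calibrated magnitudes from the known field."""
    bias, scale = params[:3], params[3:]
    cal = (raw - bias) * scale
    return np.mean((np.linalg.norm(cal, axis=1) - B) ** 2)

# Plain global-best PSO over the 6 calibration parameters (3 bias, 3 scale).
n_particles, n_iter = 40, 300
pos = rng.uniform([-10] * 3 + [0.5] * 3, [10] * 3 + [1.5] * 3,
                  size=(n_particles, 6))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5   # inertia and acceleration constants (illustrative)
for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 6)), rng.random((n_particles, 6))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("estimated bias:", gbest[:3], "estimated scale:", gbest[3:])
```

Note that the magnitude-only cost is invariant to per-axis sign flips of the scale factors, so the recovered scale is determined only up to sign; the bounds on the initial swarm keep it positive here.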
Cone beam CT dose reduction in prostate radiotherapy using Likert scale methods
Langmack, Keith A; Newton, Louise A; Jordan, Suzanne; Smith, Ruth
2016-01-01
Objective: To use a Likert scale method to optimize image quality (IQ) for cone beam CT (CBCT) soft-tissue matching for image-guided radiotherapy of the prostate. Methods: 23 males with local/locally advanced prostate cancer had the CBCT IQ assessed using a 4-point Likert scale (4 = excellent, no artefacts; 3 = good, few artefacts; 2 = poor, just able to match; 1 = unsatisfactory, not able to match) at three levels of exposure. The lateral separations of the subjects were also measured. The Friedman test and Wilcoxon signed-rank tests were used to determine if the IQ was associated with the exposure level. We used the point-biserial correlation and a χ2 test to investigate the relationship between the separation and IQ. Results: The Friedman test showed that the IQ was related to exposure (p = 2 × 10−7) and the Wilcoxon signed-rank test demonstrated that the IQ decreased as exposure decreased (all p-values <0.005). We did not find a correlation between the IQ and the separation (correlation coefficient 0.045), but for separations <35 cm, it was possible to use the lowest exposure parameters studied. Conclusion: We can reduce exposure factors to 80% of those supplied with the system without hindering the matching process for all patients. For patients with lateral separations <35 cm, the exposure factors can be reduced further to 64% of the original values. Advances in knowledge: Likert scales are a useful tool for measuring IQ in the optimization of CBCT IQ for soft-tissue matching in radiotherapy image guidance applications. PMID:26689092
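The statistical core of this analysis (a Friedman test across repeated exposure conditions) can be sketched on simulated Likert scores. The data below are made up to mimic the study design (23 patients, three exposure levels, quality dropping with exposure); average ranks are used for ties and no tie correction is applied, so the statistic is approximate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 4-point Likert image-quality scores for 23 patients at three
# exposure levels, simulated so that quality degrades as exposure is reduced.
n, k = 23, 3
full = rng.integers(3, 5, n).astype(float)            # full exposure: 3-4
reduced = np.clip(full - rng.integers(0, 2, n), 1, 4)
lowest = np.clip(full - rng.integers(1, 3, n), 1, 4)
scores = np.column_stack([full, reduced, lowest])

# Friedman statistic from within-subject ranks (average ranks for ties).
ranks = np.empty_like(scores)
for i in range(n):
    row, srow = scores[i], np.sort(scores[i])
    for val in np.unique(row):
        ranks[i, row == val] = np.mean(np.nonzero(srow == val)[0] + 1.0)
R = ranks.sum(axis=0)                                 # rank sum per condition
chi2 = 12.0 / (n * k * (k + 1)) * np.sum(R**2) - 3.0 * n * (k + 1)
print(f"rank sums (full, reduced, lowest): {R}")
print(f"Friedman chi-square = {chi2:.2f} (df = {k - 1})")
```

A chi-square value above the df = 2 critical value (5.99 at the 0.05 level) indicates that image quality is associated with exposure level, after which pairwise Wilcoxon signed-rank tests serve as post-hoc comparisons, as in the paper.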
Sun, Qi-Xing; Chen, Xu-Sheng; Ren, Xi-Dong; Mao, Zhong-Gui
2015-01-01
Nisin, natamycin, and ε-poly-L-lysine (ε-PL) are three safe, microbially produced food preservatives used in the food industry today. However, industrial production of ε-PL is currently performed in only a few countries. In order to realize large-scale ε-PL production by fermentation, the effects of the seed stage on cell growth and ε-PL production were investigated by monitoring pH in situ in a 5-L laboratory-scale fermenter. A significant increase in ε-PL production in fed-batch fermentation by Streptomyces sp. M-Z18 was achieved, at 48.9 g/L, through the optimization of several factors associated with the seed stage, including spore pretreatment, inoculum age, and inoculum level. Compared with conventional fermentation approaches using 24-h-old shake-flask seed broth as the inoculum, the maximum ε-PL concentration and productivity were enhanced by 32.3 and 36.6 %, respectively. The effect of the optimized inoculum conditions on large-scale ε-PL production was evaluated using a 50-L pilot-scale fermenter, attaining a maximum ε-PL production of 36.22 g/L in fed-batch fermentation and constituting the first report of ε-PL production at pilot scale. These results will be helpful for efficient ε-PL production by Streptomyces at pilot and plant scales.
NASA Astrophysics Data System (ADS)
Chiu, Y.; Nishikawa, T.
2013-12-01
With the increasing complexity of parameter-structure identification (PSI) in groundwater modeling, there is a need for robust, fast, and accurate optimizers in the groundwater-hydrology field. For this work, PSI is defined as identifying parameter dimension, structure, and value. In this study, Voronoi tessellation and differential evolution (DE) are used to solve the optimal PSI problem. Voronoi tessellation is used for automatic parameterization, whereby stepwise regression and the error covariance matrix are used to determine the optimal parameter dimension. DE is a novel global optimizer that can be used to solve nonlinear, nondifferentiable, and multimodal optimization problems. It can be viewed as an improved version of genetic algorithms and employs a simple cycle of mutation, crossover, and selection operations. DE is used to estimate the optimal parameter structure and its associated values. A synthetic numerical experiment of continuous hydraulic conductivity distribution was conducted to demonstrate the proposed methodology. The results indicate that DE can identify the global optimum effectively and efficiently. A sensitivity analysis of the control parameters (i.e., the population size, mutation scaling factor, crossover rate, and mutation schemes) was performed to examine their influence on the objective function. The proposed DE was then applied to solve a complex parameter-estimation problem for a small desert groundwater basin in Southern California. Hydraulic conductivity, specific yield, specific storage, fault conductance, and recharge components were estimated simultaneously. Comparison of DE and a traditional gradient-based approach (PEST) shows DE to be more robust and efficient. The results of this work not only provide an alternative for PSI in groundwater models, but also extend DE applications towards solving complex, regional-scale water management optimization problems.
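The mutation, crossover, and selection cycle described above can be sketched as a classic DE/rand/1/bin minimizer. The groundwater objective is replaced here by the standard multimodal Rastrigin test function, and the control parameters (population size NP, mutation scaling factor F, crossover rate CR) are illustrative values, not those tuned in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def rastrigin(x):
    """Multimodal test function; global minimum 0 at the origin."""
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

# DE/rand/1/bin with the control parameters named in the abstract.
NP, F, CR, dim, n_gen = 30, 0.7, 0.9, 2, 200
pop = rng.uniform(-5.12, 5.12, (NP, dim))
fit = np.array([rastrigin(x) for x in pop])

for _ in range(n_gen):
    for i in range(NP):
        # Mutation: donor vector from three distinct random members.
        a, b, c = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        donor = pop[a] + F * (pop[b] - pop[c])
        # Binomial crossover: mix donor and target, forcing one donor gene.
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True
        trial = np.where(cross, donor, pop[i])
        # Greedy selection: the trial replaces the target only if no worse.
        f_trial = rastrigin(trial)
        if f_trial <= fit[i]:
            pop[i], fit[i] = trial, f_trial

print(f"best objective after {n_gen} generations: {fit.min():.6f}")
```

In a real parameter-structure identification setting the objective would instead evaluate a groundwater-flow model, but the cycle above is the full extent of the optimizer's logic, which is what makes DE attractive for nondifferentiable problems.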
NASA Astrophysics Data System (ADS)
Fyta, Maria; Netz, Roland R.
2012-03-01
Using molecular dynamics (MD) simulations in conjunction with the SPC/E water model, we optimize ionic force-field parameters for seven different halide and alkali ions, considering a total of eight ion-pairs. Our strategy is based on simultaneously optimizing single-ion and ion-pair properties, i.e., we first fix ion-water parameters based on single-ion solvation free energies, and in a second step determine the cation-anion interaction parameters (traditionally given by mixing or combination rules) based on the Kirkwood-Buff theory without modification of the ion-water interaction parameters. In doing so, we have introduced scaling factors for the cation-anion Lennard-Jones (LJ) interaction that quantify deviations from the standard mixing rules. For the rather size-symmetric salt solutions involving bromide and chloride ions, the standard mixing rules work fine. On the other hand, for the iodide and fluoride solutions, corresponding to the largest and smallest anion considered in this work, a rescaling of the mixing rules was necessary. For iodide, the experimental activities suggest more tightly bound ion pairing than given by the standard mixing rules, which is achieved in simulations by reducing the scaling factor of the cation-anion LJ energy. For fluoride, the situation is different and the simulations show too large an attraction between fluoride and cations when compared with experimental data. For NaF, the situation can be rectified by increasing the cation-anion LJ energy. For KF, it proves necessary to increase the effective cation-anion Lennard-Jones diameter. The optimization strategy outlined in this work can be easily adapted to different kinds of ions.
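The role of the scaling factors relative to the standard combination rules can be illustrated with the Lorentz-Berthelot rules. The ion parameters and the 0.85 energy-scaling factor below are made-up numbers for illustration, not the paper's optimized values.

```python
import math

def lorentz_berthelot(sigma_i, eps_i, sigma_j, eps_j,
                      scale_sigma=1.0, scale_eps=1.0):
    """Lorentz-Berthelot mixing rules for cross LJ parameters, with
    multiplicative scaling factors quantifying deviations from them
    (scale factors of 1.0 recover the unmodified rules)."""
    sigma_ij = scale_sigma * 0.5 * (sigma_i + sigma_j)   # arithmetic mean
    eps_ij = scale_eps * math.sqrt(eps_i * eps_j)        # geometric mean
    return sigma_ij, eps_ij

# Illustrative single-ion LJ parameters (sigma in nm, epsilon in kJ/mol).
na = (0.23, 0.45)
iodide = (0.49, 0.27)

# Standard rules vs. a reduced cation-anion LJ energy, as described for
# iodide salts; the 0.85 factor here is an invented example value.
std = lorentz_berthelot(*na, *iodide)
scaled = lorentz_berthelot(*na, *iodide, scale_eps=0.85)
print("standard sigma_ij, eps_ij:", std)
print("scaled   sigma_ij, eps_ij:", scaled)
```

Reducing `scale_eps` below 1 deepens nothing in the single-ion terms; it only strengthens the relative cation-anion pairing in the sense discussed in the abstract, leaving the ion-water parameters untouched.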
The Efficacy of Self-Report Measures in Predicting Social Phobia in African American Adults.
Chapman, L Kevin; Petrie, Jenny M; Richards, Allyn
2015-03-01
Empirical literature pertaining to anxiety in African Americans has been relatively sparse. More recent studies indicate that the construct of social fear is different in African Americans than in non-Hispanic Whites. Although some of these studies have examined factor structure utilizing self-report measures of anxiety in African American samples, none to date have examined the clinical utility of these measures in predicting anxiety diagnoses, particularly social phobia. A total of sixty-five African American adults from the community completed the Fear Survey Schedule-Second Edition (FSS-II), Social Interaction Anxiety Scale (SIAS), Social Phobia Scale (SPS), and Albany Panic and Phobia Questionnaire (APPQ). The Anxiety Disorders Interview Schedule-Fourth Edition (ADIS-IV) was administered to all participants to specify differential diagnoses of anxiety and related disorders. Twenty-three African American adults were diagnosed with social phobia, leaving 42 diagnostic controls. Results suggest that the social anxiety factors were highly predictive of a social phobia diagnosis (AUC=.84 to .90; CI .73-.98, p<.01), and sensitivity and specificity rates revealed optimal cutoff scores for each measure. The optimal cutoff scores reveal the clinical utility of the social fear factor from these measures in screening for social phobia in African Americans. Future directions and implications are discussed. PsycINFO, PubMed, Medline. © 2015 National Medical Association. Published by Elsevier Inc. All rights reserved.
Song, Yong-Hong; Sun, Xue-Wen; Jiang, Bo; Liu, Ji-En; Su, Xian-Hui
2015-12-01
Design of experiment (DoE) is a statistics-based technique for experimental design that can overcome the shortcomings of the traditional one-factor-at-a-time (OFAT) approach to protein purification optimization. In this study, a DoE approach was applied to optimizing the purification of a recombinant single-chain variable fragment (scFv) against the type 1 insulin-like growth factor receptor (IGF-1R) expressed in Escherichia coli. In the first capture step, using Capto L, a 2-level fractional factorial analysis followed by a central composite circumscribed (CCC) design was used to identify the optimal elution conditions. Two main effects, pH and trehalose, were identified, and high recovery (above 95%) and a low aggregate ratio (below 10%) were achieved in the pH range from 2.9 to 3.0 with 32-35% (w/v) trehalose added. In the second step, using cation exchange chromatography, an initial screening of media and elution pH and a following CCC design were performed, whereby the optimal selectivity of the scFv was obtained on Capto S at pH near 6.0, and the optimal conditions for achieving high dynamic binding capacity (DBC) and purity were identified as a pH range of 5.9-6.1 and a loading conductivity range of 5-12.5 mS/cm. After a further gel filtration step, the final purified scFv with a purity of 98% was obtained. Finally, the optimized conditions were verified by a 20-fold scale-up experiment. The purities and yields of intermediate and final products all fell within the regions predicted by the DoE approach, suggesting the robustness of the optimized conditions. We propose that the DoE approach described here is also applicable to the production of other recombinant antibody constructs. Copyright © 2015 Elsevier Inc. All rights reserved.
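The structure of a CCC design of the kind used here can be sketched with a small generator of design points. The factor ranges below for pH and trehalose are illustrative choices loosely spanning the ranges mentioned in the abstract, not the study's actual design levels.

```python
import itertools
import math

# Two factors from the capture-step optimization: elution pH and trehalose
# concentration (% w/v). Low/high levels are illustrative assumptions.
factors = {"pH": (2.8, 3.2), "trehalose": (25.0, 40.0)}
names = list(factors)
k = len(factors)

def coded_to_real(name, coded):
    """Map a coded level (-1 .. +1) to the factor's physical units."""
    lo, hi = factors[name]
    center, half = (lo + hi) / 2.0, (hi - lo) / 2.0
    return center + coded * half

# 2-level full factorial: all corner points at coded levels +/-1.
corners = list(itertools.product([-1.0, 1.0], repeat=k))

# A central composite circumscribed (CCC) design augments the factorial
# with axial points at +/-alpha (alpha = sqrt(2) for two factors gives a
# rotatable design) and a center point, enabling a quadratic fit.
alpha = math.sqrt(2.0)
axial = []
for j in range(k):
    for a in (-alpha, alpha):
        pt = [0.0] * k
        pt[j] = a
        axial.append(tuple(pt))
design = corners + axial + [(0.0,) * k]

for coded in design:
    real = {n: round(coded_to_real(n, c), 3) for n, c in zip(names, coded)}
    print(coded, real)
```

Running each design point as an experiment and regressing the responses (recovery, aggregate ratio) on the coded levels yields the quadratic response surface from which the optimum is read off, which is the essence of the two-step screening the abstract describes.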
Multiscale approach to contour fitting for MR images
NASA Astrophysics Data System (ADS)
Rueckert, Daniel; Burger, Peter
1996-04-01
We present a new multiscale contour fitting process which combines information about the image and the contour of the object at different levels of scale. The algorithm is based on energy-minimizing deformable models but avoids some of the problems associated with these models. The segmentation algorithm starts by constructing a linear scale-space of an image through convolution of the original image with a Gaussian kernel at different levels of scale, where the scale corresponds to the standard deviation of the Gaussian kernel. At high levels of scale, large-scale features of the objects are preserved while small-scale features, such as object details and noise, are suppressed. In order to maximize the accuracy of the segmentation, the contour of the object of interest is then tracked in scale-space from coarse to fine scales. We propose a hybrid multi-temperature simulated annealing (SA) optimization to minimize the energy of the deformable model. At high levels of scale the SA optimization is started at high temperatures, enabling it to find a globally optimal solution. At lower levels of scale the SA optimization is started at lower temperatures (at the lowest level the temperature is close to 0). This enforces a more deterministic behavior of the SA optimization at lower scales and leads to an increasingly local optimization, as high energy barriers cannot be crossed. The performance and robustness of the algorithm have been tested on spin-echo MR images of the cardiovascular system. The task was to segment the ascending and descending aorta in 15 datasets from different individuals in order to measure regional aortic compliance. The results show that the algorithm provides more accurate segmentation results than the classic contour fitting process and is at the same time very robust to noise and initialization.
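The multi-temperature strategy can be sketched on a one-dimensional toy energy function (the deformable-model energy itself is not reproduced here): a high starting temperature at the coarse scale permits global exploration, while a near-zero starting temperature at the fine scale makes the search almost deterministic and local. All functions and parameter values below are illustrative assumptions.

```python
import math
import random

random.seed(3)

def energy(x):
    """Toy multimodal energy landscape with its deepest minimum near x = 2.2."""
    return (x - 2) ** 2 + 1.5 * math.sin(5 * x)

def anneal(x0, t_start, t_end=1e-3, steps=2000, step_size=0.5):
    """Metropolis simulated annealing with geometric cooling from t_start."""
    x, e = x0, energy(x0)
    cool = (t_end / t_start) ** (1.0 / steps)
    t = t_start
    for _ in range(steps):
        cand = x + random.uniform(-step_size, step_size)
        de = energy(cand) - e
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if de < 0 or random.random() < math.exp(-de / t):
            x, e = cand, energy(cand)
        t *= cool
    return x, e

# Coarse scale: high temperature, global search from a poor initial guess.
x_coarse, _ = anneal(x0=-4.0, t_start=5.0)
# Fine scale: near-zero temperature and small steps, local refinement only.
x_fine, e_fine = anneal(x0=x_coarse, t_start=0.01, step_size=0.05)
print(f"coarse estimate {x_coarse:.3f}, refined {x_fine:.3f}, "
      f"energy {e_fine:.3f}")
```

The second call mirrors the paper's low-scale behavior: with the temperature close to zero, high energy barriers cannot be crossed, so the optimization only polishes the solution handed down from the coarser scale.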
Large-scale structural optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.
1983-01-01
Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large-scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single design methodology rather than trade-offs, and the incompatibility of large-scale optimization with single-program, single-computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop; full analysis is then performed only periodically. Problem-dependent software can be separated from the generic code using a systems programming technique, and it then embodies the definitions of the design variables, objective function, and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.
Small-angle scattering from 3D Sierpinski tetrahedron generated using chaos game
NASA Astrophysics Data System (ADS)
Slyamov, Azat
2017-12-01
We approximate a three-dimensional version of the deterministic Sierpinski gasket (SG), also known as the Sierpinski tetrahedron (ST), using the chaos game representation (CGR). The structural properties of the fractal generated by both the deterministic and CGR algorithms are determined using the small-angle scattering (SAS) technique. We calculate the corresponding monodisperse structure factor of the ST using an optimized Debye formula. We show that scattering from the CGR of the ST recovers the basic fractal properties, such as the fractal dimension, iteration number, scaling factor, overall size of the system, and the number of units composing the fractal.
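The CGR construction of the ST can be sketched in a few lines: from an arbitrary start point, repeatedly jump halfway (scaling factor 1/2) toward a randomly chosen vertex of a regular tetrahedron. As a stand-in for the abstract's SAS-based analysis, a simple box-counting estimate checks that the point cloud recovers the fractal dimension log(4)/log(2) = 2; vertex placement and point counts are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)

# Vertices of a regular tetrahedron (alternating cube corners):
# the four attractors of the chaos game.
verts = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)

# Chaos game representation: midpoint jumps toward random vertices.
n_points, burn_in = 50_000, 100
p = rng.uniform(-1, 1, 3)
pts = np.empty((n_points, 3))
for i in range(n_points + burn_in):
    p = (p + verts[rng.integers(4)]) / 2.0
    if i >= burn_in:                 # discard transient toward the attractor
        pts[i - burn_in] = p

def box_count(points, n_boxes):
    """Number of occupied cells on an n_boxes^3 grid over [-1, 1]^3."""
    idx = np.floor((points + 1) / 2 * n_boxes).astype(int)
    idx = np.clip(idx, 0, n_boxes - 1)
    return len({tuple(e) for e in idx})

# Two-scale box-counting slope as a rough fractal-dimension estimate.
n1, n2 = 8, 16
d_est = np.log(box_count(pts, n2) / box_count(pts, n1)) / np.log(n2 / n1)
print(f"estimated fractal dimension: {d_est:.2f} (exact value: 2)")
```

The same point cloud could be fed into a Debye-formula evaluation of the structure factor, as the abstract describes, but the dimension check above already demonstrates that the CGR reproduces the deterministic fractal's scaling.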
Contador, Israel; Fernández-Calvo, Bernardino; Palenzuela, David L; Campos, Francisco Ramos; Rivera-Navarro, Jesús; de Lucena, Virginia Menezes
2015-11-01
We examined whether grounded optimism and external locus of control are associated with admission to dementia day care centers (DCCs). A total of 130 informal caregivers were recruited from the Alzheimer's Association in Salamanca (northwest Spain). All caregivers completed an assessment protocol that included the Battery of Generalized Expectancies of Control Scales (BEEGC-20, acronym in Spanish) as well as depression and burden measures. The decision of the care setting at baseline assessment (own home vs DCC) was considered the main outcome measure in the logistic regression analyses. Grounded optimism was a preventive factor for admission (odds ratio [OR]: 0.34 and confidence interval [CI]: 0.15-0.75), whereas external locus of control (OR: 2.75, CI: 1.25-6.03) increased the probabilities of using DCCs. Depression mediated the relationship between optimism and DCCs, but this effect was not consistent for burden. Grounded optimism promotes the extension of care at home for patients with dementia. © The Author(s) 2013.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walz-Flannigan, A; Lucas, J; Buchanan, K
Purpose: Manual technique selection in radiography is needed for imaging situations where there is difficulty in proper positioning for AEC, for prostheses, for non-bucky imaging, or for guiding image repeats. Basic information about how to provide consistent image signal and contrast for various kV and tissue thicknesses is needed to create manual technique charts, and is relevant for physicists involved in technique chart optimization. Guidance on technique combinations and the rules of thumb still in use today for providing consistent image signal are based on optical-density measurements of screen-film combinations on older-generation x-ray systems. Tools such as a kV-scale chart can be useful for knowing how to modify mAs when kV is changed in order to maintain a consistent image receptor signal level. We evaluate these tools on modern equipment for use in optimizing properly size-scaled techniques. Methods: We used a water phantom to measure the calibrated signal change for CR and DR (with grid) at various beam energies. Tube current values were calculated that would yield a consistent image signal response. The data were fit to provide sufficient granularity to compose a technique-scale chart. Tissue thickness was approximated as equivalent to 80% of the water depth. Results: We created updated technique-scale charts, providing mAs and kV combinations that achieve consistent signal for CR and DR at various tissue-equivalent thicknesses. We show how this information can be used to create properly scaled, size-based manual technique charts. Conclusion: The relative scaling of mAs and kV for constant signal (i.e., the shape of the curve) appears substantially similar between film-screen and CR/DR. This supports the notion that image-receptor-related differences are minor factors for relative (not absolute) changes in mAs with varying kV. As demonstrated, however, these difficult-to-find detailed technique scales are useful tools for manual chart optimization.
Wei, Zhen-hua; Duan, Ying-yi; Qian, Yong-qing; Guo, Xiao-feng; Li, Yan-jun; Jin, Shi-he; Zhou, Zhong-Xin; Shan, Sheng-yan; Wang, Chun-ru; Chen, Xue-Jiao; Zheng, Yuguo; Zhong, Jian-Jiang
2014-09-01
Polysaccharides and ganoderic acids (GAs) are the major bioactive constituents of Ganoderma species. However, commercialization of their production has been limited by low yields in the submerged culture of Ganoderma, despite improvements made in recent years. In this work, twelve Ganoderma strains were screened for efficient production of polysaccharides and GAs, and Ganoderma lucidum 5.26 (GL 5.26), which had never been reported in a fermentation process, was found to be the most efficient among the tested strains. The fermentation medium was then optimized for GL 5.26 by statistical methods. First, glucose and yeast extract were found to be the optimum carbon and nitrogen sources according to single-factor tests. Ferric sulfate was found to have a significant effect on GL 5.26 biomass production according to the results of a Plackett-Burman design. The concentrations of glucose, yeast extract, and ferric sulfate were further optimized by response surface methodology. The optimum medium composition was 55 g/L of glucose, 14 g/L of yeast extract, and 0.3 g/L of ferric sulfate, with the other medium components unchanged. The optimized medium was tested in a 10-L bioreactor, and the production of biomass, IPS, total GAs, and GA-T was enhanced by 85, 27, 49, and 93 %, respectively, compared to the initial medium. The fermentation process was scaled up to a 300-L bioreactor, which showed good IPS (3.6 g/L) and GAs (670 mg/L) production. The biomass was 23.9 g/L in the 300-L bioreactor, the highest biomass production at pilot scale. According to this study, the strain GL 5.26 showed good fermentation properties with the optimized medium. It may be a candidate industrial strain upon further process optimization and scale-up study.
Optimization of coupled device based on optical fiber with crystalline and integrated resonators
NASA Astrophysics Data System (ADS)
Bassir, David; Salzenstein, Patrice; Zhang, Mingjun
2017-05-01
Optical resonators on a chip offer advantages in terms of reproducibility, can be designed with various topologies, and integrate readily with other optical devices. To increase the Q-factor from the lower range (10^4-10^6) to a higher one (10^8-10^10) [1-4], crystalline resonators are used. However, coupling an optical signal from a tapered fiber to a crystalline resonator is much more complicated than coupling from a defined ridge to a resonator designed on a chip. In this work, we focus on the optimization of crystalline resonators under a straight waveguide (based on the COMSOL multiphysics software) [5-7], subject also to the technological constraints of manufacturing. The coupling problem at the nanoscale makes our optimization problem more dynamic in terms of the design space.
Examining the Association Between Implementation and Outcomes
Pas, Elise T.; Bradshaw, Catherine P.
2012-01-01
Although there is an established literature supporting the efficacy of a variety of prevention programs, there has been less empirical work on the translation of such research to everyday practice or on what happens when programs are scaled up state-wide. There is a considerable need for more research on factors that enhance implementation of programs and optimize outcomes, particularly in school settings. The current paper examines how the implementation fidelity of an increasingly popular and widely disseminated prevention model, School-Wide Positive Behavioral Interventions and Supports (SW-PBIS), relates to student outcomes within the context of a state-wide scale-up effort. Data come from a scale-up effort of SW-PBIS in Maryland; the sample included 421 elementary and middle schools trained in SW-PBIS. SW-PBIS fidelity, as measured by one of three fidelity measures, was found to be associated with higher math achievement, higher reading achievement, and lower truancy. School contextual factors were related to implementation levels and outcomes. Implications for scale-up efforts of behavioral and mental health interventions and measurement considerations are discussed. PMID:22836758
Lin, Chao-Yuan; Fu, Kuei-Lin; Lin, Cheng-Yu
2016-11-01
Recent extreme rainfall events, linked to climate change, have led to many landslides in Taiwan. How to effectively promote post-disaster treatment and/or management works in a watershed/drainage basin is a crucial issue. In the process of watershed treatment and/or management, disaster hotspot scanning and treatment priority setup should be carried out in advance. A scanning method using the landslide ratio to determine the appropriate outlet of a watershed of interest, and an optimal subdivision system with better homogeneity and accuracy in landslide ratio estimation, were developed to support efficient execution of treatment and/or management works. Topography is a key factor affecting the watershed landslide ratio. Considering the complexity and uncertainty of the natural phenomenon, multivariate analysis was applied to understand the relationship between topographic factors and the landslide ratio in the watershed of interest. The concept of the species-area curve, which is usually adopted in on-site vegetation investigations to determine a suitable quadrat size, was used to derive the optimal threshold in subdivisions. Results show that three main component axes, comprising factors of scale, network, and shape extracted from a Digital Terrain Model coupled with landslide areas, can effectively explain the characteristics of the landslide ratio in the watershed, and that a relation curve obtained from the accuracy of landslide ratio classification and the number of subdivisions can be established to derive the optimal subdivision of the watershed. The subdivision method promoted in this study can be further used for priority ranking and benefit assessment of landslide treatment in a watershed.
Chiu, Yueh-Hsiu Mathilda; Sheffield, Perry E; Hsu, Hsiao-Hsien Leon; Goldstein, Jonathan; Curtin, Paul C; Wright, Rosalind J
2017-12-01
The ten-item Edinburgh Postnatal Depression Scale (EPDS) is one of the most widely used self-report measures of postpartum depression. Although originally described as a one-dimensional measure, the recognition that depressive symptoms may be differentially experienced across cultural and racial/ethnic groups has led to studies examining the structural equivalence of the EPDS in different populations. Variation of the factor structure remains understudied across racial/ethnic groups of US women. We examined the factor structure of the EPDS assessed 6 months postpartum in 515 women (29% black, 53% Hispanic, 18% white) enrolled in an urban Boston longitudinal birth cohort. Exploratory factor analysis (EFA) identified a three-factor model, including depression, anxiety, and anhedonia subscales, as the best fit in our sample as a whole and across race/ethnicity. Confirmatory factor analysis (CFA) was used to examine the fit of both the two- and three-factor models reported in prior research. CFA confirmed the best fit for a three-factor model, with minimal differences across race/ethnicity. "Things get on top of me" loaded on the anxiety factor among Hispanics, but loaded on the depression factor in whites and African Americans. These findings suggest that the EPDS factor structure may need to be adjusted for diverse samples and warrants further study.
A quality assessment of 3D video analysis for full scale rockfall experiments
NASA Astrophysics Data System (ADS)
Volkwein, A.; Glover, J.; Bourrier, F.; Gerber, W.
2012-04-01
The main goal of full-scale rockfall experiments is to retrieve a 3D trajectory of a boulder along the slope. Such trajectories can then be used to calibrate rockfall simulation models. This contribution presents the application of video analysis techniques to capture the rockfall velocity in free-fall full-scale rockfall experiments along a rock face with an inclination of about 50 degrees. Different scaling methodologies have been evaluated; they differ mainly in the way the scaling factors between the movie frames and reality are determined. For this purpose, scale bars and targets with known dimensions were distributed in advance along the slope. The individual scaling approaches are briefly described as follows: (i) The image raster is scaled to the distant fixed scale bar, then recalibrated to the plane of the passing rock boulder by taking the measured position of the nearest impact as the distance to the camera; the distances between the camera, scale bar, and passing boulder are surveyed. (ii) The image raster was scaled using the four targets (identified using the frontal video) nearest to the trajectory to be analyzed, with the average of their scaling factors taken as the scaling factor. (iii) The image raster was scaled using the four nearest targets, with the scaling factor for a trajectory calculated by balancing the mean scaling factors of the two nearest and the two farthest targets in relation to their mean distance to the analyzed trajectory. (iv) The same as the previous method, but with the scaling factors varying along the trajectory. It was shown that a direct measure of the scaling target and the nearest impact zone is the most accurate. If a constant plane is assumed, the lateral deviations of the rock boulder from the fall line are not accounted for, which adds error to the analysis. A combination of scaling methods (i) and (iv) is therefore considered to give the best results.
For best results regarding the rough lateral positioning along the slope, the frontal video must also be scaled. The error in scaling the video images can be evaluated by comparing the vertical trajectory component over time with the theoretical parabolic trend dictated by gravity. The different tracking techniques used to plot the position of the boulder's center of gravity all generated positional data with minimal error, acceptable for trajectory analysis. However, when calculating instantaneous velocities, the amplification of this error becomes unacceptable. A regression analysis of the data is therefore helpful to optimize the trajectory and velocity estimates.
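Scaling method (iii) can be illustrated in a few lines. This is a hypothetical sketch of one plausible reading of the method: the pixel-to-metre factor for a trajectory balances the mean factors of the two nearest and two farthest targets by their inverse mean distance to the trajectory. The target geometry below is invented for illustration.

```python
# Sketch of scaling method (iii): balance near/far target scaling factors by
# inverse mean distance to the analysed trajectory. Values are illustrative.

def scale_factor(pixel_len, real_len):
    """Pixel-to-metre scaling factor from a target of known size."""
    return real_len / pixel_len

def balanced_scale(near_targets, far_targets):
    """Weighted mean of the near/far group factors; closer targets weigh more."""
    def mean_factor(targets):
        return sum(scale_factor(p, r) for p, r, _ in targets) / len(targets)
    def mean_dist(targets):
        return sum(d for _, _, d in targets) / len(targets)
    s_near, s_far = mean_factor(near_targets), mean_factor(far_targets)
    w_near, w_far = 1.0 / mean_dist(near_targets), 1.0 / mean_dist(far_targets)
    return (w_near * s_near + w_far * s_far) / (w_near + w_far)

# Each target: (length in pixels, known length in metres, distance to trajectory in m)
near = [(200.0, 1.0, 2.0), (190.0, 1.0, 2.5)]
far = [(100.0, 1.0, 8.0), (95.0, 1.0, 9.0)]
print(balanced_scale(near, far))   # metres per pixel for this trajectory
```

Method (iv) would recompute this factor frame by frame as the boulder moves along the trajectory.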
Photonic crystal lasers using wavelength-scale embedded active region
NASA Astrophysics Data System (ADS)
Matsuo, Shinji; Sato, Tomonari; Takeda, Koji; Shinya, Akihiko; Nozaki, Kengo; Kuramochi, Eiichi; Taniyama, Hideaki; Notomi, Masaya; Fujii, Takuro; Hasebe, Koichi; Kakitsuka, Takaaki
2014-01-01
Lasers with ultra-low operating energy are desired for use in chip-to-chip and on-chip optical interconnects. If we are to reduce the operating energy, we must reduce the active volume. Therefore, a photonic crystal (PhC) laser with a wavelength-scale cavity has attracted a lot of attention because a PhC provides a large Q-factor with a small volume. To improve this device's performance, we employ an embedded active region structure in which the wavelength-scale active region is buried with an InP PhC slab. This structure enables us to achieve effective confinement of both carriers and photons, and to improve the thermal resistance of the device. Thus, we have obtained a large external differential quantum efficiency of 55% and an output power of -10 dBm by optical pumping. For electrical pumping, we use a lateral p-i-n structure that employs Zn diffusion and Si ion implantation for p-type and n-type doping, respectively. We have achieved room-temperature continuous-wave operation with a threshold current of 7.8 µA and a maximum 3 dB bandwidth of 16.2 GHz. The results of an experimental bit error rate measurement with a 10 Gbit s-1 NRZ signal reveal the minimum operating energy for transferring a single bit of 5.5 fJ. These results show the potential of this laser to be used for very short reach interconnects. We also describe the optimal design of cavity quality (Q) factor in terms of achieving a large output power with a low operating energy using a calculation based on rate equations. When we assume an internal absorption loss of 20 cm-1, the optimized coupling Q-factor is 2000.
Fast Decentralized Averaging via Multi-scale Gossip
NASA Astrophysics Data System (ADS)
Tsianos, Konstantinos I.; Rabbat, Michael G.
We are interested in the problem of computing the average consensus in a distributed fashion on random geometric graphs. We describe a new algorithm called Multi-scale Gossip, which employs a hierarchical decomposition of the graph to partition the computation into tractable sub-problems. Using only pairwise messages of fixed size that travel at most O(n^{1/3}) hops, our algorithm is robust and has a communication cost of O(n log log n log ε^{-1}) transmissions, which is order-optimal up to the logarithmic factor in n. Simulated experiments verify the good expected performance on graphs of many thousands of nodes.
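The pairwise-gossip primitive underlying such algorithms is easy to sketch. The following is a minimal baseline (not the paper's hierarchical decomposition, which is omitted here): nodes on a random edge repeatedly replace their values with the pairwise mean, which preserves the sum and drives all values to the global average. The ring graph and values are illustrative.

```python
# Minimal randomized pairwise gossip: each activation averages the two
# endpoints of a random edge. Sum is preserved, so values converge to the mean.
import random

def gossip_average(values, edges, rounds=20000, seed=0):
    rng = random.Random(seed)
    x = list(values)
    for _ in range(rounds):
        i, j = rng.choice(edges)      # wake a random edge
        m = (x[i] + x[j]) / 2.0       # pairwise average
        x[i] = x[j] = m
    return x

n = 20
edges = [(i, (i + 1) % n) for i in range(n)]   # ring graph, illustrative
values = [float(i) for i in range(n)]
estimates = gossip_average(values, edges)
true_avg = sum(values) / n
print(max(abs(v - true_avg) for v in estimates))   # worst-case deviation
```

Multi-scale Gossip improves on this primitive by running such exchanges within a hierarchy of subgraphs, which is what yields the near-linear communication cost quoted above.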
Application of GA-SVM method with parameter optimization for landslide development prediction
NASA Astrophysics Data System (ADS)
Li, X. Z.; Kong, J. M.
2013-10-01
Prediction of the landslide development process is a persistent topic in landslide research, and many methods for predicting landslide displacement series have been proposed. The support vector machine (SVM) has proved to be a novel algorithm with good performance; however, the performance strongly depends on the proper selection of the SVM model parameters (C and γ). In this study, we present an application of the GA-SVM method with parameter optimization to landslide displacement rate prediction, taking a typical large-scale landslide in a hydro-electric engineering area of Southwest China as a case study. On the basis of the basic characteristics and monitoring data of the landslide, a single-factor GA-SVM model and a multi-factor GA-SVM model were built and compared with single-factor and multi-factor SVM models of the landslide. The results show that all four models achieve high prediction accuracy, but the GA-SVM models are slightly more accurate than the SVM models, and the multi-factor models are slightly more accurate than the single-factor models. The multi-factor GA-SVM model is the most accurate, with the smallest RMSE of 0.0009 and the largest RI of 0.9992.
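The GA parameter search can be sketched with a small genetic algorithm over (log2 C, log2 γ). To keep the sketch self-contained, the "validation error" below is a stand-in quadratic surface with its optimum at C = 2^5, γ = 2^-3; in the paper's setting the objective would be the cross-validated error of the SVM displacement-rate model. All names and values here are assumptions for illustration.

```python
# Hedged sketch: genetic search over SVM hyperparameters on a log2 scale.
import random

def validation_error(log_c, log_g):
    # Stand-in objective with known optimum at log2(C)=5, log2(gamma)=-3;
    # a real run would evaluate SVM cross-validation error here.
    return (log_c - 5.0) ** 2 + (log_g + 3.0) ** 2

def ga_search(fitness, lo=-10.0, hi=10.0, pop=30, gens=60, seed=1):
    rng = random.Random(seed)
    popn = [(rng.uniform(lo, hi), rng.uniform(lo, hi)) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda p: fitness(*p))
        parents = popn[: pop // 2]                # truncation selection (elitist)
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            t = rng.random()                      # blend crossover
            child = [t * x + (1 - t) * y for x, y in zip(a, b)]
            child = tuple(min(hi, max(lo, v + rng.gauss(0.0, 0.3)))  # mutation
                          for v in child)
            children.append(child)
        popn = parents + children
    return min(popn, key=lambda p: fitness(*p))

best = ga_search(validation_error)
print(best)   # near (5, -3), i.e. C = 2**5, gamma = 2**-3
```

Swapping the surrogate objective for a k-fold cross-validation score of an SVM regressor recovers the GA-SVM scheme the abstract describes.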
Xu, Hui Qiu; Huang, Yin Hua; Wu, Zhi Feng; Cheng, Jiong; Li, Cheng
2016-10-01
Based on 641 agricultural topsoil samples (0-20 cm) and the 2005 land use map of Guangzhou, we used single-factor pollution indices, Pearson/Spearman correlations, and partial redundancy analyses to quantify soil contamination with As and Cd and its relationships with landscape heterogeneity at three grid scales (2 km×2 km, 5 km×5 km, and 10 km×10 km), as well as the determinant landscape heterogeneity factors at each grid scale. In total, 5.3% and 7.2% of soil samples were contaminated with As and Cd, respectively. At the three scales, agricultural soil As and Cd contamination was generally significantly correlated with the parent materials' composition, river/road density, and the landscape patterns of several land use types, indicating that parent materials, sewage irrigation, and human activities (e.g., industrial and traffic activities, and the addition of pesticides and fertilizers) were likely the main input pathways of the trace metals. Three subsets of landscape heterogeneity variables (i.e., parent materials, distance-density variables, and landscape patterns) could explain 12.7%-42.9% of the variation in soil contamination with As and Cd; their explanatory power increased with the grid scale, and the determinant factors varied with scale. Parent materials contributed more to the variation in soil contamination at the 2 and 10 km grid scales, while the contributions of landscape patterns and distance-density variables generally increased with the grid scale. Adjusting the distribution of cropland and optimizing the landscape pattern of land use types are important ways to reduce soil contamination at local scales, to which urban planners and decision makers should pay more attention.
Performance of Grey Wolf Optimizer on large scale problems
NASA Astrophysics Data System (ADS)
Gupta, Shubham; Deep, Kusum
2017-01-01
Numerous nature-inspired techniques for solving nonlinear continuous optimization problems have been proposed in the literature; they can be applied to real-life problems where conventional techniques fail. The Grey Wolf Optimizer is one such technique, and it has gained popularity over the last two years. The objective of this paper is to investigate the performance of the Grey Wolf Optimization algorithm on large-scale optimization problems. The algorithm is tested on five common scalable benchmark problems from the literature: the Sphere, Rosenbrock, Rastrigin, Ackley, and Griewank functions, with dimensions varied from 50 to 1000. The results indicate that the Grey Wolf Optimizer is a powerful nature-inspired optimization algorithm for large-scale problems, except on the Rosenbrock function, which is unimodal.
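The Grey Wolf Optimizer update is compact enough to sketch: the three best wolves (alpha, beta, delta) attract the rest, with an exploration parameter that decays linearly over the run. This is a minimal sketch on the Sphere function; the paper uses dimensions from 50 to 1000, while 30 is used here only to keep the example fast.

```python
# Minimal Grey Wolf Optimizer on the Sphere benchmark.
import random

def sphere(x):
    return sum(v * v for v in x)

def gwo(f, dim, n_wolves=20, iters=200, lb=-100.0, ub=100.0, seed=0):
    rng = random.Random(seed)
    wolves = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=f)
        alpha, beta, delta = wolves[0][:], wolves[1][:], wolves[2][:]
        a = 2.0 - 2.0 * t / iters                  # decays linearly from 2 to 0
        for w in wolves:
            for d in range(dim):
                pulls = 0.0
                for leader in (alpha, beta, delta):    # encircling behaviour
                    A = 2 * a * rng.random() - a
                    C = 2 * rng.random()
                    D = abs(C * leader[d] - w[d])
                    pulls += leader[d] - A * D
                w[d] = min(ub, max(lb, pulls / 3.0))   # mean of the three pulls
    return min(f(w) for w in wolves)

best = gwo(sphere, dim=30)
print(best)   # far below the ~1e5 value of a random start
```

The same loop scales directly to the 50-1000 dimensional settings studied in the paper; only `dim` and the budget change.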
Assessment of Personality as a Requirement for Next Generation Ship Optimal Manning
2012-09-01
…the Five Factor Model (FFM) to classify personality and its associated scales provided a renewed foundation for personality trait research (Digman, 1990). Costa and McCrae's (1992) FFM of personality traits (openness, conscientiousness, extraversion, agreeableness, and emotional stability) has developed into the…
Zamorski, Mark A.; Colman, Ian
2018-01-01
The psychometric properties of the ten-item Kessler Psychological Distress scale (K10) have been extensively explored in civilian populations. However, documentation of its psychometric properties in military populations is limited, and there is no universally accepted cut-off score on the K10 to distinguish clinical vs. sub-clinical levels of distress. The objective of this study was to examine the psychometric properties of the K10 in Canadian Armed Forces personnel. Data on 6700 Regular Forces personnel were obtained from the 2013 Canadian Forces Mental Health Survey. The internal consistency and factor structure of the K10 (range, 0–40) were examined using confirmatory factor analysis (CFA). Receiver Operating Characteristic (ROC) analysis was used to select optimal cut-offs for the K10, using the presence/absence of any of four past-month disorders as the outcome (posttraumatic stress disorder, major depressive episode, generalized anxiety disorder, and panic disorder). Cronbach’s alpha (0.88) indicated a high level of internal consistency of the K10. Results from CFA indicated that a single-factor 10-item construct had an acceptable overall fit: root mean square error of approximation (RMSEA) = 0.05; 90% confidence interval (CI): 0.05–0.06, comparative fit index (CFI) = 0.99, Tucker-Lewis Index (TLI) = 0.99, weighted root mean square residual (WRMR) = 2.06. K10 scores were strongly associated with both the presence and recency of all four measured disorders. The area under the ROC curve was 0.92, demonstrating excellent predictive value for past-30-day disorders. A K10 score of 10 or greater was optimal for screening purposes (sensitivity = 86%; specificity = 83%), while a score of 17 or greater (sensitivity = 53%; specificity = 97%) was optimal for prevalence estimation of clinically significant psychological distress, in that it resulted in equal numbers of false positives and false negatives.
Our results suggest that the K10 scale has satisfactory psychometric properties for use as a measure of non-specific psychological distress in the military population. PMID:29698459
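The two cut-off criteria described above can be sketched on synthetic data: a screening cut-off maximizing Youden's J (sensitivity + specificity - 1) and a prevalence-estimation cut-off at which false positives balance false negatives. The simulated scores and disorder labels below are purely illustrative, not the survey data.

```python
# Sketch of screening vs prevalence-estimation cut-off selection on a 0-40
# scale, using simulated scores (cases ~ N(20, 5), controls ~ N(8, 5)).
import random

def confusion(scores, labels, cut):
    tp = sum(1 for s, y in zip(scores, labels) if s >= cut and y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= cut and not y)
    fn = sum(1 for s, y in zip(scores, labels) if s < cut and y)
    tn = sum(1 for s, y in zip(scores, labels) if s < cut and not y)
    return tp, fp, fn, tn

def screening_cut(scores, labels):
    """Cut-off maximizing Youden's J = sensitivity + specificity - 1."""
    best, best_j = 0, -1.0
    for cut in range(0, 41):
        tp, fp, fn, tn = confusion(scores, labels, cut)
        j = tp / (tp + fn) + tn / (tn + fp) - 1.0
        if j > best_j:
            best, best_j = cut, j
    return best

def prevalence_cut(scores, labels):
    """Cut-off where false positives and false negatives balance."""
    return min(range(0, 41),
               key=lambda c: abs(confusion(scores, labels, c)[1]
                                 - confusion(scores, labels, c)[2]))

rng = random.Random(0)
labels = [rng.random() < 0.15 for _ in range(2000)]
scores = [min(40, max(0, int(rng.gauss(20 if y else 8, 5)))) for y in labels]
s_cut, p_cut = screening_cut(scores, labels), prevalence_cut(scores, labels)
print(s_cut, p_cut)   # the prevalence cut-off sits above the screening one
```

As in the abstract, the prevalence-estimation threshold lands higher than the screening threshold, trading sensitivity for a balanced error count.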
NASA Astrophysics Data System (ADS)
Xie, Yan; Li, Mu; Zhou, Jin; Zheng, Chang-zheng
2009-07-01
Agricultural machinery total power is an important index that reflects and evaluates the level of agricultural mechanization. It is the power source of agricultural production and a main factor in enhancing comprehensive agricultural production capacity, expanding the production scale, and increasing farmers' income. Its demand is affected by natural, economic, technological, social, and other "grey" factors; grey system theory can therefore be used to analyze the development of agricultural machinery total power. A method based on a genetic algorithm for optimizing the grey modelling process is introduced in this paper. The method makes full use of the advantages of the grey prediction model and of the genetic algorithm's ability to find a global optimum, so the prediction model is more accurate. Using data from a province, a GM(1,1) model for predicting agricultural machinery total power was built on the basis of grey system theory and a genetic algorithm. The result indicates that the model can serve as an effective tool for predicting agricultural machinery total power.
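The plain GM(1,1) grey model at the core of this approach is short enough to sketch (the paper's genetic-algorithm tuning of the modelling process is omitted here): accumulate the series, fit the grey differential equation by least squares, and extrapolate. The input series is illustrative, not the province's machinery-power figures.

```python
# Hedged sketch of the basic GM(1,1) grey prediction model.
import math

def gm11(x0, steps=1):
    n = len(x0)
    x1 = [sum(x0[: i + 1]) for i in range(n)]              # accumulated (AGO) series
    z = [(x1[i] + x1[i + 1]) / 2.0 for i in range(n - 1)]  # background values
    # Least-squares estimate of a, b in x0(k) + a*z(k) = b (2x2 normal equations)
    m = n - 1
    sz = sum(z)
    szz = sum(v * v for v in z)
    sy = sum(x0[1:])
    szy = sum(zi * yi for zi, yi in zip(z, x0[1:]))
    det = szz * m - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    def x1_hat(k):                                          # k is a 0-based index
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    restored = [x1_hat(k) - x1_hat(k - 1) for k in range(1, n + steps)]
    return restored[n - 1:]                                 # forecasts beyond the data

series = [120.0, 132.0, 146.0, 160.0, 177.0]                # illustrative growth series
print(gm11(series, steps=2))                                # roughly 195 then 214
```

A GA-tuned variant would, for example, optimize the background-value weighting or initial condition instead of fixing them as above.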
Scenarios for optimizing potato productivity in a lunar CELSS
NASA Technical Reports Server (NTRS)
Wheeler, R. M.; Morrow, R. C.; Tibbitts, T. W.; Bula, R. J.
1992-01-01
The use of controlled ecological life support system (CELSS) in the development and growth of large-scale bases on the Moon will reduce the expense of supplying life support materials from Earth. Such systems would use plants to produce food and oxygen, remove carbon dioxide, and recycle water and minerals. In a lunar CELSS, several factors are likely to be limiting to plant productivity, including the availability of growing area, electrical power, and lamp/ballast weight for lighting systems. Several management scenarios are outlined in this discussion for the production of potatoes based on their response to irradiance, photoperiod, and carbon dioxide concentration. Management scenarios that use 12-hr photoperiods, high carbon dioxide concentrations, and movable lamp banks to alternately irradiate halves of the growing area appear to be the most efficient in terms of growing area, electrical power, and lamp weights. However, the optimal scenario will be dependent upon the relative 'costs' of each factor.
Seyed Moosavi, Seyed Mohsen; Moaveni, Bijan; Moshiri, Behzad; Arvan, Mohammad Reza
2018-02-27
The present study designed skewed redundant accelerometers for a Measurement While Drilling (MWD) tool and carried out auto-calibration, fault diagnosis, and fault isolation of the accelerometers in this tool. An optimal structure comprising four accelerometers was selected and designed precisely in accordance with the physical shape of the existing MWD tool. The new four-accelerometer structure was designed, implemented, and installed on the current system, replacing the conventional orthogonal structure. Auto-calibration was performed for the skewed redundant accelerometers and for all combinations of three accelerometers; consequently, the biases, scale factors, and misalignment factors of the accelerometers were successfully estimated. By injecting faults into the sensors of the new optimal skewed redundant structure, the fault was detected using the proposed FDI method and the faulty sensor was diagnosed and isolated. The results indicate that the system can continue to operate with at least three correct sensors.
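The redundancy idea can be sketched with least squares: four skewed sensors each measure the projection of the 3-axis specific force onto their axis, so one measurement is redundant and a large fitting residual flags a fault. The geometry, fault size, and threshold below are illustrative, not the MWD tool's actual design, and this sketch shows detection only (the paper additionally isolates the faulty sensor, e.g. via its three-sensor combinations).

```python
# Hedged sketch: least-squares residual test on a redundant accelerometer set.

def solve3(A, y):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

def fault_detected(H, m, threshold=0.05):
    """Fit m ~ H*f by least squares; a large residual norm signals a fault."""
    HtH = [[sum(H[k][i] * H[k][j] for k in range(4)) for j in range(3)]
           for i in range(3)]
    Htm = [sum(H[k][i] * m[k] for k in range(4)) for i in range(3)]
    f = solve3(HtH, Htm)
    resid = [m[k] - sum(H[k][j] * f[j] for j in range(3)) for k in range(4)]
    return sum(r * r for r in resid) ** 0.5 > threshold

s = 3 ** -0.5
H = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [s, s, s]]   # 3 orthogonal axes + 1 skewed
true_f = [0.0, 0.0, 9.81]                          # gravity along z
m = [sum(h[j] * true_f[j] for j in range(3)) for h in H]
m[1] += 0.5                                        # inject a bias fault on sensor 1
print(fault_detected(H, m))                        # the injected fault is detected
```

With all four sensors healthy, the residual vanishes and the same test reports no fault.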
Gaudin, Daniel; Krafcik, Brianna M; Mansour, Tarek R; Alnemari, Ahmed
2017-02-01
Despite widespread use of lumbar spinal fusion as a treatment for back pain, outcomes remain variable. Optimizing patient selection can help to reduce adverse outcomes. This literature review was conducted to better understand factors associated with optimal postoperative results after lumbar spinal fusion for chronic back pain and current tools used for evaluation. The PubMed database was searched for clinical trials related to psychosocial determinants of outcome after lumbar spinal fusion surgery; evaluation of commonly used patient subjective outcome measures; and perioperative cognitive, behavioral, and educational therapies. Reference lists of included studies were also searched by hand for additional studies meeting inclusion and exclusion criteria. Patients' perception of good health before surgery and low cardiovascular comorbidity predict improved postoperative physical functional capacity and greater patient satisfaction. Depression, tobacco use, and litigation predict poorer outcomes after lumbar fusion. Incorporation of cognitive-behavioral therapy perioperatively can address these psychosocial risk factors and improve outcomes. The 36-Item Short Form Health Survey, European Quality of Life five dimensions questionnaire, visual analog pain scale, brief pain inventory, and Oswestry Disability Index can provide specific feedback to track patient progress and are important to understand when evaluating the current literature. This review summarizes current information and explains commonly used assessment tools to guide clinicians in decision making when caring for patients with lower back pain. When determining a treatment algorithm, physicians must consider predictive psychosocial factors. Use of perioperative cognitive-behavioral therapy and patient education can improve outcomes after lumbar spinal fusion. Copyright © 2016 Elsevier Inc. All rights reserved.
Semihard processes with BLM renormalization scale setting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caporale, Francesco; Ivanov, Dmitry Yu.; Murdaca, Beatrice
We apply the BLM scale setting procedure directly to amplitudes (cross sections) of several semihard processes. It is shown that, due to the presence of β₀-terms in the NLA results for the impact factors, the obtained optimal renormalization scale is not universal, but depends both on the energy and on the process in question. We illustrate this general conclusion considering the following semihard processes: (i) inclusive production of two forward high-p_T jets separated by a large interval in rapidity (Mueller-Navelet jets); (ii) high-energy behavior of the total cross section for highly virtual photons; (iii) forward amplitude of the production of two light vector mesons in the collision of two virtual photons.
NASA Astrophysics Data System (ADS)
Bertoldi, Giacomo; Cordano, Emanuele; Brenner, Johannes; Senoner, Samuel; Della Chiesa, Stefano; Niedrist, Georg
2017-04-01
In mountain regions, the plot- and catchment-scale water and energy budgets are controlled by a complex interplay of different abiotic (i.e. topography, geology, climate) and biotic (i.e. vegetation, land management) controlling factors. When integrated, physically based eco-hydrological models are used in mountain areas, a large number of parameters, topographic conditions, and boundary conditions need to be chosen. However, data on soil and land-cover properties are relatively scarce and do not reflect the strong variability at the local scale. For this reason, tools for uncertainty quantification and optimal parameter identification are essential not only to improve model performance, but also to identify the most relevant parameters to be measured in the field and to evaluate the impact of different assumptions for topographic and boundary conditions (surface, lateral, and subsurface water and energy fluxes), which are usually unknown. In this contribution, we present the results of a sensitivity analysis exercise for a set of 20 experimental stations located in the Italian Alps, representative of different conditions in terms of topography (elevation, slope, aspect), land use (pastures, meadows, and apple orchards), soil type, and groundwater influence. Besides micrometeorological parameters, each station provides soil water content at different depths, and three stations (one for each land cover) provide eddy covariance fluxes. The aims of this work are: (I) To present an approach for improving the calibration of plot-scale soil moisture and evapotranspiration (ET). (II) To identify the most sensitive parameters and the relevant factors controlling temporal and spatial differences among sites. (III) To identify possible model structural deficiencies or uncertainties in boundary conditions.
Simulations have been performed with the GEOtop 2.0 model, which is a physically-based, fully distributed, integrated eco-hydrological model that has been specifically designed for mountain regions, since it considers the effect of topography on radiation and water fluxes and integrates a snow module. A new automatic sensitivity and optimization tool based on Particle Swarm Optimization has been developed, available as an R package at https://github.com/EURAC-Ecohydro/geotopOptim2. The model, once calibrated for soil and vegetation parameters, predicts the plot-scale temporal dynamics of SMC and ET with RMSEs of about 0.05 m3/m3 and 40 W/m2, respectively. However, the model tends to underestimate ET during summer months over apple orchards. Results show that the most sensitive parameters are soil and canopy structural properties; however, their ranking is affected by the choice of the target function and by local topographic conditions. In particular, local slope/aspect influences results in stations located over hillslopes, but with marked seasonal differences. Results for locations in the valley floor are strongly controlled by the choice of the bottom water flux boundary condition. The poorer model performance in simulating ET over apple orchards could be explained by a model structural deficiency in representing the stomatal control on vapor pressure deficit for this particular type of vegetation. The results of this sensitivity analysis could be extended to other physically distributed models, and also provide valuable insights for optimizing new experimental designs.
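The Particle Swarm Optimization step used by such calibration tools can be sketched briefly. Since running GEOtop itself is out of scope here, the objective below is a stand-in RMSE-like surface with a known optimum; in the real workflow geotopOptim2 would evaluate model runs against observed SMC/ET instead. Parameter values and bounds are illustrative.

```python
# Minimal PSO sketch: particles track personal and global bests while
# exploring a (stand-in) RMSE surface.
import random

def pso(f, dim, n_particles=20, iters=100, lb=-5.0, ub=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    x = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in x]
    pbest_f = [f(p) for p in x]
    g = pbest[min(range(n_particles), key=lambda i: pbest_f[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (g[d] - x[i][d]))
                x[i][d] = min(ub, max(lb, x[i][d] + v[i][d]))
            fi = f(x[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], fi
                if fi < f(g):
                    g = x[i][:]
    return g, f(g)

# Stand-in objective: RMSE-like bowl with "true" parameters at (1.2, -0.4, 2.0)
target = [1.2, -0.4, 2.0]
rmse = lambda p: (sum((a - b) ** 2 for a, b in zip(p, target)) / len(p)) ** 0.5
best, err = pso(rmse, dim=3)
print(err)   # near zero: the swarm recovers the target parameters
```

In the calibration setting, each particle position would encode soil/vegetation parameters and `f` would launch a GEOtop run and score it against observations.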
Scale dependence of open cc̄ and bb̄ production in the low x region
NASA Astrophysics Data System (ADS)
Oliveira, E. G. de; Martin, A. D.; Ryskin, M. G.
2017-03-01
The 'optimal' factorization scale μ₀ is calculated for open heavy quark production. We find that the optimal value is μ_F = μ₀ ≃ 0.85 √(p_T² + m_Q²), a choice which allows us to resum the double-logarithmic (α_s ln μ_F² ln(1/x))ⁿ corrections (enhanced at LHC energies by large values of ln(1/x)) and to move them into the incoming parton distributions, PDF(x, μ₀²). Besides this result for the single inclusive cross section (corresponding to an observed heavy quark of transverse momentum p_T), we also determined the scale for processes where the acoplanarity can be measured; that is, events where the azimuthal angle between the quark and the antiquark may be determined experimentally. Moreover, we discuss the important role played by the 2→2 subprocess gg→QQ̄ at NLO and higher orders. In summary, we achieve a better stability of the QCD calculations, so that the data on cc̄ and bb̄ production can be used to further constrain the gluons in the small-x, relatively low scale, domain, where the uncertainties of the global analyses are large at present.
Testing optimal foraging theory in a penguin-krill system.
Watanabe, Yuuki Y; Ito, Motohiro; Takahashi, Akinori
2014-03-22
Food is heterogeneously distributed in nature, and understanding how animals search for and exploit food patches is a fundamental challenge in ecology. The classic marginal value theorem (MVT) formulates optimal patch residence time in response to patch quality. The MVT was generally proved in controlled animal experiments; however, owing to the technical difficulties in recording foraging behaviour in the wild, it has been inadequately examined in natural predator-prey systems, especially those in the three-dimensional marine environment. Using animal-borne accelerometers and video cameras, we collected a rare dataset in which the behaviour of a marine predator (penguin) was recorded simultaneously with the capture timings of mobile, patchily distributed prey (krill). We provide qualitative support for the MVT by showing that (i) krill capture rate diminished with time in each dive, as assumed in the MVT, and (ii) dive duration (or patch residence time, controlled for dive depth) increased with short-term, dive-scale krill capture rate, but decreased with long-term, bout-scale krill capture rate, as predicted from the MVT. Our results demonstrate that a single environmental factor (i.e. patch quality) can have opposite effects on animal behaviour depending on the time scale, emphasizing the importance of multi-scale approaches in understanding complex foraging strategies.
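The MVT prediction tested above can be illustrated numerically: with diminishing patch gains g(t) = G(1 - e^(-t/τ)), the optimal residence time t* maximizes the long-term rate g(t)/(T + t) and grows with the travel time T between patches. All parameter values below are illustrative.

```python
# Sketch of the marginal value theorem: optimal patch residence time vs travel time.
import math

def optimal_residence(G=1.0, tau=1.0, travel=2.0, dt=1e-4):
    gain = lambda t: G * (1.0 - math.exp(-t / tau))   # diminishing returns in a patch
    rate = lambda t: gain(t) / (travel + t)           # long-term intake rate
    best_t, best_r = dt, rate(dt)
    t = dt
    while t < 20.0:                                   # scan for the maximizing time
        r = rate(t)
        if r > best_r:
            best_t, best_r = t, r
        t += dt
    return best_t

t_short = optimal_residence(travel=0.5)
t_long = optimal_residence(travel=5.0)
print(t_short, t_long)   # residence time grows with travel time, as the MVT predicts
```

At the optimum, the instantaneous gain rate g'(t*) equals the long-term average rate, which is the marginal-value condition the penguin data were tested against.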
Sumiyoshi, Chika; Fujino, Haruo; Sumiyoshi, Tomiki; Yasuda, Yuka; Yamamori, Hidenaga; Ohi, Kazutaka; Fujimoto, Michiko; Takeda, Masatoshi; Hashimoto, Ryota
2016-11-30
The Wechsler Adult Intelligence Scale (WAIS) has been widely used to assess intellectual functioning not only in healthy adults but also in people with psychiatric disorders. The purpose of this study was to develop an optimal WAIS-3 short form (SF) to evaluate intellectual status in patients with schizophrenia. One hundred and fifty patients with schizophrenia and 221 healthy controls entered the study. To select subtests for the SFs, the following criteria were considered: 1) predictability of the full IQ (FIQ), 2) representativeness of the IQ structure, 3) consistency of subtests across versions, 4) sensitivity to functional outcome measures, and 5) conciseness of administration time. First, exploratory factor analysis (EFA) and multiple regression analysis were conducted to select subtests satisfying the first and second criteria. Then, candidate SFs were nominated based on the third criterion and on coverage of the verbal IQ and performance IQ. Finally, the optimality of the candidate SFs was evaluated in terms of the fourth and fifth criteria. The results suggest that the dyad of Similarities and Symbol Search was optimal, satisfying the above criteria. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
USDA-ARS's Scientific Manuscript database
We extend the analysis of optimal scale in pollution permit markets by allowing for both market power and private information. The effect of these considerations on optimal scale is determined by analyzing pollution of nitrogen from Waste Water Treatment Plants (WWTP) into North Carolina’s Neuse Riv...
Genetic algorithms - What fitness scaling is optimal?
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik; Quintana, Chris; Fuentes, Olac
1993-01-01
The problem of choosing the best scaling function is formulated as a mathematical optimization problem and solved under different optimality criteria. A list of functions that are optimal under different criteria is presented; it includes both the functions empirically found to work best and new functions that may be worth trying.
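One commonly analysed candidate in such comparisons is classic linear fitness scaling, f' = a·f + b, chosen so the mean fitness is preserved and the best individual maps to C times the mean, keeping selection pressure moderate. The sketch below follows that standard scheme (with a fallback that clamps the worst individual to zero when the stretch would produce negative fitness); the population values are illustrative.

```python
# Sketch of linear fitness scaling: preserve the mean, stretch max to C * mean.
def linear_scale(fitnesses, C=2.0):
    avg = sum(fitnesses) / len(fitnesses)
    fmax, fmin = max(fitnesses), min(fitnesses)
    if fmax == avg:                      # flat population: nothing to scale
        return list(fitnesses)
    a = (C - 1.0) * avg / (fmax - avg)
    b = avg * (1.0 - a)
    scaled = [a * f + b for f in fitnesses]
    if min(scaled) < 0.0:                # fallback: map the worst to 0 instead
        a = avg / (avg - fmin)
        b = -a * fmin
        scaled = [a * f + b for f in fitnesses]
    return scaled

fits = [1.0, 2.0, 3.0, 10.0]
out = linear_scale(fits)
print(out, sum(out) / len(out))   # mean preserved at 4.0, max scaled to 8.0
```

Both branches preserve the population mean, so the expected number of offspring per average individual stays at one while the spread of selection probabilities is controlled.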
Goertz, Yvonne H H; Houkes, Inge; Nijhuis, Frans J N; Bosma, Hans
2017-01-01
Worldwide, the employment rate of people with visual impairments (PVIs) is lower than that of the general working-age population. To improve the employment rate of this group, there is a need for knowledge about differences in modifiable factors between working and non-working PVIs. To identify modifiable factors associated with participation on the competitive labour market of PVIs. Based on the findings, we aim to develop an individual assessment instrument for determining the odds of labour market success of PVIs. Data were collected among 299 PVIs by means of a cross-sectional telephone survey based on existing (validated) and self-developed scales and items. Logistic regression analysis was used to find the strongest predictors of the dichotomous outcome of 'having paid work on the competitive labour market' (yes/no). We found three personal non-modifiable factors (level of education, comorbidity, level of visual impairment) and three modifiable factors (mobility, acceptance and optimism) to be significantly (p < 0.05) associated with having paid work. The factors of optimism, acceptance and mobility should be included in an individual assessment instrument which can provide PVIs and their job coaches with good starting points for improving the labour market situation of the PVIs.
Capacity-optimized mp2 audio watermarking
NASA Astrophysics Data System (ADS)
Steinebach, Martin; Dittmann, Jana
2003-06-01
Today a number of audio watermarking algorithms have been proposed, some of a quality making them suitable for commercial applications. The focus of most of these algorithms is copyright protection; therefore, transparency and robustness are the most discussed and optimised parameters. But other applications for audio watermarking can also be identified, stressing other parameters like complexity or payload. In this paper, we introduce a new mp2 audio watermarking algorithm optimised for high payload. Our algorithm uses the scale factors of an mp2 file for watermark embedding. They are grouped and masked based on a pseudo-random pattern generated from a secret key. In each group, we embed one bit: depending on the bit to embed, we increment scale factors by 1 where necessary until the group contains either more even or more odd scale factors. A group with a majority of odd scale factors encodes a 1, a group with a majority of even scale factors a 0. The same rule is later applied to detect the watermark. The group size can be increased or decreased for a transparency/payload trade-off. We embed 160 bits or more per second in an mp2 file without reducing the perceived quality. As an application example, we introduce a prototypic Karaoke system displaying song lyrics embedded as a watermark.
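The parity-based embedding rule can be sketched directly. This is a hedged simplification: the key-driven pseudo-random grouping is replaced by fixed consecutive groups, and real mp2 scale factor indices are bounded (so a production embedder must handle saturation rather than increment freely). The scale factor values are invented for illustration.

```python
# Sketch of majority-parity embedding in groups of scale factors.
def embed_bit(group, bit):
    """Increment wrong-parity values by 1 until the desired parity is the majority."""
    want_odd = bool(bit)
    g = list(group)
    i = 0
    while sum(1 for v in g if v % 2 == (1 if want_odd else 0)) <= len(g) // 2:
        while g[i] % 2 == (1 if want_odd else 0):   # skip values already correct
            i += 1
        g[i] += 1                                   # +1 flips this value's parity
    return g

def extract_bit(group):
    odd = sum(1 for v in group if v % 2 == 1)
    return 1 if odd > len(group) - odd else 0

groups = [[10, 13, 22, 7, 8], [3, 5, 9, 12, 4]]     # illustrative scale factors
bits = [1, 0]
marked = [embed_bit(g, b) for g, b in zip(groups, bits)]
print([extract_bit(g) for g in marked])             # recovers [1, 0]
```

Since each embedded group changes scale factors by at most 1, the distortion stays near the quantization step, which is why the scheme can carry a high payload transparently.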
Design, Optimization and Application of Small Molecule Biosensor in Metabolic Engineering.
Liu, Yang; Liu, Ye; Wang, Meng
2017-01-01
The development of synthetic biology and metabolic engineering has painted a great future for the bio-based economy, including fuels, chemicals, and drugs produced from renewable feedstocks. With the rapid advance of genome-scale modeling, pathway assembling and genome engineering/editing, our ability to design and generate microbial cell factories with various phenotype becomes almost limitless. However, our lack of ability to measure and exert precise control over metabolite concentration related phenotypes becomes a bottleneck in metabolic engineering. Genetically encoded small molecule biosensors, which provide the means to couple metabolite concentration to measurable or actionable outputs, are highly promising solutions to the bottleneck. Here we review recent advances in the design, optimization and application of small molecule biosensor in metabolic engineering, with particular focus on optimization strategies for transcription factor (TF) based biosensors.
Conservation law for self-paced movements.
Huh, Dongsung; Sejnowski, Terrence J
2016-08-02
Optimal control models of biological movements introduce external task factors to specify the pace of movements. Here, we present the dual to the principle of optimality based on a conserved quantity, called "drive," that represents the influence of internal motivation level on movement pace. Optimal control and drive conservation provide equivalent descriptions for the regularities observed within individual movements. For regularities across movements, drive conservation predicts a previously unidentified scaling law between the overall size and speed of various self-paced hand movements in the absence of any external tasks, which we confirmed with psychophysical experiments. Drive can be interpreted as a high-level control variable that sets the overall pace of movements and may be represented in the brain as the tonic levels of neuromodulators that control the level of internal motivation, thus providing insights into how internal states affect biological motor control.
Optimizing Tsunami Forecast Model Accuracy
NASA Astrophysics Data System (ADS)
Whitmore, P.; Nyland, D. L.; Huang, P. Y.
2015-12-01
Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy using two different tsunami forecast models is compared for seven events since 2006, based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy determined during an event with modified applications of the models after the fact provide improved methods for real-time forecasting of future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea level data into the forecast model. Results show that assimilating sea level data into the models increases accuracy by approximately 15% for the events examined.
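The simplest form of the model scaling mentioned above is a single amplitude factor fitted to sea-level observations by least squares. This toy sketch (hypothetical sea-level values; real assimilation schemes also adjust the source definition) shows the closed-form fit:

```python
def best_scale_factor(model, obs):
    """Least-squares amplitude scaling alpha minimizing sum((alpha*m - o)^2).
    Closed form: alpha = sum(m*o) / sum(m*m). A toy stand-in for assimilating
    sea level data into a tsunami forecast model."""
    num = sum(m * o for m, o in zip(model, obs))
    den = sum(m * m for m in model)
    return num / den

model = [0.0, 0.10, 0.25, 0.15, -0.05]   # modelled sea level anomaly (m)
obs   = [0.0, 0.12, 0.30, 0.18, -0.06]   # tide-gauge observations (m)
alpha = best_scale_factor(model, obs)    # > 1: model underpredicts amplitudes
```

Applying `alpha` to the forward-modelled waveform is one way an after-the-fact "forecast" can be optimized against observed data.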
Monte Carlo calculations of electron beam quality conversion factors for several ion chamber types.
Muir, B R; Rogers, D W O
2014-11-01
To provide a comprehensive investigation of electron beam reference dosimetry using Monte Carlo simulations of the response of 10 plane-parallel and 18 cylindrical ion chamber types. Specific emphasis is placed on the determination of the optimal shift of the chambers' effective point of measurement (EPOM) and beam quality conversion factors. The EGSnrc system is used for calculations of the absorbed dose to gas in ion chamber models and the absorbed dose to water as a function of depth in a water phantom on which cobalt-60 and several electron beam source models are incident. The optimal EPOM shifts of the ion chambers are determined by comparing calculations of R50 converted from I50 (calculated using ion chamber simulations in phantom) to R50 calculated using simulations of the absorbed dose to water vs depth in water. Beam quality conversion factors are determined as the calculated ratio of the absorbed dose to water to the absorbed dose to air in the ion chamber at the reference depth in a cobalt-60 beam to that in electron beams. For most plane-parallel chambers, the optimal EPOM shift is inside the active cavity but different from the shift determined with water-equivalent scaling of the front window of the chamber. These optimal shifts for plane-parallel chambers also reduce the scatter of beam quality conversion factors, kQ, as a function of R50. The optimal shift of cylindrical chambers is found to be less than the 0.5 r_cav recommended by current dosimetry protocols. In most cases, the values of the optimal shift are close to 0.3 r_cav. Values of kecal are calculated and compared to those from the TG-51 protocol, and differences are explained using accurate individual correction factors for a subset of the ion chambers investigated. High-precision fits to beam quality conversion factors normalized to unity in a beam with R50 = 7.5 cm (kQ′) are provided.
These factors avoid the use of gradient correction factors as used in the TG-51 protocol although a chamber dependent optimal shift in the EPOM is required when using plane-parallel chambers while no shift is needed with cylindrical chambers. The sensitivity of these results to parameters used to model the ion chambers is discussed and the uncertainty related to the practical use of these results is evaluated. These results will prove useful as electron beam reference dosimetry protocols are being updated. The analysis of this work indicates that cylindrical ion chambers may be appropriate for use in low-energy electron beams but measurements are required to characterize their use in these beams.
Wu, Jun-Zheng; Liu, Qin; Geng, Xiao-Shan; Li, Kai-Mian; Luo, Li-Juan; Liu, Jin-Ping
2017-03-14
Cassava (Manihot esculenta Crantz) is a major crop extensively cultivated in the tropics, both as an important source of calories and as a promising source for biofuel production. Although stable gene expression has been used for transgenic breeding and gene function studies, a quick, easy and large-scale transformation platform has been urgently needed for gene functional characterization, especially since the full cassava genome was sequenced. Fully expanded leaves from in vitro plantlets of Manihot esculenta were used to optimize the concentrations of cellulase R-10 and macerozyme R-10 for obtaining protoplasts with the highest yield and viability. The optimum conditions (PEG4000 concentration and transfection time) were then determined for cassava protoplast transient gene expression. In addition, the reliability of the established protocol was confirmed by subcellular protein localization. In this work we optimized the main influencing factors and developed an efficient mesophyll protoplast isolation and PEG-mediated transient gene expression protocol for cassava. The most suitable enzyme digestion system was a combination of 1.6% cellulase R-10 and 0.8% macerozyme R-10 for 16 h of digestion in the dark at 25 °C, resulting in a high yield (4.4 × 10^7 protoplasts/g FW) and viability (92.6%) of mesophyll protoplasts. The maximum transfection efficiency (70.8%) was obtained by incubating the protoplast/vector DNA mixture with 25% PEG4000 for 10 min. We validated the applicability of the system for studying the subcellular localization of MeSTP7 (an H+/monosaccharide cotransporter) with our transient expression protocol and a heterologous Arabidopsis transient gene expression system. This efficient mesophyll protoplast isolation and transient gene expression system will facilitate large-scale characterization of genes and pathways in cassava.
Oishi, Sana; Kimura, Shin-Ichiro; Noguchi, Shuji; Kondo, Mio; Kondo, Yosuke; Shimokawa, Yoshiyuki; Iwao, Yasunori; Itai, Shigeru
2018-01-15
A new scale-down methodology from commercial rotary die scale to laboratory scale was developed to optimize a plant-derived soft gel capsule formulation and eventually manufacture superior soft gel capsules on a commercial scale, in order to reduce the time and cost for formulation development. Animal-derived and plant-derived soft gel film sheets were prepared using an applicator on a laboratory scale and their physicochemical properties, such as tensile strength, Young's modulus, and adhesive strength, were evaluated. The tensile strength of the animal-derived and plant-derived soft gel film sheets was 11.7 MPa and 4.41 MPa, respectively. The Young's modulus of the animal-derived and plant-derived soft gel film sheets was 169 MPa and 17.8 MPa, respectively, and both sheets showed a similar adhesion strength of approximately 4.5-10 MPa. Using a D-optimal mixture design, plant-derived soft gel film sheets were prepared and optimized by varying their composition, including variations in the mass of κ-carrageenan, ι-carrageenan, oxidized starch and heat-treated starch. The physicochemical properties of the sheets were evaluated to determine the optimal formulation. Finally, plant-derived soft gel capsules were manufactured using the rotary die method and the prepared soft gel capsules showed equivalent or superior physical properties compared with pre-existing soft gel capsules. Therefore, we successfully developed a new scale-down methodology to optimize the formulation of plant-derived soft gel capsules on a commercial scale. Copyright © 2017 Elsevier B.V. All rights reserved.
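A D-optimal design, as used above for the gel formulation, selects experimental runs that maximize det(XᵀX) for the assumed model, minimizing the volume of the confidence ellipsoid of the fitted coefficients. A much-simplified sketch (a generic two-factor linear model rather than the study's four-component mixture model; all numbers illustrative):

```python
def xtx_det(X):
    """det(X'X) for a design matrix X (list of rows). A D-optimal design
    maximizes this determinant for the assumed regression model."""
    p = len(X[0])
    M = [[sum(row[i] * row[j] for row in X) for j in range(p)] for i in range(p)]
    # 3x3 determinant by cofactor expansion (sufficient for this sketch)
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Two candidate 4-run designs for an intercept + two-factor linear model.
spread  = [[1, -1, -1], [1, -1, 1], [1, 1, -1], [1, 1, 1]]   # corner points
bunched = [[1, 0, 0], [1, 0.1, 0], [1, 0, 0.1], [1, 0.1, 0.1]]
# The spread-out design is far more informative: larger det(X'X).
```

A D-optimal algorithm searches candidate points (here, mixture compositions of the four gel components) for the subset with the largest such determinant, which is why relatively few sheet formulations need to be prepared and tested.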
Chen, Wen-Hua; Tsai, Chia-Chin; Lin, Chih-Feng; Tsai, Pei-Yuan; Hwang, Wen-Song
2013-01-01
A continuous acid-catalyzed steam explosion pretreatment process and system to produce cellulosic ethanol was developed at the pilot scale. The effects of the following parameters on the pretreatment efficiency of rice straw feedstocks were investigated: the acid concentration, the reaction temperature, the residence time, the feedstock size, the explosion pressure and the screw speed. The optimal presteaming horizontal reactor conditions for the pretreatment process are as follows: 1.7 rpm and 100-110 °C with an acid concentration of 1.3% (w/w). An acid-catalyzed steam explosion is then performed in the vertical reactor at 185 °C for 2 min. A total saccharification yield of approximately 73% was obtained after the rice straw was pretreated under the optimal conditions and subsequently enzymatically hydrolyzed, at a combined severity factor of 0.4-0.7. Moreover, good long-term stability and durability of the pretreatment system under continuous operation were observed. Copyright © 2012 Elsevier Ltd. All rights reserved.
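The combined severity factor cited above is conventionally defined (following Chum-type severity) as CSF = log10(t · exp((T − 100)/14.75)) − pH, combining time, temperature and acidity into one number. A sketch, assuming that definition; the abstract does not report the slurry pH, so the value used below is illustrative only:

```python
import math

def combined_severity_factor(t_min, temp_c, pH, t_ref=100.0, omega=14.75):
    """Combined severity factor for acid-catalyzed pretreatment:
    CSF = log10( t * exp((T - T_ref)/omega) ) - pH,
    with t in minutes and T in degrees Celsius."""
    log_r0 = math.log10(t_min * math.exp((temp_c - t_ref) / omega))
    return log_r0 - pH

# Reaction conditions from the abstract (185 degC, 2 min); pH is assumed.
csf = combined_severity_factor(t_min=2.0, temp_c=185.0, pH=1.3)
```

Varying the assumed pH over the plausible range for a 1.3% (w/w) acid catalyst moves the CSF by a full unit or more, which is consistent with the 0.4-0.7 window reported being sensitive to the exact acid loading.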
NASA Astrophysics Data System (ADS)
Lassonde, Sylvain; Boucher, Olivier; Breon, François-Marie; Tobin, Isabelle; Vautard, Robert
2016-04-01
The share of renewable energies in the mix of electricity production is increasing worldwide. This trend is driven by environmental and economic policies aiming at a reduction of greenhouse gas emissions and an improvement of energy security. It is expected to continue in the forthcoming years and decades. Electricity production from renewables is related to weather and climate factors such as the diurnal and seasonal cycles of sunlight and wind, but is also linked to variability on all time scales. The intermittency in the renewable electricity production (solar, wind power) could eventually hinder their future deployment. Intermittency is indeed a challenge as demand and supply of electricity need to be balanced at any time. This challenge can be addressed by the deployment of an overcapacity in power generation (from renewable and/or thermal sources), a large-scale energy storage system and/or improved management of the demand. The main goal of this study is to optimize a hypothetical renewable energy system at the French and European scales in order to investigate if spatial diversity of the production (here electricity from wind energy) could be a response to the intermittency. We use ECMWF (European Centre for Medium-Range Weather Forecasts) ERA-interim meteorological reanalysis and meteorological fields from the Weather Research and Forecasts (WRF) model to estimate the potential for wind power generation. Electricity demand and production are provided by the French electricity network (RTE) at the scale of administrative regions for years 2013 and 2014. Firstly we will show how the simulated production of wind power compares against the measured production at the national and regional scale. Several modelling and bias correction methods of wind power production will be discussed. Secondly, we will present results from an optimization procedure that aims to minimize some measure of the intermittency of wind energy. 
For instance, we estimate the optimal distribution between French regions (with or without cross-border inputs) that minimizes the impact of low-production periods, computed in a running-mean sense, and its sensitivity to the period considered. We will also assess which meteorological situations are the most problematic over the 35-year ERA-interim climatology (1980-2015).
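The core idea of exploiting spatial diversity can be illustrated with a toy two-region portfolio problem: choose regional capacity weights so that the aggregated production is as smooth as possible. This minimal sketch minimizes output variance rather than the running-mean low-production measure used in the study, and the capacity-factor series are made up:

```python
def portfolio_variance(w, series_a, series_b):
    """Variance of the aggregated production w*a + (1-w)*b."""
    agg = [w * a + (1 - w) * b for a, b in zip(series_a, series_b)]
    mean = sum(agg) / len(agg)
    return sum((x - mean) ** 2 for x in agg) / len(agg)

# Hypothetical hourly capacity factors for two weakly anticorrelated regions.
north = [0.6, 0.2, 0.7, 0.1, 0.5, 0.3]
south = [0.3, 0.6, 0.2, 0.5, 0.4, 0.6]

# Grid search for the capacity mix giving the smoothest combined supply.
best_w = min((w / 100 for w in range(101)),
             key=lambda w: portfolio_variance(w, north, south))
```

Because the two series partly offset each other, the optimum is an interior mix: neither region alone is as smooth as the diversified portfolio, which is the mechanism by which spatial diversity mitigates intermittency.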
The effect of exercise and childbirth classes on fear of childbirth and locus of labor pain control.
Guszkowska, Monika
2014-01-01
This study sought to track changes in intensity of fear of childbirth and locus of labor pain control in women attending an exercise program for pregnant women or traditional childbirth classes and to identify the predictors of these changes. The study was longitudinal/non-experimental in nature and run on 109 healthy primigravidae aged 22 to 37, including 62 women participating in an exercise program for pregnant women and 47 women attending traditional childbirth classes. The following assessment tools were used: two scales developed by the present authors - the Fear of Childbirth Scale and the Control of Birth Pain Scale; three standardized psychological inventories for the big five personality traits (NEO Five-Factor Inventory), trait anxiety (State-Trait Anxiety Inventory) and dispositional optimism (Life Orientation Test-Revised); and a questionnaire concerning socioeconomic status, health status, activities during pregnancy, relations with partners and expectations about childbirth. Fear of childbirth significantly decreased in women participating in the exercise program for pregnant women but not in women attending traditional childbirth classes. Several significant predictors of post-intervention fear of childbirth emerged: dispositional optimism and self-rated health (negative) and strength of the belief that childbirth pain depends on chance (positive).
Hospital survey on patient safety culture: psychometric analysis on a Scottish sample.
Sarac, Cakil; Flin, Rhona; Mearns, Kathryn; Jackson, Jeanette
2011-10-01
To investigate the psychometric properties of the Hospital Survey on Patient Safety Culture on a Scottish NHS data set. The data were collected from 1969 clinical staff (estimated 22% response rate) from one acute hospital from each of seven Scottish Health boards. Using a split-half validation technique, the data were randomly split; an exploratory factor analysis was conducted on the calibration data set, and confirmatory factor analyses were conducted on the validation data set to investigate and check the original US model fit in a Scottish sample. Following the split-half validation technique, exploratory factor analysis results showed a 10-factor optimal measurement model. The confirmatory factor analyses were then performed to compare the model fit of two competing models (10-factor alternative model vs 12-factor original model). A Satorra-Bentler scaled χ² difference test demonstrated that the original 12-factor model performed significantly better in a Scottish sample. Furthermore, reliability analyses of each component yielded satisfactory results. The mean scores on the climate dimensions in the Scottish sample were comparable with those found in other European countries. This study provided evidence that the original 12-factor structure of the Hospital Survey on Patient Safety Culture scale has been replicated in this Scottish sample. Therefore, no modifications are required to the original 12-factor model, which is suggested for use, since it would allow researchers the possibility of cross-national comparisons.
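The S-B (Satorra-Bentler) scaled difference test mentioned above cannot simply subtract two robust chi-squares; the 2001 two-step formula rescales the difference using each model's scaling correction factor. A sketch of that arithmetic, with hypothetical fit statistics (not the study's values), assuming the models are properly nested:

```python
def sb_scaled_diff(t_nested, df_nested, c_nested, t_full, df_full, c_full):
    """Satorra-Bentler (2001) scaled chi-square difference test for nested
    models fitted with a scaled (robust) estimator. t_* are the unscaled ML
    chi-squares, c_* the scaling correction factors. Returns the scaled
    difference statistic and its degrees of freedom."""
    df_diff = df_nested - df_full
    c_diff = (df_nested * c_nested - df_full * c_full) / df_diff
    return (t_nested - t_full) / c_diff, df_diff

# Hypothetical statistics: restrictive model (t=350, df=60, c=1.20)
# vs less restrictive comparison model (t=300, df=54, c=1.15).
stat, df_d = sb_scaled_diff(350.0, 60, 1.20, 300.0, 54, 1.15)
```

The resulting statistic is referred to a χ² distribution with `df_d` degrees of freedom; a significant value indicates the less restrictive model fits significantly better.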
Mi, Misa; Moseley, James L; Green, Michael L
2012-02-01
Many residency programs offer training in evidence-based medicine (EBM). However, these curricula often fail to achieve optimal learning outcomes, perhaps because they neglect various contextual factors in the learning environment. We developed and validated an instrument to characterize the environment for EBM learning and practice in residency programs. An EBM Environment Scale was developed following scale development principles. A survey was administered to residents across six programs in primary care specialties at four medical centers. Internal consistency reliability was analyzed with Cronbach's coefficient alpha. Validity was assessed by comparing predetermined subscales with the survey's internal structure as assessed via factor analysis. Scores were also compared for subgroups based on residency program affiliation and residency characteristics. Out of 262 eligible residents, 124 completed the survey (response rate 47%). The overall mean score was 3.89 (standard deviation=0.56). The initial reliability analysis of the 48-item scale had a high reliability coefficient (Cronbach α=.94). Factor analysis and further item analysis resulted in a shorter 36-item scale with a satisfactory reliability coefficient (Cronbach α=.86). Scores were higher for residents with prior EBM training in medical school (4.14 versus 3.62) and in residency (4.25 versus 3.69). If further testing confirms its properties, the EBM Environment Scale may be used to understand the influence of the learning environment on the effectiveness of EBM training. Additionally, it may detect changes in the EBM learning environment in response to programmatic or institutional interventions.
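The internal consistency figures reported above (Cronbach's α = .94 and .86) follow directly from the item scores. A minimal sketch of the computation, using made-up Likert responses rather than the survey's data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item):
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance (n-1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Hypothetical responses of 5 respondents to 3 Likert items.
items = [[4, 5, 3, 4, 2],
         [4, 4, 3, 5, 2],
         [5, 4, 2, 4, 3]]
alpha = cronbach_alpha(items)
```

Dropping weakly correlated items (as was done in shortening the 48-item scale to 36 items) lowers the summed item variance relative to the total-score variance and can therefore change α in either direction, which is why the shorter scale's α of .86 is still considered satisfactory.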
Lui, P Priscilla; Fernando, Gaithri A
2018-02-01
Numerous scales currently exist that assess well-being, but research on measures of well-being is still advancing. Conceptualization and measurement of subjective well-being have emphasized intrapsychic over psychosocial domains of optimal functioning, and disparate research on hedonic, eudaimonic, and psychological well-being lacks a unifying theoretical model. Lack of systematic investigations on the impact of culture on subjective well-being has also limited advancement of this field. The goals of this investigation were to (1) develop and validate a self-report measure, the Well-Being Scale (WeBS), that simultaneously assesses overall well-being and physical, financial, social, hedonic, and eudaimonic domains of this construct; (2) evaluate factor structures that underlie subjective well-being; and (3) examine the measure's psychometric properties. Three empirical studies were conducted to develop and validate the 29-item scale. The WeBS demonstrated an adequate five-factor structure in an exploratory structural equation model in Study 1. Confirmatory factor analyses showed that a bifactor structure best fit the WeBS data in Study 2 and Study 3. Overall WeBS scores and five domain-specific subscale scores demonstrated adequate to excellent internal consistency reliability and construct validity. Mean differences in overall well-being and its five subdomains are presented for different ethnic groups. The WeBS is a reliable and valid measure of multiple aspects of well-being that are considered important to different ethnocultural groups.
Constrained growth flips the direction of optimal phenological responses among annual plants.
Lindh, Magnus; Johansson, Jacob; Bolmgren, Kjell; Lundström, Niklas L P; Brännström, Åke; Jonzén, Niclas
2016-03-01
Phenological changes among plants due to climate change are well documented, but often hard to interpret. In order to assess the adaptive value of observed changes, we study how annual plants with and without growth constraints should optimize their flowering time when productivity and season length change. We consider growth constraints that depend on the plant's vegetative mass: self-shading, costs for nonphotosynthetic structural tissue and sibling competition. We derive the optimal flowering time from a dynamic energy allocation model using optimal control theory. We prove that an immediate switch (bang-bang control) from vegetative to reproductive growth is optimal with constrained growth and constant mortality. Increasing mean productivity, while keeping season length constant and growth unconstrained, delayed the optimal flowering time. When growth was constrained and productivity was relatively high, the optimal flowering time advanced instead. When the growth season was extended equally at both ends, the optimal flowering time was advanced under constrained growth and delayed under unconstrained growth. Our results suggest that growth constraints are key factors to consider when interpreting phenological flowering responses. They can help explain phenological patterns along productivity gradients, and link empirical observations made on calendar scales with life-history theory. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
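The bang-bang structure of the solution can be seen in a minimal unconstrained version of the energy allocation model: grow vegetative mass exponentially until a switching time, then divert all production to seeds. Final reproductive mass is then r·m0·exp(r·t_s)·(T − t_s), maximized at t_s = T − 1/r. This sketch uses hypothetical parameters and omits the mass-dependent constraints that the paper shows can flip the direction of the response:

```python
import math

def final_seed_mass(t_switch, season=1.0, r=5.0, m0=0.01):
    """Bang-bang allocation: vegetative growth at rate r until t_switch,
    then all production goes to reproduction. Final reproductive mass is
    r * m0 * exp(r * t_switch) * (season - t_switch)."""
    return r * m0 * math.exp(r * t_switch) * (season - t_switch)

# Grid search for the optimal flowering (switching) time.
best = max((t / 1000 for t in range(1001)),
           key=lambda t: final_seed_mass(t))
# Analytical optimum for these parameters: season - 1/r = 1 - 0.2 = 0.8.
```

Raising r (higher productivity) pushes the optimal switch later, reproducing the unconstrained delay result; the paper's contribution is showing that mass-dependent constraints reverse this.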
Piccolomini, Angelica A; Fiabon, Alex; Borrotti, Matteo; De Lucrezia, Davide
2017-01-01
We optimized the heterologous expression of trans-isoprenyl diphosphate synthase (IDS), the key enzyme involved in the biosynthesis of trans-polyisoprene. trans-Polyisoprene is a particularly valuable compound due to its superior stiffness, excellent insulation, and low thermal expansion coefficient. Currently, trans-polyisoprene is mainly produced through chemical synthesis, and no biotechnological processes have been established so far for its large-scale production. In this work, we employed D-optimal design and response surface methodology to optimize the expression of the thermophilic enzyme IDS from Thermococcus kodakaraensis. The design of experiments took six factors into account (preinduction cell density, inducer concentration, postinduction temperature, salt concentration, alternative carbon source, and protein inhibitor) and seven culture media (LB, NZCYM, TB, M9, Ec, Ac, and EDAVIS) at five different pH points. By screening only 109 experimental points, we improved IDS production by 48% in closed-batch fermentation. © 2015 International Union of Biochemistry and Molecular Biology, Inc.
[Application of microwave technology in extraction process of Guizhi Fuling capsule].
Wang, Zheng-kuan; Zhou, Mao; Liu, Yuan; Bi, Yu-an; Wang, Zhen-zhong; Xiao, Wei
2015-06-01
In this paper, the conditions of a microwave extraction process for Guizhi Fuling capsule were optimized at pilot scale. First, single-factor experiments were used to determine the overall influence trend and range of each factor. Second, an L9(3^4) orthogonal design was applied, and the contents of gallic acid, paeoniflorin, benzoic acid, cinnamic acid, benzoyl paeoniflorin and amygdalin in the extraction liquid were determined; the extraction rate and a comprehensive score of the extraction effect served as the evaluation criteria. The optimum microwave extraction process for Guizhi Fuling capsule was as follows: a liquid-to-solid ratio of 6:1 with drinking water, a microwave power of 6 kW, and an extraction time of 20 min, repeated 3 times. The process was verified in three scale-up batches with stable results, and compared with conventional water extraction it offers energy-saving, time-saving and high-efficiency advantages. These results show that the optimized extraction technology is efficient, stable and feasible.
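An L9(3^4) orthogonal design tests four three-level factors in only nine runs; the factors are then ranked by range analysis of the level means. A sketch using the standard L9 array with made-up comprehensive scores (the study's actual responses are not reported in the abstract):

```python
# Standard L9(3^4) orthogonal array: 9 runs, 4 factors at levels 1..3.
# Every pair of columns contains each of the 9 level combinations exactly once.
L9 = [
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
]

def range_analysis(scores, factor):
    """Mean response at each level of one factor, and the range R
    (max - min of the level means) used to rank factor influence."""
    means = []
    for level in (1, 2, 3):
        vals = [s for row, s in zip(L9, scores) if row[factor] == level]
        means.append(sum(vals) / len(vals))
    return means, max(means) - min(means)

# Hypothetical comprehensive scores (weighted extraction yields) per run.
scores = [62, 70, 75, 68, 77, 71, 74, 73, 69]
means, r = range_analysis(scores, factor=0)   # e.g. factor 0: liquid-solid ratio
```

The factor with the largest range R has the strongest influence, and the best level of each factor (highest level mean) is combined into the optimum process, exactly the logic behind picking the 6:1 ratio, 6 kW and 20 min × 3 settings.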
SeeDB: Efficient Data-Driven Visualization Recommendations to Support Visual Analytics
Vartak, Manasi; Rahman, Sajjadur; Madden, Samuel; Parameswaran, Aditya; Polyzotis, Neoklis
2015-01-01
Data analysts often build visualizations as the first step in their analytical workflow. However, when working with high-dimensional datasets, identifying visualizations that show relevant or desired trends in data can be laborious. We propose SeeDB, a visualization recommendation engine to facilitate fast visual analysis: given a subset of data to be studied, SeeDB intelligently explores the space of visualizations, evaluates promising visualizations for trends, and recommends those it deems most “useful” or “interesting”. The two major obstacles in recommending interesting visualizations are (a) scale: evaluating a large number of candidate visualizations while responding within interactive time scales, and (b) utility: identifying an appropriate metric for assessing interestingness of visualizations. For the former, SeeDB introduces pruning optimizations to quickly identify high-utility visualizations and sharing optimizations to maximize sharing of computation across visualizations. For the latter, as a first step, we adopt a deviation-based metric for visualization utility, while indicating how we may be able to generalize it to other factors influencing utility. We implement SeeDB as a middleware layer that can run on top of any DBMS. Our experiments show that our framework can identify interesting visualizations with high accuracy. Our optimizations lead to multiple orders of magnitude speedup on relational row and column stores and provide recommendations at interactive time scales. Finally, we demonstrate via a user study the effectiveness of our deviation-based utility metric and the value of recommendations in supporting visual analytics. PMID:26779379
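The deviation-based utility metric scores a visualization by how far the aggregate distribution on the target subset departs from the same aggregate on the reference (full) data. A minimal sketch using Euclidean distance between normalized distributions; SeeDB's framework admits several distance functions, and the data below are invented:

```python
def deviation_utility(target, reference):
    """SeeDB-style deviation-based utility: distance between the normalized
    aggregate distribution of a visualization on the target subset and on
    the reference data. Here: Euclidean distance between probability
    vectors (one of several possible distance choices)."""
    t_sum, r_sum = sum(target), sum(reference)
    p = [v / t_sum for v in target]
    q = [v / r_sum for v in reference]
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

# Aggregate (e.g. SUM(sales) per region) on a data subset vs the full table.
subset_agg = [10.0, 40.0, 50.0]
full_agg   = [30.0, 35.0, 35.0]
utility = deviation_utility(subset_agg, full_agg)   # larger = more interesting
```

A visualization whose subset distribution matches the full data scores zero and is pruned early, which is what makes deviation a usable criterion for the large-scale candidate search described above.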
2014-01-01
Background Scale-up to industrial production level of a fermentation process occurs after optimization at small scale, a critical transition for successful technology transfer and commercialization of a product of interest. At the large scale a number of important bioprocess engineering problems arise that should be taken into account to match the values obtained at the small scale and achieve the highest productivity and quality possible. However, the changes of the host strain’s physiological and metabolic behavior in response to the scale transition are still not clear. Results Heterogeneity in substrate and oxygen distribution is an inherent factor at industrial scale (10,000 L) which affects the success of process up-scaling. To counteract these detrimental effects, changes in dissolved oxygen and pressure set points and addition of diluents were applied to 10,000 L scale to enable a successful process scale-up. A comprehensive semi-quantitative and time-dependent analysis of the exometabolome was performed to understand the impact of the scale-up on the metabolic/physiological behavior of the host microorganism. Intermediates from central carbon catabolism and mevalonate/ergosterol synthesis pathways were found to accumulate in both the 10 L and 10,000 L scale cultures in a time-dependent manner. Moreover, excreted metabolites analysis revealed that hypoxic conditions prevailed at the 10,000 L scale. The specific product yield increased at the 10,000 L scale, in spite of metabolic stress and catabolic-anabolic uncoupling unveiled by the decrease in biomass yield on consumed oxygen. Conclusions An optimized S. cerevisiae fermentation process was successfully scaled-up to an industrial scale bioreactor. The oxygen uptake rate (OUR) and overall growth profiles were matched between scales. The major remaining differences between scales were wet cell weight and culture apparent viscosity. 
The metabolic and physiological behavior of the host microorganism at the 10,000 L scale was investigated with exometabolomics, indicating that reduced oxygen availability affected oxidative phosphorylation cascading into down- and up-stream pathways producing overflow metabolism. Our study revealed striking metabolic and physiological changes in response to hypoxia exerted by industrial bioprocess up-scaling. PMID:24593159
Developing an African youth psychosocial assessment: an application of item response theory.
Betancourt, Theresa S; Yang, Frances; Bolton, Paul; Normand, Sharon-Lise
2014-06-01
This study aimed to refine a dimensional scale for measuring psychosocial adjustment in African youth using item response theory (IRT). A 60-item scale derived from qualitative data was administered to 667 war-affected adolescents (55% female). Exploratory factor analysis (EFA) determined the dimensionality of items based on goodness-of-fit indices. Items with loadings less than 0.4 were dropped. Confirmatory factor analysis (CFA) was used to confirm the scale's dimensionality found under the EFA. Item discrimination and difficulty were estimated using a graded response model for each subscale using weighted least squares means and variances. Predictive validity was examined through correlations between IRT scores (θ) for each subscale and ratings of functional impairment. All models were assessed using goodness-of-fit and comparative fit indices. Fisher's Information curves examined item precision at different underlying ranges of each trait. Original scale items were optimized and reconfigured into an empirically-robust 41-item scale, the African Youth Psychosocial Assessment (AYPA). Refined subscales assess internalizing and externalizing problems, prosocial attitudes/behaviors and somatic complaints without medical cause. The AYPA is a refined dimensional assessment of emotional and behavioral problems in African youth with good psychometric properties. Validation studies in other cultures are recommended. Copyright © 2014 John Wiley & Sons, Ltd.
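In the graded response model used above, the probability of responding in category k or higher follows a logistic curve in the latent trait, and category probabilities are differences of adjacent cumulative curves. A sketch for one hypothetical 4-category item (discrimination and threshold values are illustrative, not AYPA estimates):

```python
import math

def grm_category_probs(theta, discrimination, thresholds):
    """Samejima graded response model: probabilities of each ordered response
    category given latent trait theta. P*(k) = logistic(a*(theta - b_k)) is
    the probability of responding in category k or above; category
    probabilities are differences of adjacent P* values."""
    def p_star(b):
        return 1.0 / (1.0 + math.exp(-discrimination * (theta - b)))
    cum = [1.0] + [p_star(b) for b in thresholds] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]

# Hypothetical item: a = 1.8, ordered thresholds b = (-1.0, 0.0, 1.2).
probs = grm_category_probs(theta=0.5, discrimination=1.8,
                           thresholds=[-1.0, 0.0, 1.2])
```

The discrimination parameter a governs item precision (the height of the Fisher information curve), and the spread of the thresholds determines over which range of the trait the item is informative, which is the basis on which items were retained or dropped.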
Mc Gee, Shauna L; Höltge, Jan; Maercker, Andreas; Thoma, Myriam V
2017-08-11
The present study evaluated the revised Sense of Coherence (SOC-R) scale in a sample of older adults, using an extended range of psychological concepts. It further examined the psychometric properties of the revised scale and tested the theoretical assumptions underpinning the SOC-R concept. The SOC-R scale was evaluated in 268 Swiss older adults (mean age = 66.9 years), including n = 15 heavily traumatized former indentured child labourers. Standardised questionnaires collected information on positive and negative life experiences, resources, current health, and well-being. Confirmatory Factor Analysis indicated good model fit for a second-order three-factor model of SOC-R with the factors manageability, balance, and reflection. Satisfactory convergent and discriminant correlations were shown with related psychological concepts, including neuroticism (r = -.32, p < .01), optimism (r = .31, p < .01), and general self-efficacy (r = .49, p < .01). SOC-R was not observed to differ by age group. Moderation analyses indicated that SOC-R moderated the relationship between certain early-life adversities and mental health. The study provides support for the psychometric properties and theoretical assumptions of SOC-R and suggests that SOC-R is a valid and reliable measure suitable for use with older adults. Future studies should employ longitudinal designs to examine the stability of SOC-R.
Coral mass spawning predicted by rapid seasonal rise in ocean temperature
Maynard, Jeffrey A.; Edwards, Alasdair J.; Guest, James R.; Rahbek, Carsten
2016-01-01
Coral spawning times have been linked to multiple environmental factors; however, to what extent these factors act as generalized cues across multiple species and large spatial scales is unknown. We used a unique dataset of coral spawning from 34 reefs in the Indian and Pacific Oceans to test if month of spawning and peak spawning month in assemblages of Acropora spp. can be predicted by sea surface temperature (SST), photosynthetically available radiation, wind speed, current speed, rainfall or sunset time. Contrary to the classic view that high mean SST initiates coral spawning, we found rapid increases in SST to be the best predictor in both cases (month of spawning: R2 = 0.73, peak: R2 = 0.62). Our findings suggest that a rapid increase in SST provides the dominant proximate cue for coral mass spawning over large geographical scales. We hypothesize that coral spawning is ultimately timed to ensure optimal fertilization success. PMID:27170709
Acoustic Treatment Design Scaling Methods. Volume 3; Test Plans, Hardware, Results, and Evaluation
NASA Technical Reports Server (NTRS)
Yu, J.; Kwan, H. W.; Echternach, D. K.; Kraft, R. E.; Syed, A. A.
1999-01-01
The ability to design, build, and test miniaturized acoustic treatment panels on scale-model fan rigs representative of the full-scale engine provides not only cost savings but also an opportunity to optimize the treatment by allowing tests of different designs. To use scale-model treatment as a full-scale design tool, the designer must be able to reliably translate the scale-model design and performance to an equivalent full-scale design. The primary objective of the study presented in this volume of the final report was to conduct laboratory tests to evaluate liner acoustic properties and validate advanced treatment impedance models. These laboratory tests include DC flow resistance measurements, normal incidence impedance measurements, DC flow and impedance measurements in the presence of grazing flow, and in-duct liner attenuation as well as modal measurements. Test panels were fabricated at three different scale factors (i.e., full-scale, half-scale, and one-fifth scale) to support laboratory acoustic testing. The panel configurations include single-degree-of-freedom (SDOF) perforated sandwich panels, SDOF linear (wire mesh) liners, and double-degree-of-freedom (DDOF) linear acoustic panels.
NASA Astrophysics Data System (ADS)
Doherty, W.; Lightfoot, P. C.; Ames, D. E.
2014-08-01
The effects of polynomial interpolation and internal standardization drift corrections on the inter-measurement (statistical) dispersion of isotope ratios measured with a multi-collector plasma mass spectrometer were investigated using the (analyte, internal standard) isotope systems of (Ni, Cu), (Cu, Ni), (Zn, Cu), (Zn, Ga), (Sm, Eu), (Hf, Re) and (Pb, Tl). The performance of five different correction factors was compared using a statistical-range-based merit function ωm, which measures the accuracy and inter-measurement range of the instrument calibration. The frequency distribution of optimal correction factors over two hundred data sets uniformly favored three particular correction factors, while the remaining two correction factors accounted for a small but still significant contribution to the reduction of the inter-measurement dispersion. Application of the merit function is demonstrated using the detection of Cu and Ni isotopic fractionation in laboratory and geologic-scale chemical reactor systems. Solvent extraction (diphenylthiocarbazone for Cu and Pb; dimethylglyoxime for Ni) was used either to isotopically fractionate the metal during extraction by the method of competition or to isolate the Cu and Ni from the sample (sulfides and associated silicates). In the best case, differences in isotopic composition of ±3 in the fifth significant figure could be routinely and reliably detected for the 65Cu/63Cu and 61Ni/62Ni ratios. One of the internal standardization drift correction factors uses a least squares estimator to obtain a linear functional relationship between the measured analyte and internal standard isotope ratios. Graphical analysis demonstrates that the points on these graphs are defined by highly non-linear parametric curves, not by two linearly correlated quantities, which is the usual interpretation of such graphs.
The success of this particular internal standardization correction factor was found in some cases to be due to a fortuitous, scale dependent, parametric curve effect.
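A toy version of the least-squares internal-standardization idea described above: regress the measured analyte ratio on the measured internal-standard ratio, then use the fitted line to remove the drift the two ratios share. The "true" ratios, the drift magnitude, and the noise level below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
drift = 1.0 + 0.002 * np.arange(n)          # common instrumental drift over the run
r_is_true, r_a_true = 2.244, 0.4456         # hypothetical "true" ratios
r_is = r_is_true * drift * (1 + 1e-4 * rng.standard_normal(n))  # internal standard
r_a = r_a_true * drift * (1 + 1e-4 * rng.standard_normal(n))    # analyte

# Least-squares line through the (internal standard, analyte) ratio pairs.
m, c = np.polyfit(r_is, r_a, 1)

# Map every measurement back to the analyte ratio expected at the
# certified internal-standard ratio, removing the shared drift.
r_a_corrected = r_a - m * (r_is - r_is_true)

spread_raw = r_a.std() / r_a.mean()
spread_cor = r_a_corrected.std() / r_a_corrected.mean()
```

Because both ratios carry the same multiplicative drift, the correction collapses the inter-measurement spread down to roughly the noise floor; the abstract's caution applies, since the apparent linearity here is itself a parametric effect of the shared drift.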
Optimal linguistic expression in negotiations depends on visual appearance
Sakamoto, Maki; Kwon, Jinhwan; Tamada, Hikaru; Hirahara, Yumi
2018-01-01
We investigate the influence of the visual appearance of a negotiator on persuasiveness within the context of negotiations. Psychological experiments were conducted to quantitatively analyze the relationship between visual appearance and the use of language. Male and female participants were shown three female and male photographs, respectively. They were asked to report how they felt about each photograph using a seven-point semantic differential (SD) scale for six affective factors (positive impression, extraversion, intelligence, conscientiousness, emotional stability, and agreeableness). Participants then answered how they felt about each negotiation scenario (they were presented with pictures and a situation combined with negotiation sentences) using a seven-point SD scale for seven affective factors (positive impression, extraversion, intelligence, conscientiousness, emotional stability, agreeableness, and degree of persuasion). Two experiments were conducted using different participant groups depending on the negotiation situations. Photographs with good or bad appearances were found to show high or low degrees of persuasion, respectively. A multiple regression equation was obtained, indicating the importance of the three language factors (euphemistic, honorific, and sympathy expressions) to impressions made during negotiation. The result shows that there are optimal negotiation sentences based on various negotiation factors, such as visual appearance and use of language. For example, persons with good appearance might worsen their impression during negotiations by using certain language, although their initial impression was positive, and persons with bad appearance could effectively improve their impressions in negotiations through their use of language, although the final impressions of their negotiation counterpart might still be more negative than those for persons with good appearance. 
In contrast, the impressions made by persons of normal appearance were not easily affected by their use of language. The results of the present study have significant implications for future studies of effective negotiation strategies considering visual appearance as well as gender. PMID:29621361
Buckling Design and Imperfection Sensitivity of Sandwich Composite Launch-Vehicle Shell Structures
NASA Technical Reports Server (NTRS)
Schultz, Marc R.; Sleight, David W.; Myers, David E.; Waters, W. Allen, Jr.; Chunchu, Prasad B.; Lovejoy, Andrew W.; Hilburger, Mark W.
2016-01-01
Composite materials are increasingly being considered and used for launch-vehicle structures. For shell structures, such as interstages, skirts, and shrouds, honeycomb-core sandwich composites are often selected for their structural efficiency. Therefore, it is becoming increasingly important to understand the structural response, including buckling, of sandwich composite shell structures. Additionally, small geometric imperfections can significantly influence the buckling response, including considerably reducing the buckling load, of shell structures. Thus, both the response of the theoretically perfect structure and the buckling imperfection sensitivity must be considered during the design of such structures. To address the latter, empirically derived design factors, called buckling knockdown factors (KDFs), were developed by NASA in the 1960s to account for this buckling imperfection sensitivity during design. However, most of the test-article designs used in the development of these recommendations are not relevant to modern launch-vehicle constructions and material systems, and in particular, no composite test articles were considered. Herein, a two-part study on composite sandwich shells is presented to (1) examine the relationship between the buckling knockdown factor and the areal mass of optimized designs, and (2) interrogate the imperfection sensitivity of those optimized designs. Four structures from recent NASA launch-vehicle development activities are considered. First, designs optimized for both strength and stability were generated for each of these structures using design optimization software and a range of buckling knockdown factors; the designed areal masses varied between 6.1% and 19.6% over knockdown factors ranging from 0.6 to 0.9.
Next, the buckling imperfection sensitivity of the optimized designs is explored using nonlinear finite-element analysis and the as-measured shape of a large-scale composite cylindrical shell. When compared with the current buckling design recommendations, the results suggest that the current recommendations are overly conservative and that the development of new recommendations could reduce the acreage areal mass of many composite sandwich shell designs by between 4% and 19%, depending on the structure.
Fuzzy Adaptive Decentralized Optimal Control for Strict Feedback Nonlinear Large-Scale Systems.
Sun, Kangkang; Sui, Shuai; Tong, Shaocheng
2018-04-01
This paper considers the optimal decentralized fuzzy adaptive control design problem for a class of interconnected large-scale nonlinear systems in strict feedback form with unknown nonlinear functions. Fuzzy logic systems are introduced to approximate the unknown dynamics and the cost functions, and a state estimator is developed. By applying the state estimator and the backstepping recursive design algorithm, a decentralized feedforward controller is established. The backstepping decentralized feedforward control scheme transforms the considered interconnected large-scale nonlinear system in strict feedback form into an equivalent affine large-scale nonlinear system. Subsequently, an optimal decentralized fuzzy adaptive control scheme is constructed. The overall optimal decentralized fuzzy adaptive controller is composed of a decentralized feedforward control and an optimal decentralized control. It is proved that the developed optimal decentralized controller ensures that all variables of the control system are uniformly ultimately bounded and that the cost functions are minimized. Two simulation examples are provided to illustrate the validity of the developed optimal decentralized fuzzy adaptive control scheme.
Multiple response optimization for higher dimensions in factors and responses
Lu, Lu; Chapman, Jessica L.; Anderson-Cook, Christine M.
2016-07-19
When optimizing a product or process with multiple responses, a two-stage Pareto front approach is a useful strategy to evaluate and balance trade-offs between different estimated responses to seek optimum input locations for achieving the best outcomes. After objectively eliminating non-contenders in the first stage by looking for a Pareto front of superior solutions, graphical tools can be used to identify a final solution in the second subjective stage to compare options and match with user priorities. Until now, there have been limitations on the number of response variables and input factors that could effectively be visualized with existing graphical summaries. We present novel graphical tools that can be more easily scaled to higher dimensions, in both the input and response spaces, to facilitate informed decision making when simultaneously optimizing multiple responses. A key aspect of these graphics is that the potential solutions can be flexibly sorted to investigate specific queries, and that multiple aspects of the solutions can be simultaneously considered. As a result, recommendations are made about how to evaluate the impact of the uncertainty associated with the estimated response surfaces on decision making with higher dimensions.
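The objective first stage described above amounts to computing a Pareto front: discarding every candidate that some other candidate beats or ties on all responses. A minimal sketch, with all responses treated as minimized and invented candidate values:

```python
import numpy as np

def pareto_front(responses):
    """Return indices of non-dominated rows (all responses minimized).

    A row is dominated if another row is no worse in every response
    and strictly better in at least one.
    """
    r = np.asarray(responses, dtype=float)
    keep = []
    for i in range(len(r)):
        dominated = np.any(np.all(r <= r[i], axis=1) & np.any(r < r[i], axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Candidate input locations evaluated on two responses to be minimized.
resp = np.array([[1.0, 5.0],
                 [2.0, 2.0],
                 [3.0, 1.0],
                 [3.0, 3.0],   # dominated by [2, 2]
                 [4.0, 4.0]])  # dominated by [2, 2]
front = pareto_front(resp)
```

Only the surviving front is then carried into the subjective second stage, where graphical comparison against user priorities selects the final solution.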
Diagnostic depressive symptoms of the mixed bipolar episode.
Cassidy, F; Ahearn, E; Murry, E; Forest, K; Carroll, B J
2000-03-01
There is not yet consensus on the best diagnostic definition of mixed bipolar episodes. Many have suggested the DSM-III-R/-IV definition is too rigid. We propose alternative criteria using data from a large patient cohort. We evaluated 237 manic in-patients using DSM-III-R criteria and the Scale for Manic States (SMS). A bimodally distributed factor of dysphoric mood has been reported from the SMS data. We used both the factor and the DSM-III-R classifications to identify candidate depressive symptoms and then developed three candidate depressive symptom sets. Using ROC analysis, we determined the optimal threshold number of symptoms in each set and compared the three ROC solutions. The optimal solution was tested against the DSM-III-R classification for cross-validation. The optimal ROC solution was a set derived from both the DSM-III-R and the SMS, and the optimal threshold for diagnosis was two or more symptoms. Applying this set iteratively to the DSM-III-R classification produced the identical ROC solution. The prevalence of mixed episodes in the cohort was 13.9% by DSM-III-R, 20.2% by the dysphoria factor and 27.4% by the new ROC solution. A diagnostic set of six dysphoric symptoms (depressed mood, anhedonia, guilt, suicide, fatigue and anxiety), with a threshold of two symptoms, is proposed for a mixed episode. This new definition has a foundation in clinical data, in the proven diagnostic performance of the qualifying symptoms, and in ROC validation against two previous definitions that each have face validity.
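The ROC threshold search described above can be sketched by scanning candidate symptom-count cutoffs against a reference classification and keeping the one with the best operating point. Youden's J (sensitivity + specificity - 1) is used here as one common ROC criterion; the abstract does not specify the study's exact criterion, and the toy data below are invented, not the study's cohort.

```python
import numpy as np

def best_symptom_threshold(counts, reference):
    """Pick the symptom-count cutoff maximizing Youden's J = sens + spec - 1.

    counts    : number of dysphoric symptoms per patient
    reference : 1 if the patient is 'mixed' under the reference definition
    """
    counts = np.asarray(counts)
    reference = np.asarray(reference).astype(bool)
    best = (None, -np.inf)
    for t in range(1, counts.max() + 1):
        pred = counts >= t               # diagnose 'mixed' at >= t symptoms
        sens = np.mean(pred[reference])
        spec = np.mean(~pred[~reference])
        j = sens + spec - 1.0
        if j > best[1]:
            best = (t, j)
    return best

# Toy data: 'mixed' patients under the reference tend to carry >= 2 symptoms.
counts    = [0, 1, 1, 2, 2, 3, 4, 0, 1, 5]
reference = [0, 0, 0, 1, 1, 1, 1, 0, 0, 1]
threshold, j = best_symptom_threshold(counts, reference)
```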
Oh, Jihoon; Chae, Jeong-Ho
2018-04-01
Although heart rate variability (HRV) may be a crucial marker of mental health, how it relates to positive psychological factors (i.e. attitude to life and positive thinking) is largely unknown. Here we investigated the correlation of HRV linear and nonlinear dynamics with psychological scales that measured degree of optimism and happiness in patients with anxiety disorders. Results showed that the low- to high-frequency HRV ratio (LF/HF) was increased and the HRV HF parameter was decreased in subjects who were more optimistic and who felt happier in daily living. Nonlinear analysis also showed that HRV dispersion and regulation were significantly correlated with the subjects' optimism and purpose in life. Our findings show that HRV properties may be related to the degree of optimistic perspective on life and suggest that HRV markers of autonomic nervous system function could reflect positive states of the human mind.
Aerodynamic configuration design using response surface methodology analysis
NASA Technical Reports Server (NTRS)
Engelund, Walter C.; Stanley, Douglas O.; Lepsch, Roger A.; Mcmillin, Mark M.; Unal, Resit
1993-01-01
An investigation has been conducted to determine a set of optimal design parameters for a single-stage-to-orbit reentry vehicle. Several configuration geometry parameters which had a large impact on the entry vehicle flying characteristics were selected as design variables: the fuselage fineness ratio, the nose to body length ratio, the nose camber value, the wing planform area scale factor, and the wing location. The optimal geometry parameter values were chosen using a response surface methodology (RSM) technique which allowed for a minimum dry weight configuration design that met a set of aerodynamic performance constraints on the landing speed, and on the subsonic, supersonic, and hypersonic trim and stability levels. The RSM technique utilized, specifically the central composite design method, is presented, along with the general vehicle conceptual design process. Results are presented for an optimized configuration along with several design trade cases.
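The response surface methodology step above can be sketched in miniature: evaluate the response over a central-composite-style set of coded design points, fit the full second-order model, and solve for the stationary point of the fitted surface. The two-factor design and the "dry weight" response below are invented for illustration and are unrelated to the actual vehicle model or its constraints.

```python
import numpy as np

# Face-centred central-composite-style design in two coded factors
# (corner, axial, and centre points).
pts = np.array([[-1.0, -1.0], [1.0, -1.0], [-1.0, 1.0], [1.0, 1.0],
                [-1.0, 0.0], [1.0, 0.0], [0.0, -1.0], [0.0, 1.0],
                [0.0, 0.0]])

def dry_weight(x):
    # Hypothetical dry-weight response with its minimum at (0.3, -0.2).
    return 10.0 + (x[:, 0] - 0.3) ** 2 + 2.0 * (x[:, 1] + 0.2) ** 2

y = dry_weight(pts)

# Fit the full second-order model
#   w ~ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1],
                     pts[:, 0] ** 2, pts[:, 1] ** 2, pts[:, 0] * pts[:, 1]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point of the fitted quadratic: solve A @ x = -g, where A is
# the Hessian of the fitted surface and g its linear-term gradient.
A = np.array([[2.0 * beta[3], beta[5]], [beta[5], 2.0 * beta[4]]])
x_opt = np.linalg.solve(A, -beta[1:3])
```

In the actual study the optimum must additionally satisfy aerodynamic performance constraints, which this unconstrained sketch omits.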
NASA Astrophysics Data System (ADS)
Paloma, Cynthia S.
The plasma electron temperature (Te) plays a critical role in a tokamak nuclear fusion reactor, since temperatures on the order of 10^8 K are required to achieve fusion conditions. Many plasma properties in a tokamak nuclear fusion reactor are modeled by partial differential equations (PDEs) because they depend not only on time but also on space. In particular, the dynamics of the electron temperature is governed by a PDE referred to as the Electron Heat Transport Equation (EHTE). In this work, a numerical method is developed to solve the EHTE based on a custom finite-difference technique. The solution of the EHTE is compared to temperature profiles obtained by using TRANSP, a sophisticated plasma transport code, for specific discharges from the DIII-D tokamak, located at the DIII-D National Fusion Facility in San Diego, CA. The thermal conductivity (also called thermal diffusivity) of the electrons (Xe) is a plasma parameter that plays a critical role in the EHTE, since it indicates how the electron temperature diffusion varies across the minor effective radius of the tokamak. TRANSP approximates Xe through a curve-fitting technique to match experimentally measured electron temperature profiles. While complex physics-based models have been proposed for Xe, there is a lack of a simple mathematical model for the thermal diffusivity that could be used for control design. In this work, a model for Xe is proposed based on a scaling law involving key plasma variables such as the electron temperature (Te), the electron density (ne), and the safety factor (q). An optimization algorithm is developed based on the Sequential Quadratic Programming (SQP) technique to optimize the scaling factors appearing in the proposed model so that the predicted electron temperature and magnetic flux profiles match predefined target profiles in the best possible way.
A simulation study summarizing the outcomes of the optimization procedure is presented to illustrate the potential of the proposed modeling method.
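The scaling-law idea above can be illustrated in a deliberately simplified form. The study itself tunes the scaling factors with SQP against target temperature and flux profiles; the sketch below instead fits the exponents of a power law Xe = c * Te^a * ne^b * q^g directly to noise-free synthetic diffusivity data, where taking logarithms turns the fit into a linear least-squares problem. All constants, ranges, and units are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
Te = rng.uniform(0.5, 5.0, n)   # electron temperature (illustrative units)
ne = rng.uniform(1.0, 8.0, n)   # electron density
q = rng.uniform(1.0, 6.0, n)    # safety factor

# Hypothetical "true" scaling law: chi_e = c * Te^a * ne^b * q^g.
c_true, a_true, b_true, g_true = 0.8, 1.5, -0.5, 0.7
chi = c_true * Te**a_true * ne**b_true * q**g_true

# Taking logarithms turns the power law into a linear least-squares problem:
#   log chi = log c + a*log Te + b*log ne + g*log q.
X = np.column_stack([np.ones(n), np.log(Te), np.log(ne), np.log(q)])
coef, *_ = np.linalg.lstsq(X, np.log(chi), rcond=None)
c_fit = np.exp(coef[0])
a_fit, b_fit, g_fit = coef[1:]
```

With noise-free data the exponents are recovered exactly; the SQP formulation in the study is needed because the real objective is profile matching under the EHTE, not a direct regression on Xe.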
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmidt, Matthew, E-mail: matthew.schmidt@varian.com; Grzetic, Shelby; Lo, Joseph Y.
Purpose: Prior work by the authors and other groups has studied the creation of automated intensity modulated radiotherapy (IMRT) plans of equivalent quality to those in a patient database of manually created clinical plans; those database plans provided guidance on the achievable sparing to organs-at-risk (OARs). However, in certain sites, such as head-and-neck, the clinical plans may not be sufficiently optimized because of anatomical complexity and clinical time constraints. This could lead to automated plans that suboptimally exploit OAR sparing. This work investigates a novel dose warping and scaling scheme that attempts to reduce effects of suboptimal sparing in clinical database plans, thus improving the quality of semiautomated head-and-neck cancer (HNC) plans. Methods: Knowledge-based radiotherapy (KBRT) plans for each of ten “query” patients were semiautomatically generated by identifying the most similar “match” patient in a database of 103 clinical manually created patient plans. The match patient’s plans were adapted to the query case by: (1) deforming the match beam fluences to suit the query target volume and (2) warping the match primary/boost dose distribution to suit the query geometry and using the warped distribution to generate query primary/boost optimization dose-volume constraints. Item (2) included a distance scaling factor to improve query OAR dose sparing with respect to the possibly suboptimal clinical match plan. To further compensate for a component plan of the match case (primary/boost) not optimally sparing OARs, the query dose volume constraints were reduced using a dose scaling factor to be the minimum from either (a) the warped component plan (primary or boost) dose distribution or (b) the warped total plan dose distribution (primary + boost) scaled in proportion to the ratio of component prescription dose to total prescription dose. 
The dose-volume constraints were used to plan the query case with no human intervention to adjust constraints during plan optimization. Results: KBRT and original clinical plans were dosimetrically equivalent for parotid glands (mean/median doses), spinal cord, and brainstem (maximum doses). KBRT plans significantly reduced larynx median doses (21.5 ± 6.6 Gy to 17.9 ± 3.9 Gy), and oral cavity mean (32.3 ± 6.2 Gy to 28.9 ± 5.4 Gy) and median (28.7 ± 5.7 Gy to 23.2 ± 5.3 Gy) doses. Doses to ipsilateral parotid gland, larynx, oral cavity, and brainstem were lower or equivalent in the KBRT plans for the majority of cases. By contrast, KBRT plans generated without the dose warping and dose scaling steps were not significantly different from the clinical plans. Conclusions: Fast, semiautomatically generated HNC IMRT plans adapted from existing plans in a clinical database can be of equivalent or better quality than manually created plans. The reductions in OAR doses in the semiautomated plans, compared to the clinical plans, indicate that the proposed dose warping and scaling method shows promise in mitigating the impact of suboptimal clinical plans.
[Motivation to quit smoking among ex-smoker university workers and students].
Behn, V; Sotomayor, H; Cruz, M; Naveas, R
2001-05-01
In Chile, 10% of deaths in adults are directly attributed to smoking. The aim of this study was to identify intrinsic and extrinsic motivations to quit smoking among a group of subjects who quit without external help. Motivations to quit smoking were measured using the 20-item Reasons for Quitting (RFQ) scale in 145 ex-smokers (80 students and 65 workers at the University of Concepción). The scale identifies intrinsic motivations in the categories of health and self-control, and extrinsic motivations in the categories of immediate reinforcement and social pressure. Factor analysis with orthogonal rotation of the 20 scale items suggested an optimal solution with five factors, which had a maximal impact of 0.43 and explained the motivations of up to 66% of workers and 65% of students. The factors with the greatest impact were the items of immediate reinforcement, social pressure and self-control. The category health had only a 6% influence on the modification of smoking habits. The most important motivations to quit smoking in this sample were immediate reinforcement, social pressure and self-control. The analysis of motivations will help orient smoking cessation programs.
Optimal Information Extraction of Laser Scanning Dataset by Scale-Adaptive Reduction
NASA Astrophysics Data System (ADS)
Zang, Y.; Yang, B.
2018-04-01
3D laser scanning is widely used to collect surface information about objects. For many applications, a point cloud of good perceptual quality must be extracted from the scanned points. Most existing methods extract important points at a single fixed scale; however, the geometric features of a 3D object arise at multiple scales. We propose a multi-scale construction method based on radial basis functions. At each scale, important points are extracted from the point cloud according to their importance, and a Just-Noticeable-Difference perception metric is applied to measure the degradation of each geometric scale. In this way, scale-adaptive optimal information extraction is realized. Experiments evaluating the effectiveness of the proposed method suggest a reliable solution for optimal information extraction from scanned objects.
Factors related to community participation by stroke victims six months post-stroke.
Jalayondeja, Chutima; Kaewkungwal, Jaranit; Sullivan, Patricia E; Nidhinandana, Samart; Pichaiyongwongdee, Sopa; Jareinpituk, Sutthi
2011-07-01
Participation in the community socially by stroke victims is an optimal outcome post-stroke. We carried out a cohort study to evaluate a model for community participation by Thai stroke victims six months post-stroke. Six standardized instruments were used to assess the patients' status 1, 3 and 6 months after stroke: the modified Rankin Scale, the National Institutes of Health Stroke Scale, the Fugl-Meyer Assessment and the Berg Balance Scale, while performance of activities of daily living and community ambulation were measured using the Barthel Index and walking velocity. Participation in the community was measured by the Stroke Impact Scale. The outcomes, demographics and stroke-related variables were analyzed using generalized estimating equations. Of the 98 subjects who completed the follow-up assessment, 72 (86.5%) felt they had more participation in the community 6 months post-stroke. The level of disability, performance of independent activities and length of time receiving physical therapy were associated with the perceived level of participation in the community among stroke victims 6 months post-stroke. To achieve a goal of good participation in the community among stroke victims, health care planning should focus on improving the stroke victim's ability to independently perform daily activities. The average length of physical therapy ranged from 1 to 6 months, at 3 to 8 hours/month. Clinical practice guidelines should be explored to optimize participation in the community.
2016-08-10
AFRL-AFOSR-JP-TR-2016-0073: Large-scale Linear Optimization through Machine Learning: From Theory to Practical System Design and Implementation (2016). [Report documentation page; only an abstract fragment is recoverable:] ...performances on various machine learning tasks and it naturally lends itself to fast parallel implementations. Despite this, very little work has been
Psychosocial factors associated with flourishing among Australian HIV-positive gay men.
Lyons, Anthony; Heywood, Wendy; Rozbroj, Tomas
2016-09-15
Mental health outcomes among HIV-positive gay men are generally poorer than in the broader population. However, not all men in this population experience mental health problems. Although much is known about factors associated with depression and anxiety among HIV-positive gay men, little is known about factors associated with positive mental health. Such knowledge can be useful for optimizing well-being support programs for HIV-positive gay men. In this study, we examined flourishing, which broadly covers most aspects of positive mental health. A sample of 357 Australian HIV-positive gay men completed a survey on their mental health and well-being, including the Flourishing Scale. Given the lack of previous research, we explored a wide range of psychosocial factors, including demographics, stigma, discrimination, and social support, to identify key factors linked to flourishing. The sample showed a similar level of flourishing to those in general population samples. Several independent factors were found to be associated with flourishing outcomes. Those who were most likely to be flourishing tended to have low or no internalized HIV-related stigma, were employed, received higher levels of practical support, had a sense of companionship with others, and felt supported by family. These and other findings presented in this article may be used to help inform strategies for promoting optimal levels of mental health, and its associated general health benefits, among HIV-positive gay men.
Public-private delivery of insecticide-treated nets: a voucher scheme in Volta Region, Ghana
Kweku, Margaret; Webster, Jayne; Taylor, Ian; Burns, Susan; Dedzo, McDamien
2007-01-01
Background: Coverage of vulnerable groups with insecticide-treated nets (ITNs) in Ghana, as in the majority of countries of sub-Saharan Africa, is currently low. A voucher scheme was introduced in Volta Region as a possible sustainable delivery system for increasing this coverage through scale-up to other regions. Successful scale-up of public health interventions depends upon optimal delivery processes, but operational research for delivery processes in large-scale implementation has been inadequate. Methods: A simple tool was developed to monitor numbers of vouchers given to each health facility, numbers issued to pregnant women by the health staff, and numbers redeemed by the distributors back to the management agent. Three rounds of interviews were undertaken with health facility staff, retailers and pregnant women who had attended antenatal clinic (ANC). Results: During the one-year pilot, 25,926 vouchers were issued to eligible women from clinics, which equates to 50.7% of the 51,658 ANC registrants during this time period. Of the vouchers issued, 66.7% were redeemed by distributors back to the management agent. Initially, non-issuing of vouchers to pregnant women was mainly due to eligibility criteria imposed by the midwives; later in the year it was due to decisions of the pregnant women, and supply constraints. These in turn were heavily influenced by factors external to the programme: current household ownership of nets, competing ITN delivery strategies, and competition for the limited number of ITNs available in the country from major urban areas of other regions. Conclusion: Both issuing and redemption of vouchers should be monitored, as factors assumed to influence voucher redemption had an influence on issuing, and vice versa. More evidence is needed on how specific contextual factors influence the success of voucher schemes and other models of delivery of ITNs. 
Such an evidence base will facilitate optimal strategic decision making so that the delivery model with the best probability of success within a given context is implemented. Rigorous monitoring has an important role to play in the successful scaling-up of delivery of effective public health interventions. PMID:17274810
Karichappan, Thirugnanasambandham; Venkatachalam, Sivakumar; Jeganathan, Prakash Maran
2014-01-10
Discharge of grey wastewater into the ecological system causes the negative impact effect on receiving water bodies. In this present study, electrocoagulation process (EC) was investigated to treat grey wastewater under different operating conditions such as initial pH (4-8), current density (10-30 mA/cm2), electrode distance (4-6 cm) and electrolysis time (5-25 min) by using stainless steel (SS) anode in batch mode. Four factors with five levels Box-Behnken response surface design (BBD) was employed to optimize and investigate the effect of process variables on the responses such as total solids (TS), chemical oxygen demand (COD) and fecal coliform (FC) removal. The process variables showed significant effect on the electrocoagulation treatment process. The results were analyzed by Pareto analysis of variance (ANOVA) and second order polynomial models were developed in order to study the electrocoagulation process statistically. The optimal operating conditions were found to be: initial pH of 7, current density of 20 mA/cm2, electrode distance of 5 cm and electrolysis time of 20 min. These results indicated that EC process can be scale up in large scale level to treat grey wastewater with high removal efficiency of TS, COD and FC.
Optimizing adherence to antiretroviral therapy
Sahay, Seema; Reddy, K. Srikanth; Dhayarkar, Sampada
2011-01-01
HIV has now become a manageable chronic disease. However, the treatment outcomes may get hampered by suboptimal adherence to ART. Adherence optimization is a concrete reality in the wake of ‘universal access’ and it is imperative to learn lessons from various studies and programmes. This review examines current literature on ART scale up, treatment outcomes of the large scale programmes and the role of adherence therein. Social, behavioural, biological and programme related factors arise in the context of ART adherence optimization. While emphasis is laid on adherence, retention of patients under the care umbrella emerges as a major challenge. An in-depth understanding of patients’ health seeking behaviour and health care delivery system may be useful in improving adherence and retention of patients in care continuum and programme. A theoretical framework to address the barriers and facilitators has been articulated to identify problematic areas in order to intervene with specific strategies. Empirically tested objective adherence measurement tools and approaches to assess adherence in clinical/ programme settings are required. Strengthening of ART programmes would include appropriate policies for manpower and task sharing, integrating traditional health sector, innovations in counselling and community support. Implications for the use of theoretical model to guide research, clinical practice, community involvement and policy as part of a human rights approach to HIV disease is suggested. PMID:22310817
[The optimizing design and experiment for a MOEMS micro-mirror spectrometer].
Mo, Xiang-xia; Wen, Zhi-yu; Zhang, Zhi-hai; Guo, Yuan-jun
2011-12-01
A MOEMS micro-mirror spectrometer, which uses micro-mirror as a light switch so that spectrum can be detected by a single detector, has the advantages of transforming DC into AC, applying Hadamard transform optics without additional template, high pixel resolution and low cost. In this spectrometer, the vital problem is the conflict between the scales of slit and the light intensity. Hence, in order to improve the resolution of this spectrometer, the present paper gives the analysis of the new effects caused by micro structure, and optimal values of the key factors. Firstly, the effects of diffraction limitation, spatial sample rate and curved slit image on the resolution of the spectrum were proposed. Then, the results were simulated; the key values were tested on the micro mirror spectrometer. Finally, taking all these three effects into account, this micro system was optimized. With a scale of 70 mm x 130 mm, decreasing the height of the image at the plane of micro mirror can not diminish the influence of curved slit image in the spectrum; under the demand of spatial sample rate, the resolution must be twice over the pixel resolution; only if the width of the slit is 1.818 microm and the pixel resolution is 2.2786 microm can the spectrometer have the best performance.
NASA Astrophysics Data System (ADS)
He, L.; Chen, J. M.; Liu, J.; Mo, G.; Zhen, T.; Chen, B.; Wang, R.; Arain, M.
2013-12-01
Terrestrial ecosystem models have been widely used to simulate carbon, water and energy fluxes and climate-ecosystem interactions. In these models, some vegetation and soil parameters are determined based on limited studies from literatures without consideration of their seasonal variations. Data assimilation (DA) provides an effective way to optimize these parameters at different time scales . In this study, an ensemble Kalman filter (EnKF) is developed and applied to optimize two key parameters of an ecosystem model, namely the Boreal Ecosystem Productivity Simulator (BEPS): (1) the maximum photosynthetic carboxylation rate (Vcmax) at 25 °C, and (2) the soil water stress factor (fw) for stomatal conductance formulation. These parameters are optimized through assimilating observations of gross primary productivity (GPP) and latent heat (LE) fluxes measured in a 74 year-old pine forest, which is part of the Turkey Point Flux Station's age-sequence sites. Vcmax is related to leaf nitrogen concentration and varies slowly over the season and from year to year. In contrast, fw varies rapidly in response to soil moisture dynamics in the root-zone. Earlier studies suggested that DA of vegetation parameters at daily time steps leads to Vcmax values that are unrealistic. To overcome the problem, we developed a three-step scheme to optimize Vcmax and fw. First, the EnKF is applied daily to obtain precursor estimates of Vcmax and fw. Then Vcmax is optimized at different time scales assuming fw is unchanged from first step. The best temporal period or window size is then determined by analyzing the magnitude of the minimized cost-function, and the coefficient of determination (R2) and Root-mean-square deviation (RMSE) of GPP and LE between simulation and observation. Finally, the daily fw value is optimized for rain free days corresponding to the Vcmax curve from the best window size. The optimized fw is then used to model its relationship with soil moisture. 
We found that the optimized fw is best correlated linearly to soil water content at 5 to 10 cm depth. We also found that both the temporal scale or window size and the priori uncertainty of Vcmax (given as its standard deviation) are important in determining the seasonal trajectory of Vcmax. During the leaf expansion stage, an appropriate window size leads to reasonable estimate of Vcmax. In the summer, the fluctuation of optimized Vcmax is mainly caused by the uncertainties in Vcmax but not the window size. Our study suggests that a smooth Vcmax curve optimized from an optimal time window size is close to the reality though the RMSE of GPP at this window is not the minimum. It also suggests that for the accurate optimization of Vcmax, it is necessary to set appropriate levels of uncertainty of Vcmax in the spring and summer because the rate of leaf nitrogen concentration change is different over the season. Parameter optimizations for more sites and multi-years are in progress.
Wang, Dongmin; Liu, Xiangnan
2018-03-06
Remote sensing can actively monitor heavy metal contamination in crops, but with the increase of satellite sensors, the optimal scale for monitoring heavy metal stress in rice is still unknown. This study focused on identifying the optimal scale by comparing the ability to detect heavy metal stress in rice at various spatial scales. The 2 m, 8 m, and 16 m resolution GF-1 (China) data and the 30 m resolution HJ-1 (China) data were used to invert leaf area index (LAI). The LAI was the input parameter of the World Food Studies (WOFOST) model, and we obtained the dry weight of storage organs (WSO) and dry weight of roots (WRT) through the assimilation method; then, the mass ratio of rice storage organs and roots (SORMR) was calculated. Through the comparative analysis of SORMR at each spatial scale of data, we determined the optimal scale to monitor heavy metal stress in rice. The following conclusions were drawn: (1) SORMR could accurately and effectively monitor heavy metal stress; (2) the 8 m and 16 m images from GF-1 were suitable for monitoring heavy metal stress in rice; (3) 16 m was considered the optimal scale to assess heavy metal stress in rice.
Singh, Kunwar P; Rai, Premanjali; Pandey, Priyanka; Sinha, Sarita
2012-01-01
The present research aims to investigate the individual and interactive effects of chlorine dose/dissolved organic carbon ratio, pH, temperature, bromide concentration, and reaction time on trihalomethanes (THMs) formation in surface water (a drinking water source) during disinfection by chlorination in a prototype laboratory-scale simulation and to develop a model for the prediction and optimization of THMs levels in chlorinated water for their effective control. A five-factor Box-Behnken experimental design combined with response surface and optimization modeling was used for predicting the THMs levels in chlorinated water. The adequacy of the selected model and statistical significance of the regression coefficients, independent variables, and their interactions were tested by the analysis of variance and t test statistics. The THMs levels predicted by the model were very close to the experimental values (R(2) = 0.95). Optimization modeling predicted maximum (192 μg/l) TMHs formation (highest risk) level in water during chlorination was very close to the experimental value (186.8 ± 1.72 μg/l) determined in laboratory experiments. The pH of water followed by reaction time and temperature were the most significant factors that affect the THMs formation during chlorination. The developed model can be used to determine the optimum characteristics of raw water and chlorination conditions for maintaining the THMs levels within the safe limit.
COPS: Large-scale nonlinearly constrained optimization problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bondarenko, A.S.; Bortz, D.M.; More, J.J.
2000-02-10
The authors have started the development of COPS, a collection of large-scale nonlinearly Constrained Optimization Problems. The primary purpose of this collection is to provide difficult test cases for optimization software. Problems in the current version of the collection come from fluid dynamics, population dynamics, optimal design, and optimal control. For each problem they provide a short description of the problem, notes on the formulation of the problem, and results of computational experiments with general optimization solvers. They currently have results for DONLP2, LANCELOT, MINOS, SNOPT, and LOQO.
Using the PORS Problems to Examine Evolutionary Optimization of Multiscale Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reinhart, Zachary; Molian, Vaelan; Bryden, Kenneth
2013-01-01
Nearly all systems of practical interest are composed of parts assembled across multiple scales. For example, an agrodynamic system is composed of flora and fauna on one scale; soil types, slope, and water runoff on another scale; and management practice and yield on another scale. Or consider an advanced coal-fired power plant: combustion and pollutant formation occurs on one scale, the plant components on another scale, and the overall performance of the power system is measured on another. In spite of this, there are few practical tools for the optimization of multiscale systems. This paper examines multiscale optimization of systemsmore » composed of discrete elements using the plus-one-recall-store (PORS) problem as a test case or study problem for multiscale systems. From this study, it is found that by recognizing the constraints and patterns present in discrete multiscale systems, the solution time can be significantly reduced and much more complex problems can be optimized.« less
Evans, Steven T; Stewart, Kevin D; Afdahl, Chris; Patel, Rohan; Newell, Kelcy J
2017-07-14
In this paper, we discuss the optimization and implementation of a high throughput process development (HTPD) tool that utilizes commercially available micro-liter sized column technology for the purification of multiple clinically significant monoclonal antibodies. Chromatographic profiles generated using this optimized tool are shown to overlay with comparable profiles from the conventional bench-scale and clinical manufacturing scale. Further, all product quality attributes measured are comparable across scales for the mAb purifications. In addition to supporting chromatography process development efforts (e.g., optimization screening), comparable product quality results at all scales makes this tool is an appropriate scale model to enable purification and product quality comparisons of HTPD bioreactors conditions. The ability to perform up to 8 chromatography purifications in parallel with reduced material requirements per run creates opportunities for gathering more process knowledge in less time. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Henry, Julie D; Crawford, John R
2005-06-01
To test the construct validity of the short-form version of the Depression anxiety and stress scale (DASS-21), and in particular, to assess whether stress as indexed by this measure is synonymous with negative affectivity (NA) or whether it represents a related, but distinct, construct. To provide normative data for the general adult population. Cross-sectional, correlational and confirmatory factor analysis (CFA). The DASS-21 was administered to a non-clinical sample, broadly representative of the general adult UK population (N = 1,794). Competing models of the latent structure of the DASS-21 were evaluated using CFA. The model with optimal fit (RCFI = 0.94) had a quadripartite structure, and consisted of a general factor of psychological distress plus orthogonal specific factors of depression, anxiety, and stress. This model was a significantly better fit than a competing model that tested the possibility that the Stress scale simply measures NA. The DASS-21 subscales can validly be used to measure the dimensions of depression, anxiety, and stress. However, each of these subscales also taps a more general dimension of psychological distress or NA. The utility of the measure is enhanced by the provision of normative data based on a large sample.
Field Scale Optimization for Long-Term Sustainability of Best Management Practices in Watersheds
NASA Astrophysics Data System (ADS)
Samuels, A.; Babbar-Sebens, M.
2012-12-01
Agricultural and urban land use changes have led to disruption of natural hydrologic processes and impairment of streams and rivers. Multiple previous studies have evaluated Best Management Practices (BMPs) as means for restoring existing hydrologic conditions and reducing impairment of water resources. However, planning of these practices have relied on watershed scale hydrologic models for identifying locations and types of practices at scales much coarser than the actual field scale, where landowners have to plan, design and implement the practices. Field scale hydrologic modeling provides means for identifying relationships between BMP type, spatial location, and the interaction between BMPs at a finer farm/field scale that is usually more relevant to the decision maker (i.e. the landowner). This study focuses on development of a simulation-optimization approach for field-scale planning of BMPs in the School Branch stream system of Eagle Creek Watershed, Indiana, USA. The Agricultural Policy Environmental Extender (APEX) tool is used as the field scale hydrologic model, and a multi-objective optimization algorithm is used to search for optimal alternatives. Multiple climate scenarios downscaled to the watershed-scale are used to test the long term performance of these alternatives and under extreme weather conditions. The effectiveness of these BMPs under multiple weather conditions are included within the simulation-optimization approach as a criteria/goal to assist landowners in identifying sustainable design of practices. The results from these scenarios will further enable efficient BMP planning for current and future usage.
Rafique, Rafia; Anjum, Afifa
2015-01-01
Coronary Heart Disease (CHD) occurs to a greater extent in developed than developing countries like Pakistan. Our understanding of risk factors leading to this disease in women, are largely derived from studies carried out on samples obtained from developed countries. Since prevalence of CHD in Pakistan is growing, it seems pertinent to infer risk and protective factors prevalent within the Pakistani women. This case control study investigated the role of psychological, traditional and gender specific risk and protective factors for Angina in a sample of Pakistani women aged between 35-65 years. Female patients admitted with first episode of Angina fulfilling the study inclusion/exclusion criteria were recruited within the first three days of stay in the hospital. One control per case matched on age was recruited. Translated versions of standardized tools: Life Orientation Test (LOT), The Hope Scale, Subjective Happiness Scale and Depression, Anxiety and Stress Scale (DASS) were used to measure the psychological variables. Information on medical conditions like diabetes, hypertension, family history of IHD, presence and absence of menopause and use of oral contraceptive pills was obtained from the participants. Body Mass Index for cases and controls was calculated separately with the help of height and weight recorded for the participants. Multivariate logistic regression analyses revealed that depression, anxiety and stress are risk factors, were as optimism and hope are protective predictors of Angina. 64% and 85% of variance in Angina were attributed to psychological factors. Menopause, diabetes and hypertension are significantly associated with the risk of Angina, explaining 37% and 49% of variance in Angina. The study provides evidence for implementation of gender specific risk assessment and preventive strategies for Angina. 
The study gives directions for large scale prospective, epidemiological, longitudinal as well as interventional studies, to be tailored for indigenous population and secondly development and standardization of measures to appraise psychological factors of Angina prevalent within the Pakistani population.
Network placement optimization for large-scale distributed system
NASA Astrophysics Data System (ADS)
Ren, Yu; Liu, Fangfang; Fu, Yunxia; Zhou, Zheng
2018-01-01
The network geometry strongly influences the performance of the distributed system, i.e., the coverage capability, measurement accuracy and overall cost. Therefore the network placement optimization represents an urgent issue in the distributed measurement, even in large-scale metrology. This paper presents an effective computer-assisted network placement optimization procedure for the large-scale distributed system and illustrates it with the example of the multi-tracker system. To get an optimal placement, the coverage capability and the coordinate uncertainty of the network are quantified. Then a placement optimization objective function is developed in terms of coverage capabilities, measurement accuracy and overall cost. And a novel grid-based encoding approach for Genetic algorithm is proposed. So the network placement is optimized by a global rough search and a local detailed search. Its obvious advantage is that there is no need for a specific initial placement. At last, a specific application illustrates this placement optimization procedure can simulate the measurement results of a specific network and design the optimal placement efficiently.
Tian, Bao-Guo; Si, Ji-Tao; Zhao, Yan; Wang, Hong-Tao; Hao, Ji-Ming
2007-01-01
This paper deals with the procedure and methodology which can be used to select the optimal treatment and disposal technology of municipal solid waste (MSW), and to provide practical and effective technical support to policy-making, on the basis of study on solid waste management status and development trend in China and abroad. Focusing on various treatment and disposal technologies and processes of MSW, this study established a Monte-Carlo mathematical model of cost minimization for MSW handling subjected to environmental constraints. A new method of element stream (such as C, H, O, N, S) analysis in combination with economic stream analysis of MSW was developed. By following the streams of different treatment processes consisting of various techniques from generation, separation, transfer, transport, treatment, recycling and disposal of the wastes, the element constitution as well as its economic distribution in terms of possibility functions was identified. Every technique step was evaluated economically. The Mont-Carlo method was then conducted for model calibration. Sensitivity analysis was also carried out to identify the most sensitive factors. Model calibration indicated that landfill with power generation of landfill gas was economically the optimal technology at the present stage under the condition of more than 58% of C, H, O, N, S going to landfill. Whether or not to generate electricity was the most sensitive factor. If landfilling cost increases, MSW separation treatment was recommended by screening first followed with incinerating partially and composting partially with residue landfilling. The possibility of incineration model selection as the optimal technology was affected by the city scale. For big cities and metropolitans with large MSW generation, possibility for constructing large-scale incineration facilities increases, whereas, for middle and small cities, the effectiveness of incinerating waste decreases.
Ji, Yu; Tian, Yang; Ahnfelt, Mattias; Sui, Lili
2014-06-27
Multivalent pneumococcal vaccines were used worldwide to protect human beings from pneumococcal diseases. In order to eliminate the toxic organic solutions used in the traditional vaccine purification process, an alternative chromatographic process for Streptococcus pneumoniae serotype 23F capsular polysaccharide (CPS) was proposed in this study. The strategy of Design of Experiments (DoE) was introduced into the process development to solve the complicated design procedure. An initial process analysis was given to review the whole flowchart, identify the critical factors of chromatography through FMEA and chose the flowthrough mode due to the property of the feed. A resin screening study was then followed to select candidate resins. DoE was utilized to generate a resolution IV fractional factorial design to further compare candidates and narrow down the design space. After Capto Adhere was selected, the Box-Behnken DoE was executed to model the process and characterize all effects of factors on the responses. Finally, Monte Carlo simulation was used to optimize the process, test the chosen optimal conditions and define the control limit. The results of three scale-up runs at set points verified the DoE and simulation predictions. The final results were well in accordance with the EU pharmacopeia requirements: Protein/CPS (w/w) 1.08%; DNA/CPS (w/w) 0.61%; the phosphorus content 3.1%; the nitrogen 0.315% and the Methyl-pentose percentage 47.9%. Other tests of final pure CPS also met the pharmacopeia specifications. This alternative chromatographic purification process for pneumococcal vaccine without toxic organic solvents was successfully developed by the DoE approach and proved scalability, robustness and suitability for large scale manufacturing. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Andrews, A. E.; Hu, L.; Thoning, K. W.; Nehrkorn, T.; Mountain, M. E.; Jacobson, A. R.; Michalak, A.; Dlugokencky, E. J.; Sweeney, C.; Worthy, D. E. J.; Miller, J. B.; Fischer, M. L.; Biraud, S.; van der Velde, I. R.; Basu, S.; Tans, P. P.
2017-12-01
CarbonTracker-Lagrange (CT-L) is a new high-resolution regional inverse modeling system for improved estimation of North American CO2 fluxes. CT-L uses footprints from the Stochastic Time-Inverted Lagrangian Transport (STILT) model driven by high-resolution (10 to 30 km) meteorological fields from the Weather Research and Forecasting (WRF) model. We performed a suite of synthetic-data experiments to evaluate a variety of inversion configurations, including (1) solving for scaling factors to an a priori flux versus additive corrections, (2) solving for fluxes at 3-hrly resolution versus at coarser temporal resolution, (3) solving for fluxes at 1o × 1o resolution versus at large eco-regional scales. Our framework explicitly and objectively solves for the optimal solution with a full error covariance matrix with maximum likelihood estimation, thereby enabling rigorous uncertainty estimates for the derived fluxes. In the synthetic-data inversions, we find that solving for weekly scaling factors of a priori Net Ecosystem Exchange (NEE) at 1o × 1o resolution with optimization of diurnal cycles of CO2 fluxes yields faithful retrieval of the specified "true" fluxes as those solved at 3-hrly resolution. In contrast, a scheme that does not allow for optimization of diurnal cycles of CO2 fluxes suffered from larger aggregation errors. We then applied the optimal inversion setup to estimate North American fluxes for 2007-2015 using real atmospheric CO2 observations, multiple prior estimates of NEE, and multiple boundary values estimated from the NOAA's global Eulerian CarbonTracker (CarbonTracker) and from an empirical approach. Our derived North American land CO2 fluxes show larger seasonal amplitude than those estimated from the CarbonTracker, removing seasonal biases in the CarbonTracker's simulated CO2 mole fractions. 
Independent evaluations using in-situ CO2 eddy covariance flux measurements and independent aircraft profiles also suggest an improved estimation on North American CO2 fluxes from CT-L. Furthermore, our derived CO2 flux anomalies over North America corresponding to the 2012 North American drought and the 2015 El Niño are larger than derived by the CarbonTracker. They also indicate different responses of ecosystems to those anomalous climatic events.
Interference of psychological factors in difficult-to-control asthma.
Halimi, Laurence; Vachier, Isabelle; Varrin, Muriel; Godard, Philippe; Pithon, Gérard; Chanez, Pascal
2007-01-01
Most patients with asthma can be controlled with suitable medication, but 5-10% of them remain difficult to control despite optimal management. We investigated whether patients with difficult-to-control asthma (DCA) or controlled asthma (CA) differ with respect to psychological factors, such as general control beliefs on life events. DCA was defined as an absence of control despite optimal management. Recent control was measured using the Asthma Control Questionnaire. General control beliefs were investigated using a Locus of Control scale (LOC). Patients with DCA had a significantly higher external LOC as compared to patients with CA (P=0.01). In the DCA group, the hospital admission rate was highly significant in association with the external LOC (P=0.004) as compared to the internal LOC trend. This study showed that patients with DCA had different general control beliefs which might have hampered their management and interfered with their therapeutic adherence. The present findings could enhance management of DCA in a clinical setting.
Seo, Eun Hee; Kim, Tae Oh; Park, Min Jae; Joo, Hee Rin; Heo, Nae Yun; Park, Jongha; Park, Seung Ha; Yang, Sung Yeon; Moon, Young Soo
2012-03-01
Several factors influence bowel preparation quality. Recent studies have indicated that the time interval between bowel preparation and the start of colonoscopy is also important in determining bowel preparation quality. To evaluate the influence of the preparation-to-colonoscopy (PC) interval (the interval of time between the last polyethylene glycol dose ingestion and the start of the colonoscopy) on bowel preparation quality in the split-dose method for colonoscopy. Prospective observational study. University medical center. A total of 366 consecutive outpatients undergoing colonoscopy. Split-dose bowel preparation and colonoscopy. The quality of bowel preparation was assessed by using the Ottawa Bowel Preparation Scale according to the PC interval, and other factors that might influence bowel preparation quality were analyzed. Colonoscopies with a PC interval of 3 to 5 hours had the best bowel preparation quality score in the whole, right, mid, and rectosigmoid colon according to the Ottawa Bowel Preparation Scale. In multivariate analysis, the PC interval (odds ratio [OR] 1.85; 95% CI, 1.18-2.86), the amount of PEG ingested (OR 4.34; 95% CI, 1.08-16.66), and compliance with diet instructions (OR 2.22l 95% CI, 1.33-3.70) were significant contributors to satisfactory bowel preparation. Nonrandomized controlled, single-center trial. The optimal time interval between the last dose of the agent and the start of colonoscopy is one of the important factors to determine satisfactory bowel preparation quality in split-dose polyethylene glycol bowel preparation. Copyright © 2012 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.
Defined Serum-Free Medium for Bioreactor Culture of an Immortalized Human Erythroblast Cell Line.
Lee, Esmond; Lim, Zhong Ri; Chen, Hong-Yu; Yang, Bin Xia; Lam, Alan Tin-Lun; Chen, Allen Kuan-Liang; Sivalingam, Jaichandran; Reuveny, Shaul; Loh, Yuin-Han; Oh, Steve Kah-Weng
2018-04-01
Anticipated shortages in donated blood supply have prompted investigation of alternative approaches for in vitro production of red blood cells (RBCs), such as expansion of conditional immortalization erythroid progenitors. However, there is a bioprocessing challenge wherein factors promoting maximal cell expansion and growth-limiting inhibitory factors are yet to be investigated. The authors use an erythroblast cell line (ImEry) derived from immortalizing CD71+CD235a+ erythroblast from adult peripheral blood for optimization of expansion culture conditions. Design of experiments (DOE) is used in media formulation to explore relationships and interactive effects between factors which affect cell expansion. Our in-house optimized medium formulation produced significantly higher cell densities (3.62 ± 0.055) × 10 6 cells mL -1 , n = 3) compared to commercial formulations (2.07 ± 0.055) × 10 6 cells mL -1 , n = 3; at 209 h culture). Culture media costs per unit of blood is shown to have a 2.96-3.09 times cost reduction. As a proof of principle for scale up, ImEry are expanded in a half-liter stirred-bioreactor under controlled settings. Growth characteristics, metabolic, and molecular profile of the cells are evaluated. ImEry has identical O 2 binding capacity to adult erythroblasts. Amino acid supplementation results in further yield improvements. The study serves as a first step for scaling up erythroblast expansion in controlled bioreactors. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Convergent evolution of vascular optimization in kelp (Laminariales).
Drobnitch, Sarah Tepler; Jensen, Kaare H; Prentice, Paige; Pittermann, Jarmila
2015-10-07
Terrestrial plants and mammals, although separated by a great evolutionary distance, have each arrived at a highly conserved body plan in which universal allometric scaling relationships govern the anatomy of vascular networks and key functional metabolic traits. The universality of allometric scaling suggests that these phyla have each evolved an 'optimal' transport strategy that has been overwhelmingly adopted by extant species. To truly evaluate the dominance and universality of vascular optimization, however, it is critical to examine other, lesser-known, vascularized phyla. The brown algae (Phaeophyceae) are one such group--as distantly related to plants as mammals, they have convergently evolved a plant-like body plan and a specialized phloem-like transport network. To evaluate possible scaling and optimization in the kelp vascular system, we developed a model of optimized transport anatomy and tested it with measurements of the giant kelp, Macrocystis pyrifera, which is among the largest and most successful of macroalgae. We also evaluated three classical allometric relationships pertaining to plant vascular tissues with a diverse sampling of kelp species. Macrocystis pyrifera displays strong scaling relationships between all tested vascular parameters and agrees with our model; other species within the Laminariales display weak or inconsistent vascular allometries. The lack of universal scaling in the kelps and the presence of optimized transport anatomy in M. pyrifera raises important questions about the evolution of optimization and the possible competitive advantage conferred by optimized vascular systems to multicellular phyla. © 2015 The Author(s).
Cui, Xinchun; Niu, Yuying; Zheng, Xiangwei; Han, Yingshuai
2018-01-01
In this paper, a new color watermarking algorithm based on differential evolution is proposed. A color host image is first converted from RGB space to YIQ space, which is better suited to the human visual system. A three-level discrete wavelet transformation is then applied to the luminance component Y, generating four frequency sub-bands, and singular value decomposition is performed on these sub-bands. In the watermark embedding process, discrete wavelet transformation is applied to the watermark image after scrambling encryption. The new algorithm uses differential evolution with adaptive optimization to choose suitable scaling factors. Experimental results show that the proposed algorithm performs better in terms of invisibility and robustness.
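The scaling factor that the differential evolution search tunes governs the classic invisibility/robustness trade-off: a larger factor makes the watermark more robust but more visible. A minimal, self-contained sketch of additive coefficient-domain embedding follows; this is not the authors' code, and the YIQ conversion, three-level DWT, SVD, and DE search are all omitted for brevity.

```python
# Hypothetical sketch: additive watermark embedding on transform-domain
# coefficients, controlled by a scaling factor alpha.
# Extraction inverts the embedding given the original coefficients.

def embed(host_coeffs, mark, alpha):
    """Add a scaled watermark to host transform coefficients."""
    return [c + alpha * w for c, w in zip(host_coeffs, mark)]

def extract(marked_coeffs, host_coeffs, alpha):
    """Recover the watermark from marked coefficients (non-blind)."""
    return [(m - c) / alpha for m, c in zip(marked_coeffs, host_coeffs)]

host = [10.0, 12.5, 9.0, 11.0]       # e.g. singular values of a sub-band
mark = [1.0, -1.0, 1.0, -1.0]        # watermark bits mapped to +/-1
marked = embed(host, mark, alpha=0.05)
recovered = extract(marked, host, alpha=0.05)
# recovered approximates mark in the noiseless case
```

In the paper's setting, differential evolution would evaluate candidate alphas against a fitness combining imperceptibility (e.g. PSNR) and robustness (e.g. correlation of the extracted mark after attacks).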
Coherence in quantum estimation
NASA Astrophysics Data System (ADS)
Giorda, Paolo; Allegra, Michele
2018-01-01
The geometry of quantum states provides a unifying framework for estimation processes based on quantum probes, and it establishes the ultimate bounds of the achievable precision. We show a relation between the statistical distance between infinitesimally close quantum states and the second-order variation of the coherence of the optimal measurement basis with respect to the state of the probe. In quantum phase estimation protocols, this leads us to propose coherence as the relevant resource that one has to engineer and control to optimize the estimation precision. Furthermore, the main object of the theory, the symmetric logarithmic derivative, in many cases allows one to identify a proper factorization of the whole Hilbert space into two subsystems. The factorization allows one to discuss the role of coherence versus correlations in estimation protocols; to show how certain estimation processes can be completely or effectively described within a single-qubit subsystem; and to derive lower bounds for the scaling of the estimation precision with the number of probes used. We illustrate how the framework works for both noiseless and noisy estimation procedures, in particular those based on multi-qubit GHZ states. Finally, we succinctly analyze estimation protocols based on zero-temperature critical behavior. We identify the coherence that is at the heart of their efficiency, and we show how it exhibits the non-analyticities and scaling behavior characteristic of a large class of quantum phase transitions.
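For context, the precision bound referenced here is the standard quantum Cramér-Rao bound built on the symmetric logarithmic derivative (SLD). The textbook definitions, standard material and not specific to this paper, are:

```latex
% SLD L_\theta defined implicitly by
\partial_\theta \rho_\theta
  = \tfrac{1}{2}\left(L_\theta \rho_\theta + \rho_\theta L_\theta\right)

% Quantum Fisher information and Cramér-Rao bound for \nu repetitions
F_Q(\theta) = \mathrm{Tr}\!\left[\rho_\theta L_\theta^2\right],
\qquad
\mathrm{Var}(\hat\theta) \;\ge\; \frac{1}{\nu\, F_Q(\theta)}

% Bures statistical distance between infinitesimally close states
ds_B^2 = \tfrac{1}{4}\, F_Q(\theta)\, d\theta^2
```

The last line is the "statistical distance" the abstract relates to the coherence of the optimal measurement basis.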
NASA Astrophysics Data System (ADS)
Xie, Shilin; Lu, Fei; Cao, Lei; Zhou, Weiqi; Ouyang, Zhiyun
2016-07-01
Understanding the factors that influence the characteristics of avian communities using urban parks at both the patch and landscape level is important to focus management effort towards enhancing bird diversity. Here, we investigated this issue during the breeding season across urban parks in Beijing, China, using high-resolution satellite imagery. Fifty-two bird species were recorded across 29 parks. Analysis of residence type of birds showed that passengers were the most prevalent (37%), indicating that Beijing is a major node in the East Asian-Australasian Flyway. Park size was crucial for total species abundance, but foliage height diversity was the most important factor influencing avian species diversity. Thus, optimizing the configuration of vertical vegetation structure in certain park areas is critical for supporting avian communities in urban parks. Human visitation also showed negative impact on species diversity. At the landscape level, the percentage of artificial surface and largest patch index of woodland in the buffer region significantly affected total species richness, with insectivores and granivores being more sensitive to the landscape pattern of the buffer region. In conclusion, urban birds in Beijing are influenced by various multi-scale factors; however, these effects vary with different feeding types.
What Factors Influence Well-being of Students on Performing Small Group Discussion?
NASA Astrophysics Data System (ADS)
Wulanyani, N. M. S.; Vembriati, N.
2018-01-01
The Faculty of Medicine of Udayana University generally applies Small Group Discussion (SGD) in its learning process. If the group succeeds in solving a problem, each member of the group succeeds individually. However, success is also determined by each individual's level of psychological well-being. When students have a high level of wellbeing, they feel comfortable in small group discussion and teamwork becomes effective. Therefore, research is needed into how psychological factors, such as traits, needs, cognition, and social intelligence, influence students' wellbeing in performing SGD. This research was also prompted by several cases of students who prefer individual learning and attend SGD merely to fulfill the attendance requirement. If students have good wellbeing, they will engage in the SGD process optimally. The subjects of this research were 100 students of the Faculty of Medicine of Udayana University. This survey research used psychological test assessment, a psychological well-being scale, and a social intelligence scale to gather data that were analyzed quantitatively. The results showed that all trait aspects, together with the 'need for rules and supervision' aspect, affect social intelligence. Furthermore, the social intelligence factor, together with cognitive factors, influences the wellbeing of the students in the SGD process.
NASA Astrophysics Data System (ADS)
Chen, J.; Chen, W.; Dou, A.; Li, W.; Sun, Y.
2018-04-01
A new information extraction method for damaged buildings, rooted in an optimal feature space, is put forward on the basis of the traditional object-oriented method. In this new method, the ESP (estimation of scale parameter) tool is used to optimize image segmentation. The distance matrix and minimum separation distance of all classes of surface features are then calculated through sample selection to find the optimal feature space, which is finally applied to extract damaged buildings from post-earthquake imagery. The overall extraction accuracy reaches 83.1%, with a kappa coefficient of 0.813. Compared with the traditional object-oriented method, the new method greatly improves extraction accuracy and efficiency, and has good potential for wider use in information extraction of damaged buildings. In addition, the new method can be applied to images of damaged buildings at different resolutions, and the optimal observation scale can then be sought through accuracy evaluation. The optimal observation scale for damaged buildings is estimated to be between 1 m and 1.2 m, which provides a reference for future information extraction of damaged buildings.
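The "minimum separation distance" criterion can be illustrated with a toy sketch: for each candidate feature subset, compute the class-mean feature vectors and keep the subset whose closest pair of classes is farthest apart. This is a hypothetical simplification (the paper works on object features derived from segmented imagery; the sample data and Euclidean distance here are illustrative only):

```python
import itertools
import math

def min_class_separation(samples, features):
    """Smallest pairwise Euclidean distance between class means,
    restricted to the chosen feature indices."""
    means = {}
    for cls, vecs in samples.items():
        k = len(vecs)
        means[cls] = [sum(v[f] for v in vecs) / k for f in features]
    pairs = itertools.combinations(means.values(), 2)
    return min(math.dist(a, b) for a, b in pairs)

def best_feature_space(samples, n_features, size):
    """Exhaustively pick the feature subset maximizing class separation."""
    subsets = itertools.combinations(range(n_features), size)
    return max(subsets, key=lambda fs: min_class_separation(samples, fs))

# Hypothetical training samples: class -> list of 3-feature vectors
samples = {
    "intact":  [[0.9, 0.1, 0.5], [0.8, 0.2, 0.5]],
    "damaged": [[0.2, 0.8, 0.5], [0.3, 0.9, 0.5]],
    "shadow":  [[0.1, 0.1, 0.5], [0.2, 0.2, 0.5]],
}
best = best_feature_space(samples, n_features=3, size=2)
# feature 2 is constant (uninformative), so the chosen pair is (0, 1)
```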
Akbari, Fariba; Eskandani, Morteza; Khosroushahi, Ahmad Yari
2014-11-01
Microalgae have long been used in the food, cosmetic, and biofuel industries as a natural source of lipids, vitamins, pigments, and antioxidants. Green microalgae, as potent photobioreactors, can be considered an economical expression system for large-scale production of recombinant therapeutic proteins because of their low production and scale-up costs, owing to inexpensive medium requirements, fast growth rates, and ease of manipulation. These microalgae possess all the benefits of eukaryotic expression systems, including the post-translational modifications required for proper folding and stability of active proteins. Regarding recombinant protein production, this review compares different expression systems with green microalgae such as Dunaliella, examining the challenges and benefits of nuclear versus chloroplast transformation, the relevant selection markers and reporter genes, and the crucial factors and strategies for increasing foreign protein expression in microalgal transformants. Important factors for increasing protein yield in microalgal transformants are discussed, including transformation-associated genotypic modifications, endogenous regulatory factors, promoters, codon optimization, enhancer elements, and milking of recombinant protein.
Investigation on maternal physiological and psychological factors of cheilopalatognathus.
Ma, J; Zhao, W; Ma, R M; Li, X J; Wen, Z H; Liu, X F; Hu, W D; Zhang, C B
2013-01-01
A case-control study of mothers of children with cheilopalatognathus was conducted to investigate the maternal physiological and psychological factors associated with its occurrence. One hundred ten mothers of cheilopalatognathus children scheduled for one-stage surgery were selected as the study group, and 110 mothers of normal children served as a normal control group over the same period. The Trait Anxiety Inventory (T-AI), Life Events Scale (LES), Trait Coping Style Questionnaire (TCSQ), Type C Behavior Scale (CBS), adult Eysenck Personality Questionnaire (EPQ), and a homemade general questionnaire were employed for the investigation. Compared with the control group, the study group scored higher on negative event tension value and on anxiety and depressive factors (p < 0.05), and lower on positive event tension value and on intellect, optimism, and social support factors (p < 0.05). Regression analysis identified five physiological factors: education, change in body weight during pregnancy, intake of milk and beans, intake of healthcare products, and folic acid supplementation; and four psychological factors: positive event stimulation, negative event stimulation, amount of social support, and introverted versus extroverted personality. The results suggest that pregnant women's physiological and psychological factors can change the incidence of cheilopalatognathus, which may guide healthcare during pregnancy to prevent its occurrence.
Association Between Short Sleep Duration and Risk Behavior Factors in Middle School Students.
Owens, Judith; Wang, Guanghai; Lewin, Daniel; Skora, Elizabeth; Baylor, Allison
2017-01-01
To examine the association between self-reported sleep duration (SD) and peer/individual factors predictive of risky behaviors (risk behavior factors) in a large, socioeconomically diverse school-based sample of early adolescents. Survey data collected from 10,718 and 11,240 eighth-grade students in 2010 and 2012, respectively, were analyzed. Self-reported school night SD was grouped as ≤4 hours, 5 hours, 6 hours, 7 hours, 8 hours, 9 hours, and ≥10 hours. Scores on 10 peer/individual risk behavior factor scales were dichotomized according to national eighth-grade cut points. The percentage of students reporting an "optimal" SD of 9 hours was 14.8% and 15.6% in 2010 and 2012, respectively; 45.6% and 46.1% reported <7 hours. Adjusted for covariates of gender, race, and SES, multilevel logistic regression results showed that odds ratios (ORs) for 9 of 10 risk factor scales increased with SD <7 hours, with a dose-response effect for each hour of less sleep compared to an SD of 9 hours. For example, ORs for students sleeping <7 hours ranged from 1.3 (early initiation of antisocial behavior) to 1.8 (early initiation of drug use). The risk factor scale ORs for <5 hours SD ranged from 3.0 (sensation seeking) to 6.4 (gang involvement). Middle school students are at high risk of insufficient sleep; in particular, an SD <7 hours is associated with increased risk behavior factors. © Sleep Research Society 2016. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.
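As background for the reported effect sizes: an odds ratio compares the odds of an outcome between two groups. The paper's ORs are covariate-adjusted estimates from multilevel logistic regression; the unadjusted 2×2-table version, with entirely made-up counts, looks like this:

```python
def odds_ratio(exposed_cases, exposed_noncases,
               unexposed_cases, unexposed_noncases):
    """Unadjusted odds ratio from a 2x2 contingency table:
    (a*d) / (b*c) with the usual cell labeling."""
    return (exposed_cases * unexposed_noncases) / (
        exposed_noncases * unexposed_cases)

# Hypothetical counts: short sleepers (<7 h) vs. 9 h sleepers
# scoring above a risk-factor cut point (numbers invented for illustration).
or_short_sleep = odds_ratio(180, 420, 60, 540)
# (180 * 540) / (420 * 60) ~= 3.86
```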
NASA Astrophysics Data System (ADS)
Brown, G. J.; Haugan, H. J.; Mahalingam, K.; Grazulis, L.; Elhamri, S.
2015-01-01
The objective of this work is to establish molecular beam epitaxy (MBE) growth processes that can produce high quality InAs/GaInSb superlattice (SL) materials specifically tailored for very long wavelength infrared (VLWIR) detection. To accomplish this goal, several series of MBE growth optimization studies, using a SL structure of 47.0 Å InAs/21.5 Å Ga0.75In0.25Sb, were performed to refine the MBE growth process and optimize growth parameters. Experimental results demonstrated that our "slow" MBE growth process can consistently produce an energy gap near 50 meV. This is an important factor in narrow band gap SLs. However, there are other growth factors that also impact the electrical and optical properties of the SL materials. The SL layers are particularly sensitive to the anion incorporation condition formed during the surface reconstruction process. Since antisite defects are potentially responsible for the inherent residual carrier concentrations and short carrier lifetimes, the optimization of anion incorporation conditions, by manipulating anion fluxes, anion species, and deposition temperature, was systematically studied. Optimization results are reported in the context of comparative studies on the influence of the growth temperature on the crystal structural quality and surface roughness performed under a designed set of deposition conditions. The optimized SL samples produced an overall strong photoresponse signal with a relatively sharp band edge that is essential for developing VLWIR detectors. A quantitative analysis of the lattice strain, performed at the atomic scale by aberration corrected transmission electron microscopy, provided valuable information about the strain distribution at the GaInSb-on-InAs interface and in the InAs layers, which was important for optimizing the anion conditions.
Kook, Seung Hee; Varni, James W
2008-06-02
The Pediatric Quality of Life Inventory (PedsQL) is a child self-report and parent proxy-report instrument designed to assess health-related quality of life (HRQOL) in healthy and ill children and adolescents. It has been translated into over 70 international languages and proposed as a valid and reliable pediatric HRQOL measure. This study aimed to assess the psychometric properties of the Korean translation of the PedsQL 4.0 Generic Core Scales. Following the guidelines for linguistic validation, the original US English scales were translated into Korean and cognitive interviews were administered. The field testing responses of 1425 school children and adolescents and 1431 parents to the Korean version of PedsQL 4.0 Generic Core Scales were analyzed utilizing confirmatory factor analysis and the Rasch model. Consistent with studies using the US English instrument and other translation studies, score distributions were skewed toward higher HRQOL in a predominantly healthy population. Confirmatory factor analysis supported a four-factor and a second order-factor model. The analysis using the Rasch model showed that person reliabilities are low, item reliabilities are high, and the majority of items fit the model's expectation. The Rasch rating scale diagnostics showed that PedsQL 4.0 Generic Core Scales in general have the optimal number of response categories, but category 4 (almost always a problem) is somewhat problematic for the healthy school sample. The agreements between child self-report and parent proxy-report were moderate. The results demonstrate the feasibility, validity, item reliability, item fit, and agreement between child self-report and parent proxy-report of the Korean version of PedsQL 4.0 Generic Core Scales for school population health research in Korea. 
However, the utilization of the Korean version of the PedsQL 4.0 Generic Core Scales for healthy school populations needs to consider low person reliability, ceiling effects and cultural differences, and further validation studies on Korean clinical samples are required.
"Optimal" Size and Schooling: A Relative Concept.
ERIC Educational Resources Information Center
Swanson, Austin D.
Issues in economies of scale and optimal school size are discussed in this paper, which seeks to explain the curvilinear nature of the educational cost curve as a function of "transaction costs" and to establish "optimal size" as a relative concept. Based on the argument that educational consolidation has facilitated diseconomies of scale, the…
Testing and Validating Gadget2 for GPUs
NASA Astrophysics Data System (ADS)
Wibking, Benjamin; Holley-Bockelmann, K.; Berlind, A. A.
2013-01-01
We are currently upgrading a version of Gadget2 (Springel et al., 2005) that is optimized for NVIDIA's CUDA GPU architecture (Frigaard, unpublished) to work with the latest libraries and graphics cards. Preliminary tests of its performance indicate a ~40x speedup in the particle force tree approximation calculation, with overall speedup of 5-10x for cosmological simulations run with GPUs compared to running on the same CPU cores without GPU acceleration. We believe this speedup can be reasonably increased by an additional factor of two with further optimization, including overlap of computation on CPU and GPU. Tests of single-precision GPU numerical fidelity currently indicate accuracy of the mass function and the spectral power density to within a few percent of extended-precision CPU results with the unmodified form of Gadget. Additionally, we plan to test and optimize the GPU code for Millennium-scale "grand challenge" simulations of >10^9 particles, a scale that has been previously untested with this code, with the aid of the NSF XSEDE flagship GPU-based supercomputing cluster codenamed "Keeneland." Current work involves additional validation of numerical results, extending the numerical precision of the GPU calculations to double precision, and evaluating performance/accuracy tradeoffs. We believe that this project, if successful, will yield substantial computational performance benefits to the N-body research community as the next generation of GPU supercomputing resources becomes available, both increasing the electrical power efficiency of ever-larger computations (making simulations possible a decade from now at scales and resolutions unavailable today) and accelerating the pace of research in the field.
Local and Systemic Factors and Implantation: what is the Evidence?
Fox, Chelsea; Morin, Scott; Jeong, Jae-Wook; Scott, Richard T.; Lessey, Bruce A
2016-01-01
Significant progress has been made in the understanding of embryonic competence and endometrial receptivity since the inception of Assisted Reproductive Technologies (ART). The endometrium is a highly dynamic tissue that plays a crucial role in the establishment and maintenance of normal pregnancy. In response to steroid sex hormones, the endometrium undergoes marked changes during the menstrual cycle that are critical for acceptance of the nascent embryo. There is also a wide body of literature on systemic factors that impact ART outcomes. Patient prognosis is impacted by an array of factors that tip the scales in her favor or against success. Recognizing the local and systemic factors will allow clinicians to better understand and optimize the maternal environment at the time of implantation. This review will address the current literature on endometrial and systemic factors related to impaired implantation and highlight recent advances in this area of reproductive medicine. PMID:26945096
Aparicio, Juan Daniel; Raimondo, Enzo Emanuel; Gil, Raúl Andrés; Benimeli, Claudia Susana; Polti, Marta Alejandra
2018-01-15
The objective of the present work was to establish optimal biological and physicochemical parameters for simultaneously removing lindane and Cr(VI), at high and/or low pollutant concentrations, from soil with an actinobacteria consortium formed by Streptomyces sp. M7, MC1, A5, and Amycolatopsis tucumanensis AB0. The final aim was to treat real soils from northwestern Argentina contaminated with Cr(VI) and/or lindane, employing the optimal biological and physicochemical conditions. After determining the optimal inoculum concentration (2 g kg^-1), an experimental design model with four factors (temperature, moisture, and initial concentrations of Cr(VI) and lindane) was employed to predict system behavior during the bioremediation process. According to the response optimizer, the optimal moisture level was 30% for all bioremediation processes. However, the optimal temperature differed by situation: for low initial concentrations of both pollutants it was 25°C; for low initial Cr(VI) and high initial lindane concentrations it was 30°C; and for high initial Cr(VI) concentrations it was 35°C. To confirm the model adequacy and the validity of the optimization procedure, experiments were performed on six real contaminated soil samples. The defined actinobacteria consortium reduced contaminant concentrations in five of the six samples, working at laboratory scale and employing the optimal conditions obtained through the factorial design. Copyright © 2017 Elsevier B.V. All rights reserved.
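The "response optimizer" step of a factorial design can be sketched as a search over factor levels of a fitted response model. Everything below is a hypothetical illustration: the quadratic response surface and its coefficients are invented, not the paper's fitted model.

```python
import itertools

# Hypothetical fitted response surface: predicted % pollutant removal as a
# function of temperature (deg C) and moisture (%), with quadratic and
# interaction terms, as a factorial-design regression might produce.
def predicted_removal(temp, moisture):
    return (-120.0 + 6.0 * temp + 5.0 * moisture
            - 0.1 * temp**2 - 0.08 * moisture**2
            - 0.01 * temp * moisture)

# Candidate factor levels, echoing the ranges discussed in the abstract.
temps = [25, 30, 35]
moistures = [20, 25, 30]

# Grid search over the full-factorial level combinations.
best = max(itertools.product(temps, moistures),
           key=lambda tm: predicted_removal(*tm))
# With these invented coefficients, the optimum lands at 30 deg C, 30% moisture.
```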
Validation and diagnostic utility of the dementia rating scale in a mixed dementia population.
McCulloch, Katie; Collins, Robert L; Maestas, Kacey L; LeMaire, Ashley W; Pacheco, Vitor
2014-01-01
The Dementia Severity Rating Scale (DSRS), a previously validated caregiver-based measure assessing dementia severity, was recently revised to improve clarity. Our study aims included: (1) identifying the DSRS factor structure, (2) examining the relation between neuropsychological measures, the Mini-Mental State Examination, and clinical diagnoses with the DSRS, and (3) determining the clinical utility of the DSRS in a mixed clinical sample. A total of 270 veterans were referred to a cognitive disorders clinic at a VA medical center and completed neuropsychological, affective, and cognitive screening measures. Caregivers completed the DSRS. Principal components analysis identified a 2-factor solution. After controlling for age and education, memory and language were related to the Cognitive factor, whereas attention, processing speed, visuospatial processing, and executive functioning were related to both Cognitive and Self-Care factors. Neither factor correlated with depression. The total DSRS score was able to differentiate patients by Mini-Mental State Examination scores and diagnoses of mild cognitive impairment and dementia (mixed vascular Alzheimer, vascular dementia, and Alzheimer disease). A cut-score >15 was optimal for detecting dementia in a mixed clinical sample (sensitivity=0.41, specificity=0.79), with a posttest probability of 74%. This study suggests that the DSRS improves detection of dementia and requires minimal effort to implement.
Naeem, Naghma; Muijtjens, Arno
2015-04-01
The psychological construct of emotional intelligence (EI), its theoretical models, measurement instruments, and applications have been the subject of several research studies in health professions education. The objective of the current study was to investigate the factorial validity and reliability of a bilingual version of the Schutte Self Report Emotional Intelligence Scale (SSREIS) in an undergraduate Arab medical student population. The study was conducted during April-May 2012. A cross-sectional survey design was employed. A sample (n = 467) was obtained from undergraduate medical students belonging to the male and female medical colleges of King Saud University, Riyadh, Saudi Arabia. Exploratory and confirmatory factor analysis was performed using SPSS 16.0 and AMOS 4.0 statistical software to determine the factor structure. Reliability was determined using Cronbach's alpha statistics. The results obtained using an undergraduate Arab medical student sample supported a multidimensional, three-factor structure of the SSREIS. The three factors are Optimism, Awareness-of-Emotions, and Use-of-Emotions. The reliability (Cronbach's alpha) for the three subscales was 0.76, 0.72, and 0.55, respectively. Emotional intelligence is a multifactorial (three-factor) construct. The bilingual version of the SSREIS is a valid and reliable measure of trait emotional intelligence in an undergraduate Arab medical student population.
NASA Astrophysics Data System (ADS)
Kwon, Youngsang
As evidence of global warming continues to increase, being able to predict the relationship between forest growth rate and climate factors will be vital to maintaining the sustainability and productivity of forests. Comprehensive analyses of forest primary production across the eastern US were conducted using remotely sensed MODIS and field-based FIA datasets. This dissertation primarily explored spatial patterns of gross and net carbon uptake in the eastern USA, and addressed three objectives. 1) Examine the use of pixel- and plot-scale screening variables to validate MODIS GPP predictions with Forest Inventory and Analysis (FIA) NPP measures. 2) Assess the net primary production (NPP) from MODIS and FIA at increasing levels of spatial aggregation using a hexagonal tiling system. 3) Assess the carbon use efficiency (CUE) calculated using a direct ratio of MODIS NPP to MODIS GPP and a standardized ratio of FIA NPP to MODIS GPP. The first objective was analyzed using a total of 54,969 MODIS pixels and co-located FIA plots to validate MODIS GPP estimates. Eight screening variables (SVs) were used to test six hypotheses about the conditions under which MODIS GPP would be most strongly validated. SVs were assessed in terms of the tradeoff between improved relations and a reduced number of samples. MODIS seasonal variation and FIA tree density were the two most efficient SVs, followed by basic quality checks for each data set. The sequential application of SVs provided an efficient dataset of 17,090 co-located MODIS pixels and FIA plots, raising the Pearson correlation coefficient from 0.01 for the complete dataset of 54,969 plots to 0.48 for the screened subset of 17,090 plots. The second objective was addressed by aggregating data over increasing spatial extents so as not to lose plot- and pixel-level information. These data were then analyzed to determine the optimal scale with which to represent the spatial pattern of NPP. The results suggested an optimal scale of 390 km².
At that scale, MODIS and FIA were most strongly correlated while maximizing the number of observations. The maps conveyed both local-scale spatial structure from FIA and broad-scale climatic trends from MODIS. The third objective examined whether carbon use efficiency (CUE) was constant or variable in relation to forest types and to geographic and climatic variables. The results indicated that while CUE exhibited no clear pattern by forest type, it varied with other environmental variables. CUE was most strongly related to the climatic factor of precipitation, followed by temperature. More complex and weaker relationships were found for the geographic factors of latitude and altitude, as they reflect a combination of phenomenological driving forces. The results of the three objectives will help identify the factors that control carbon cycles and quantify forest productivity, improving our knowledge of how forest primary productivity may change under ongoing climate change.
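The carbon use efficiency examined in the third objective is simply the ratio of net to gross primary production. A minimal sketch, with hypothetical values (the inputs below are illustrative, not the dissertation's data):

```python
def carbon_use_efficiency(npp, gpp):
    """CUE = NPP / GPP: the dimensionless fraction of gross carbon
    uptake retained as net primary production."""
    if gpp <= 0:
        raise ValueError("GPP must be positive")
    return npp / gpp

# Hypothetical annual totals in g C m^-2 yr^-1
cue = carbon_use_efficiency(npp=600.0, gpp=1250.0)
# 600 / 1250 = 0.48
```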
Polish Adaptation of the Psychache Scale by Ronald Holden and Co-workers.
Chodkiewicz, Jan; Miniszewska, Joanna; Strzelczyk, Dorota; Gąsior, Krzysztof
2017-04-30
The study was aimed at producing a Polish adaptation of the Psychache Scale by Ronald Holden and co-workers. The scale is a self-assessment method comprising 13 statements, designed to assess subjectively experienced psychological pain. 300 persons were examined - undergraduates and postgraduates of the University of Lodz and the Technical University of Lodz - 185 women and 115 men. In addition, 150 alcohol-addicted men, 50 co-addicted women, and 50 patients with a major depressive episode (MDE) were examined. The Polish version of the scale is a reliable and valid tool. Exploratory and confirmatory factor analysis confirmed the existence of one factor. Internal consistency, assessed with Cronbach's alpha, equalled 0.93. The method displays positive, statistically significant relationships to levels of depression, hopelessness, anxiety, and anhedonia, and negative relations to levels of optimism, life satisfaction, and positive orientation. Alcohol-addicted men with currently diagnosed suicidal thoughts were characterised by a significantly higher level of psychological pain than alcoholics without such thoughts. A higher level of psychache was also reported in people with depression who had a history of attempted suicide compared with those who had not. The results of the adaptation work support recommending the Psychache Scale for scientific research and for use in therapeutic practice.
Morgenstern, Lewis B.; Sánchez, Brisa N.; Skolarus, Lesli E.; Garcia, Nelda; Risser, Jan M.H.; Wing, Jeffrey J.; Smith, Melinda A.; Zahuranec, Darin B.; Lisabeth, Lynda D.
2011-01-01
Background and Purpose We sought to describe the association of spirituality, optimism, fatalism and depressive symptoms with initial stroke severity, stroke recurrence and post-stroke mortality. Methods Stroke cases June 2004–December 2008 were ascertained in Nueces County, Texas. Patients without aphasia were queried on their recall of depressive symptoms, fatalism, optimism, and non-organizational spirituality before stroke using validated scales. The association between scales and stroke outcomes was studied using multiple linear regression with log-transformed NIHSS and Cox proportional hazards regression for recurrence and mortality. Results 669 patients participated, 48.7% were women. In fully adjusted models, an increase in fatalism from the first to third quartile was associated with all-cause mortality (HR=1.41, 95%CI: 1.06, 1.88), marginally associated with risk of recurrence (HR=1.35, 95%CI: 0.97, 1.88), but not stroke severity. Similarly, an increase in depressive symptoms was associated with increased mortality (HR=1.32, 95%CI: 1.02, 1.72), marginally associated with stroke recurrence (HR=1.22, CI: 0.93, 1.62), and with a 9.0% increase in stroke severity (95%CI: 0.01, 18.0). Depressive symptoms altered the fatalism-mortality association such that the association of fatalism and mortality was more pronounced for patients reporting no depressive symptoms. Neither spirituality nor optimism conferred a significant effect on stroke severity, recurrence or mortality. Conclusions Among patients who have already had a stroke, self-described pre-stroke depressive symptoms and fatalism, but not optimism or spirituality, are associated with increased risk of stroke recurrence and mortality. Unconventional risk factors may explain some of the variability in stroke outcomes observed in populations, and may be novel targets for intervention. PMID:21940963
ERIC Educational Resources Information Center
Leite, Walter L.; Huang, I-Chan; Marcoulides, George A.
2008-01-01
This article presents the use of an ant colony optimization (ACO) algorithm for the development of short forms of scales. An example 22-item short form is developed for the Diabetes-39 scale, a quality-of-life scale for diabetes patients, using a sample of 265 diabetes patients. A simulation study comparing the performance of the ACO algorithm and…
Resource allocation for epidemic control in metapopulations.
Ndeffo Mbah, Martial L; Gilligan, Christopher A
2011-01-01
Deployment of limited resources is an issue of major importance for decision-making in crisis events. This is especially true for large-scale outbreaks of infectious diseases. Little is known when it comes to identifying the most efficient way of deploying scarce resources for control when disease outbreaks occur in different but interconnected regions. The policy maker is frequently faced with the challenge of optimizing efficiency (e.g. minimizing the burden of infection) while accounting for social equity (e.g. equal opportunity for infected individuals to access treatment). For a large range of diseases described by a simple SIRS model, we consider strategies that should be used to minimize the discounted number of infected individuals during the course of an epidemic. We show that when faced with the dilemma of choosing between socially equitable and purely efficient strategies, the choice of the control strategy should be informed by key measurable epidemiological factors such as the basic reproductive number and the efficiency of the treatment measure. Our model provides new insights for policy makers in the optimal deployment of limited resources for control in the event of epidemic outbreaks at the landscape scale.
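The efficiency-versus-equity trade-off described above can be sketched with a toy two-patch SIRS model. This is a minimal sketch, not the authors' model: every rate constant, the coupling strength, and the treatment term below are illustrative assumptions.

```python
import math

def simulate(alloc, beta=(0.4, 0.3), gamma=0.1, xi=0.05, kappa=0.2,
             budget=0.05, coupling=0.01, discount=0.02, dt=0.1, T=300.0):
    """Discounted cumulative infection burden over [0, T].

    alloc is the fraction of the treatment budget given to patch 0;
    treatment removes infecteds at rate kappa * allocated budget,
    capped so that no more than I[p] can be treated per unit time.
    """
    S, I, R = [0.99, 0.99], [0.01, 0.01], [0.0, 0.0]
    burden, t = 0.0, 0.0
    shares = (alloc, 1.0 - alloc)
    while t < T:
        dS, dI, dR = [0.0, 0.0], [0.0, 0.0], [0.0, 0.0]
        for p in (0, 1):
            q = 1 - p
            treat = min(kappa * shares[p] * budget, I[p])   # resource-limited
            force = beta[p] * I[p] + coupling * I[q]        # within- and between-patch
            dS[p] = -force * S[p] + xi * R[p]
            dI[p] = force * S[p] - gamma * I[p] - treat
            dR[p] = gamma * I[p] + treat - xi * R[p]
        for p in (0, 1):  # forward-Euler update and discounted burden accumulation
            S[p] += dt * dS[p]; I[p] += dt * dI[p]; R[p] += dt * dR[p]
            burden += math.exp(-discount * t) * I[p] * dt
        t += dt
    return burden

equitable = simulate(0.5)       # socially equitable 50/50 split
preferential = simulate(1.0)    # all resources to patch 0
```

With these particular (assumed) parameters the budget is enough to suppress both patches when split equitably, whereas concentrating it on one patch lets an outbreak run in the other, so the equitable split carries the lower discounted burden; under other parameterizations the ranking can reverse, which is the paper's point.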
NASA Astrophysics Data System (ADS)
Arslan, Hakan; Algül, Öztekin
2008-06-01
The room temperature attenuated total reflection Fourier transform infrared spectrum of 2-(4-methoxyphenyl)-1H-benzo[d]imidazole has been recorded with a diamond/ZnSe prism. The conformational behaviour, structural stability of the optimized geometry, and frequency and intensity of the vibrational bands of the title compound were investigated by ab initio calculations with the 6-311G** basis set at the HF, B3LYP, BLYP, B3PW91 and mPW1PW91 levels. The harmonic vibrational frequencies were calculated, and the scaled values were compared with the experimental IR spectrum. The observed and calculated frequencies are found to be in good agreement. The theoretical vibrational spectra of the title compound were interpreted by means of potential energy distributions using the VEDA 4 program. Furthermore, the optimal uniform scaling factors calculated for the title compound are 0.9120, 0.9596, 0.9660, 0.9699, and 0.9993 for the HF, mPW1PW91, B3PW91, B3LYP and BLYP methods, respectively.
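The uniform-scaling step admits a compact illustration. Under a least-squares criterion, the optimal factor c minimizing the sum of squared deviations between scaled computed frequencies and observed ones has the closed form c = Σ ω_i ν_i / Σ ω_i². The frequencies below are invented placeholders, not values from this study.

```python
def optimal_scale_factor(calc, obs):
    """Least-squares optimal c minimizing sum((c*calc_i - obs_i)^2):
    c = sum(calc_i * obs_i) / sum(calc_i^2)."""
    num = sum(c * o for c, o in zip(calc, obs))
    den = sum(c * c for c in calc)
    return num / den

# Hypothetical harmonic frequencies (cm^-1) and observed fundamentals.
calc = [3200.0, 1650.0, 1180.0, 750.0]
obs = [3060.0, 1600.0, 1140.0, 730.0]
c = optimal_scale_factor(calc, obs)
scaled = [c * f for f in calc]
```

Because harmonic values systematically overestimate observed fundamentals, the fitted factor comes out slightly below one, as with the method-specific factors quoted in the abstract.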
Task-driven dictionary learning.
Mairal, Julien; Bach, Francis; Ponce, Jean
2012-04-01
Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience, and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations.
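As a point of reference for the sparse-coding sub-problem mentioned above, here is a generic textbook ISTA routine for the lasso. It is not the authors' task-driven algorithm, and the dictionary and signal are synthetic.

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=500):
    """Minimize 0.5*||x - D a||^2 + lam*||a||_1 over the sparse code a."""
    L = np.linalg.norm(D, ord=2) ** 2       # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        z = a - grad / L                    # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)              # unit-norm dictionary atoms
a_true = np.zeros(50)
a_true[[3, 17]] = [1.0, -0.5]               # a 2-sparse ground-truth code
x = D @ a_true
a_hat = ista(D, x)
```

Task-driven dictionary learning, as described in the abstract, wraps a supervised loss around this inner sparse-coding step and optimizes the dictionary for the downstream task rather than for reconstruction alone.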
Bryan, Craig J; Kanzler, Kathryn E; Grieser, Emily; Martinez, Annette; Allison, Sybil; McGeary, Donald
2017-03-01
Research in psychiatric outpatient and inpatient populations supports the utility of the Suicide Cognitions Scale (SCS) as an indicator of current and future risk for suicidal thoughts and behaviors. Designed to assess suicide-specific thoughts and beliefs, the SCS has yet to be evaluated among chronic pain patients, a group with elevated risk for suicide. The purpose of the present study was to develop and test a shortened version of the SCS (the SCS-S). A total of 228 chronic pain patients completed a battery of self-report surveys before or after a scheduled appointment at one of three outpatient medical clinics (pain medicine, orofacial pain, and clinical health psychology). Analyses included confirmatory factor analysis, multivariate regression, and graded item response theory models. Results of the CFAs suggested that a 3-factor solution was optimal. A shortened 9-item scale was identified based on the results of graded item response theory model analyses. Correlation and multivariate analyses supported the construct and incremental validity of the SCS-S. Results support the reliability and validity of the SCS-S among chronic pain patients, and suggest the scale may be a useful method for identifying high-risk patients in medical settings. © 2016 World Institute of Pain.
Polansky, Leo; Douglas-Hamilton, Iain; Wittemyer, George
2013-01-01
Adaptive movement behaviors allow individuals to respond to fluctuations in resource quality and distribution in order to maintain fitness. Classically, studies of the interaction between ecological conditions and movement behavior have focused on travel distance, velocity, home range size or patch occupancy time as the salient metrics of behavior. More recently, driven by the emergence of very regular, high-frequency data, the importance of interpreting the autocorrelation structure of movement as a behavioral metric has become apparent. Studying movement of a free-ranging African savannah elephant population, we evaluated how two movement metrics, diel displacement (DD) and movement predictability (MP, the degree of autocorrelated movement activity at diel time scales), changed in response to variation in resource availability as measured by the Normalized Difference Vegetation Index. We were able to capitalize on long-term (multi-year) yet high-resolution (hourly) global positioning system tracking datasets, the sample size of which allows robust analysis of complex models. We use optimal foraging theory predictions as a framework to interpret our results, in particular contrasting the behaviors across changes in social rank and resource availability to infer which movement behaviors at diel time scales may be optimal in this highly social species. Both DD and MP increased with increasing forage availability, irrespective of rank, reflecting increased energy expenditure and movement predictability during time periods of overall high resource availability. However, significant interactions between forage availability and social rank indicated a stronger response in DD, and a weaker response in MP, with increasing social status.
Relative to high ranking individuals, low ranking individuals expended more energy and exhibited less behavioral movement autocorrelation during lower forage availability conditions, likely reflecting sub-optimal movement behavior. Beyond situations of contest competition, rank status appears to influence the extent to which individuals can modify their movement strategies across periods with differing forage availability. Large-scale spatiotemporal resource complexity not only impacts fine scale movement and optimal foraging strategies directly, but likely impacts rates of inter- and intra-specific interactions and competition resulting in socially based movement responses to ecological dynamics.
NASA Astrophysics Data System (ADS)
Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng
2018-02-01
De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, thereby making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, the associated computational efficiency is still the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In the current paper, we propose a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPUs). We then design an imaging-point parallel strategy to achieve optimal parallel computing performance, and adopt an asynchronous double-buffering scheme to perform multi-stream GPU/CPU parallel computing. Moreover, several key optimization strategies for computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.
Vasilev, Nikolay; Schmitz, Christian; Grömping, Ulrike; Fischer, Rainer; Schillberg, Stefan
2014-01-01
A large-scale statistical experimental design was used to determine essential cultivation parameters that affect biomass accumulation and geraniol production in transgenic tobacco (Nicotiana tabacum cv. Samsun NN) cell suspension cultures. The carbohydrate source played a major role in determining the geraniol yield and factors such as filling volume, inoculum size and light were less important. Sucrose, filling volume and inoculum size had a positive effect on geraniol yield by boosting growth of plant cell cultures whereas illumination of the cultures stimulated the geraniol biosynthesis. We also found that the carbohydrates sucrose and mannitol showed polarizing effects on biomass and geraniol accumulation. Factors such as shaking frequency, the presence of conditioned medium and solubilizers had minor influence on both plant cell growth and geraniol content. When cells were cultivated under the screened conditions for all the investigated factors, the cultures produced ∼5.2 mg/l geraniol after 12 days of cultivation in shaking flasks which is comparable to the yield obtained in microbial expression systems. Our data suggest that industrial experimental designs based on orthogonal arrays are suitable for the selection of initial cultivation parameters prior to the essential medium optimization steps. Such designs are particularly beneficial in the early optimization steps when many factors must be screened, increasing the statistical power of the experiments without increasing the demand on time and resources. PMID:25117009
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, Clifford K.; Ortega, Jesus D.; Christian, Joshua Mark
Novel designs to increase light trapping and thermal efficiency of concentrating solar receivers at multiple length scales have been conceived, designed, and tested. The fractal-like geometries and features are introduced at both macro (meters) and meso (millimeters to centimeters) scales. Advantages include increased solar absorptance, reduced thermal emittance, and increased thermal efficiency. Radial and linear structures at the meso (tube shape and geometry) and macro (total receiver geometry and configuration) scales redirect reflected solar radiation toward the interior of the receiver for increased absorptance. Hotter regions within the interior of the receiver can reduce thermal emittance due to reduced local view factors to the environment, and higher concentration ratios can be employed with similar surface irradiances to reduce the effective optical aperture, footprint, and thermal losses. Coupled optical/fluid/thermal models have been developed to evaluate the performance of these designs relative to conventional designs. Modeling results showed that fractal-like structures and geometries can increase the effective solar absorptance by 5–20% and the thermal efficiency by several percentage points at both the meso and macro scales, depending on factors such as intrinsic absorptance. Meso-scale prototypes were fabricated using additive manufacturing techniques, and a macro-scale bladed receiver design was fabricated using Inconel 625 tubes. On-sun tests were performed using the solar furnace and solar tower at the National Solar Thermal Test Facility. The test results demonstrated enhanced solar absorptance and thermal efficiency of the fractal-like designs.
Flow rate of transport network controls uniform metabolite supply to tissue
Meigel, Felix J.
2018-01-01
Life and functioning of higher organisms depends on the continuous supply of metabolites to tissues and organs. What are the requirements on the transport network pervading a tissue to provide a uniform supply of nutrients, minerals or hormones? To theoretically answer this question, we present an analytical scaling argument and numerical simulations on how flow dynamics and network architecture control active spread and uniform supply of metabolites by studying the example of xylem vessels in plants. We identify the fluid inflow rate as the key factor for uniform supply. While at low inflow rates metabolites are already exhausted close to flow inlets, too high inflow flushes metabolites through the network and deprives tissue close to inlets of supply. In between these two regimes, there exists an optimal inflow rate that yields a uniform supply of metabolites. We determine this optimal inflow analytically in quantitative agreement with numerical results. Optimizing network architecture by reducing the supply variance over all network tubes, we identify patterns of tube dilation or contraction that compensate sub-optimal supply for the case of too low or too high inflow rate. PMID:29720455
ERIC Educational Resources Information Center
Colligan, Robert C.; And Others
1994-01-01
Developed a bipolar Minnesota Multiphasic Personality Inventory (MMPI) Optimism-Pessimism (PSM) scale based on results of the Content Analysis of Verbatim Explanations technique applied to the MMPI. Reliability and validity indices show that the PSM scale is highly accurate and consistent with Seligman's theory that a pessimistic explanatory style predicts increased…
Gitlin, Laura N.; Parisi, Jeanine; Huang, Jin; Winter, Laraine; Roth, David L.
2016-01-01
Purpose of study: Examine psychometric properties of Lawton's Valuation of Life (VOL) scale, a measure of older adults' assessments of the perceived value of their lives, and whether ratings differ by race (White, Black/African American) and sex. Design and Methods: The 13-item VOL scale was administered at baseline in 2 separate randomized trials (Advancing Better Living for Elders, ABLE; Get Busy Get Better, GBGB) for a total of 527 older adults. Principal component analyses were applied to a subset of ABLE data (subsample 1) and confirmatory factor analyses were conducted on the remaining data (subsample 2 and GBGB). Once the factor structure was identified and confirmed, 2 subscales were created, corresponding to optimism and engagement. Convergent validity of the total and subscale scores was examined using measures of depressive symptoms, social support, control-oriented strategies, mastery, and behavioral activation. For discriminant validity, indices of health status, physical function, financial strain, cognitive status, and number of falls were examined. Results: Trial samples (ABLE vs. GBGB) differed by age, race, marital status, education, and employment. Principal component analysis on ABLE subsample 1 (n = 156) yielded two factors subsequently confirmed in confirmatory factor analyses on ABLE subsample 2 (n = 163) and the GBGB sample (N = 208) separately. Adequate fit was found for the 2-factor model. Correlational analyses supported strong convergent and discriminant validity. Some statistically significant race and sex differences in subscale scores were found. Implications: VOL measures subjective appraisals of the perceived value of life. Consisting of two interrelated subscales, it offers an efficient approach to ascertaining personal attributions. PMID:26874189
Dynamic Factorization in Large-Scale Optimization
1989-06-01
…side column and the bottom (cost) row. Because of the symmetric nature of the mutual primal-dual method… demonstrates strong numerical stability… sparsity of… Several methods have been proposed (Bartels and Golub [1969], Forrest and Tomlin…)… (formerly) disjoint components which have been merged by the change in the representation of the dynamic singletons.
Optimizing rice yields while minimizing yield-scaled global warming potential.
Pittelkow, Cameron M; Adviento-Borbe, Maria A; van Kessel, Chris; Hill, James E; Linquist, Bruce A
2014-05-01
To meet growing global food demand with limited land and reduced environmental impact, agricultural greenhouse gas (GHG) emissions are increasingly evaluated with respect to crop productivity, i.e., on a yield-scaled as opposed to area basis. Here, we compiled available field data on CH4 and N2O emissions from rice production systems to test the hypothesis that in response to fertilizer nitrogen (N) addition, yield-scaled global warming potential (GWP) will be minimized at N rates that maximize yields. Within each study, yield N surplus was calculated to estimate deficit or excess N application rates with respect to the optimal N rate (defined as the N rate at which maximum yield was achieved). Relationships between yield N surplus and GHG emissions were assessed using linear and nonlinear mixed-effects models. Results indicate that yields increased in response to increasing N surplus when moving from deficit to optimal N rates. At N rates contributing to a yield N surplus, N2O and yield-scaled N2O emissions increased exponentially. In contrast, CH4 emissions were not impacted by N inputs. Accordingly, yield-scaled CH4 emissions decreased with N addition. Overall, yield-scaled GWP was minimized at optimal N rates, decreasing by 21% compared to treatments without N addition. These results are unique compared to aerobic cropping systems in which N2O emissions are the primary contributor to GWP, meaning yield-scaled GWP may not necessarily decrease for aerobic crops when yields are optimized by N fertilizer addition. Balancing gains in agricultural productivity with climate change concerns, this work supports the concept that high rice yields can be achieved with minimal yield-scaled GWP through optimal N application rates. Moreover, additional improvements in N use efficiency may further reduce yield-scaled GWP, thereby strengthening the economic and environmental sustainability of rice systems. © 2013 John Wiley & Sons Ltd.
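The yield-scaling idea reduces to simple arithmetic. The sketch below uses the standard 100-year GWP factors (25 for CH4, 298 for N2O, per IPCC AR4) and invented emission and yield numbers, not the study's data.

```python
GWP_CH4 = 25.0    # kg CO2-eq per kg CH4 (IPCC AR4, 100-yr horizon)
GWP_N2O = 298.0   # kg CO2-eq per kg N2O

def yield_scaled_gwp(ch4_kg_ha, n2o_kg_ha, yield_kg_ha):
    """Return (area-scaled GWP, kg CO2-eq/ha; yield-scaled GWP, kg CO2-eq/kg grain)."""
    area_gwp = GWP_CH4 * ch4_kg_ha + GWP_N2O * n2o_kg_ha
    return area_gwp, area_gwp / yield_kg_ha

# Hypothetical comparison of a zero-N and an optimally fertilized treatment:
# N addition raises N2O emissions but raises yield more, so the yield-scaled
# figure can fall even as the area-scaled figure rises.
no_n = yield_scaled_gwp(ch4_kg_ha=200.0, n2o_kg_ha=0.2, yield_kg_ha=6000.0)
opt_n = yield_scaled_gwp(ch4_kg_ha=200.0, n2o_kg_ha=0.5, yield_kg_ha=9000.0)
```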
Clima, Lilia; Ursu, Elena L; Cojocaru, Corneliu; Rotaru, Alexandru; Barboiu, Mihail; Pinteala, Mariana
2015-09-28
The complexes formed by DNA and polycations have received great attention owing to their potential application in gene therapy. In this study, the binding efficiency between double-stranded oligonucleotides (dsDNA) and branched polyethylenimine (B-PEI) has been quantified by processing of the images captured from the gel electrophoresis assays. The central composite experimental design has been employed to investigate the effects of controllable factors on the binding efficiency. On the basis of experimental data and the response surface methodology, a multivariate regression model has been constructed and statistically validated. The model has enabled us to predict the binding efficiency depending on experimental factors, such as concentrations of dsDNA and B-PEI as well as the initial pH of solution. The optimization of the binding process has been performed using simplex and gradient methods. The optimal conditions determined for polyplex formation have yielded a maximal binding efficiency close to 100%. In order to reveal the mechanism of complex formation at the atomic-scale, a molecular dynamic simulation has been carried out. According to the computation results, B-PEI amine hydrogen atoms have interacted with oxygen atoms from dsDNA phosphate groups. These interactions have led to the formation of hydrogen bonds between macromolecules, stabilizing the polyplex structure.
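The response-surface step described above can be sketched as a quadratic least-squares fit followed by solving for the stationary point. The two-factor design and response values below are hypothetical, not the paper's gel-electrophoresis measurements.

```python
import numpy as np

def quad_design(X):
    """Design matrix for y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1 * x2])

def true_response(x1, x2):
    # Invented response surface with its maximum at (0.2, -0.1) in coded units.
    return 90 - 5 * (x1 - 0.2) ** 2 - 4 * (x2 + 0.1) ** 2

# Face-centred central composite design in two coded factors.
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)
y = true_response(X[:, 0], X[:, 1])

beta, *_ = np.linalg.lstsq(quad_design(X), y, rcond=None)
b0, b1, b2, b11, b22, b12 = beta

# Stationary point: set the gradient of the fitted quadratic to zero.
A = np.array([[2 * b11, b12], [b12, 2 * b22]])
x_opt = np.linalg.solve(A, -np.array([b1, b2]))
```

In practice the fitted surface only approximates the data, and the paper refines the optimum further with simplex and gradient searches; here the toy response is exactly quadratic, so the stationary point is recovered exactly.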
Bees for development: Brazilian survey reveals how to optimize stingless beekeeping.
Jaffé, Rodolfo; Pope, Nathaniel; Torres Carvalho, Airton; Madureira Maia, Ulysses; Blochtein, Betina; de Carvalho, Carlos Alfredo Lopes; Carvalho-Zilse, Gislene Almeida; Freitas, Breno Magalhães; Menezes, Cristiano; Ribeiro, Márcia de Fátima; Venturieri, Giorgio Cristino; Imperatriz-Fonseca, Vera Lucia
2015-01-01
Stingless bees are an important asset to assure plant biodiversity in many natural ecosystems, and fulfill the growing agricultural demand for pollination. However, across developing countries stingless beekeeping remains an essentially informal activity, technical knowledge is scarce, and management practices lack standardization. Here we profited from the large diversity of stingless beekeepers found in Brazil to assess the impact of particular management practices on productivity and economic revenues from the commercialization of stingless bee products. Our study represents the first large-scale effort aiming at optimizing stingless beekeeping for honey/colony production based on quantitative data. Survey data from 251 beekeepers scattered across 20 Brazilian States revealed the influence of specific management practices and other confounding factors over productivity and income indicators. Specifically, our results highlight the importance of teaching beekeepers how to inspect and feed their colonies, how to multiply them and keep track of genetic lineages, how to harvest and preserve the honey, how to use vinegar traps to control infestation by parasitic flies, and how to add value by labeling honey containers. Furthermore, beekeeping experience and the network of known beekeepers were found to be key factors influencing productivity and income. Our work provides clear guidelines to optimize stingless beekeeping and help transform the activity into a powerful tool for sustainable development.
Khanna, Swati; Goyal, Arun; Moholkar, Vijayanand S
2013-01-01
This article addresses the effect of fermentation parameters on the conversion of glycerol (in both pure and crude form) into three value-added products, namely ethanol, butanol, and 1,3-propanediol (1,3-PDO), by immobilized Clostridium pasteurianum, and the statistical optimization of this process. Analysis of the effects of process parameters such as agitation rate, fermentation temperature, medium pH, and initial glycerol concentration indicated that medium pH was the most critical factor for total alcohol production with pure glycerol as the fermentation substrate. On the other hand, initial glycerol concentration was the most significant factor for fermentation with crude glycerol. An interesting observation was that the optimized set of fermentation parameters was independent of the type of glycerol (pure or crude) used. At optimum conditions of agitation rate (200 rpm), initial glycerol concentration (25 g/L), fermentation temperature (30°C), and medium pH (7.0), total alcohol production was almost equal in anaerobic shake flasks and a 2-L bioreactor. This essentially means that at optimum process parameters, the scale of operation does not affect the output of the process. The immobilized cells could be reused for multiple cycles for both pure and crude glycerol fermentation.
Rational decision-making in inhibitory control.
Shenoy, Pradeep; Yu, Angela J
2011-01-01
An important aspect of cognitive flexibility is inhibitory control, the ability to dynamically modify or cancel planned actions in response to changes in the sensory environment or task demands. We formulate a probabilistic, rational decision-making framework for inhibitory control in the stop signal paradigm. Our model posits that subjects maintain a Bayes-optimal, continually updated representation of sensory inputs, and repeatedly assess the relative value of stopping and going on a fine temporal scale, in order to make an optimal decision on when and whether to go on each trial. We further posit that they implement this continual evaluation with respect to a global objective function capturing the various reward and penalties associated with different behavioral outcomes, such as speed and accuracy, or the relative costs of stop errors and go errors. We demonstrate that our rational decision-making model naturally gives rise to basic behavioral characteristics consistently observed for this paradigm, as well as more subtle effects due to contextual factors such as reward contingencies or motivational factors. Furthermore, we show that the classical race model can be seen as a computationally simpler, perhaps neurally plausible, approximation to optimal decision-making. This conceptual link allows us to predict how the parameters of the race model, such as the stopping latency, should change with task parameters and individual experiences/ability.
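The classical race model mentioned in the closing sentences is easy to simulate: on each stop trial, a "go" finish time races a "stop" finish time (stop-signal delay plus SSRT), and the response is inhibited only if the stop process finishes first. The SSRT, go-RT distribution, and delays below are illustrative values only, not the authors' fits.

```python
import random

def p_inhibit(ssd, ssrt=200.0, go_mean=450.0, go_sd=80.0, n=20000, seed=1):
    """Estimate the probability of successful inhibition at a given
    stop-signal delay (ssd), under an independent horse-race model with
    a Gaussian go-RT distribution and a fixed stop latency (ssrt)."""
    rng = random.Random(seed)
    stops = 0
    for _ in range(n):
        go_finish = rng.gauss(go_mean, go_sd)
        if ssd + ssrt < go_finish:   # stop process wins the race
            stops += 1
    return stops / n

# Inhibition should become harder as the stop signal arrives later,
# tracing out the familiar inhibition function over SSD.
early, late = p_inhibit(ssd=100.0), p_inhibit(ssd=300.0)
```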
NASA Astrophysics Data System (ADS)
Song, Xingliang; Sha, Pengfei; Fan, Yuanyuan; Jiang, R.; Zhao, Jiangshan; Zhou, Yi; Yang, Junhong; Xiong, Guangliang; Wang, Yu
2018-02-01
Due to the complex kinetics of formation and loss mechanisms, such as ion-ion recombination, neutral-species harpoon reactions, excited-state quenching and photon absorption, as well as their interactions, the performance of an excimer laser varies greatly with its gas medium parameters. The effects of gas composition and total gas pressure on excimer laser performance therefore remain a subject of continual study. In this work, orthogonal experimental design (OED) is used to investigate quantitative and qualitative correlations between output laser energy characteristics and gas medium parameters for an ArF excimer laser operated with a plano-plano optical resonator. Optimized output laser energy with good pulse-to-pulse stability can be obtained by proper selection of the gas medium parameters, making the most of the ArF excimer laser device. A simple and efficient method for gas medium optimization is proposed and demonstrated experimentally, providing a global and systematic solution. Detailed statistical analysis yields the significance ordering of the relevant parameter factors and the optimized gas medium composition. Compared with the conventional route of varying a single gas parameter at a time, this paper presents a more comprehensive way of considering multiple variables simultaneously, which seems promising for striking an appropriate balance among various complicated parameters in power-scaling studies of excimer lasers.
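The orthogonal-design analysis the abstract describes can be illustrated with a toy range analysis. The L4 array and the response values below are hypothetical, chosen only to show how the significance ordering of factors falls out of level means.

```python
# L4(2^3) orthogonal array: 4 runs cover 3 two-level factors so that
# every pair of factor levels appears together equally often.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def main_effects(runs, responses):
    """Range analysis for an orthogonal design: for each factor, the gap
    between its level means; a larger gap means a more significant factor."""
    n_factors = len(runs[0])
    effects = []
    for f in range(n_factors):
        level_means = []
        for level in (0, 1):
            vals = [y for run, y in zip(runs, responses) if run[f] == level]
            level_means.append(sum(vals) / len(vals))
        effects.append(abs(level_means[1] - level_means[0]))
    return effects

# Hypothetical laser-energy responses for the four runs (illustrative only).
print(main_effects(L4, [10.0, 12.0, 15.0, 19.0]))
```

Sorting the factors by these ranges gives the "significance sequence" the paper refers to, using only 4 runs instead of the 8 a full factorial would need.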
Sunderland, Matthew; Slade, Tim; Krueger, Robert F; Markon, Kristian E; Patrick, Christopher J; Kramer, Mark D
2017-07-01
The development of the Externalizing Spectrum Inventory (ESI) was motivated by the need to comprehensively assess the interrelated nature of externalizing psychopathology and personality using an empirically driven framework. The ESI measures 23 theoretically distinct yet related unidimensional facets of externalizing, which are structured under 3 superordinate factors representing general externalizing, callous aggression, and substance abuse. One limitation of the ESI is its length at 415 items. To facilitate the use of the ESI in busy clinical and research settings, the current study sought to examine the efficiency and accuracy of a computerized adaptive version of the ESI. Data were collected over 3 waves and totaled 1,787 participants recruited from undergraduate psychology courses as well as male and female state prisons. A series of 6 algorithms with different termination rules was simulated to determine the efficiency and accuracy of each test under 3 different assumed distributions. Scores generated using an optimal adaptive algorithm evidenced high correlations (r > .9) with scores generated using the full ESI, brief ESI item-based factor scales, and the 23 facet scales. The adaptive algorithms for each facet administered a combined average of 115 items, a 72% decrease in comparison to the full ESI. Similarly, scores on the item-based factor scales of the ESI-brief form (57 items) were generated using an average of 17 items, a 70% decrease. The current study successfully demonstrates that an adaptive algorithm can generate similar scores for the ESI and the 3 item-based factor scales using a fraction of the total item pool. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, D P; Ritts, W D; Wharton, S
2009-02-26
The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional to global scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models; however, little is known about the relative effectiveness of FPAR products from different satellite sensors or about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors. FPAR products from the MODIS and SeaWiFS sensors, and the effects of single-site vs. cross-site parameter optimization, were tested with the CFLUX model. The SeaWiFS FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization compared to parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for parameterization of diagnostic carbon flux models.
Temperature Scaling Law for Quantum Annealing Optimizers.
Albash, Tameem; Martin-Mayor, Victor; Hen, Itay
2017-09-15
Physical implementations of quantum annealing unavoidably operate at finite temperatures. We point to a fundamental limitation of fixed finite temperature quantum annealers that prevents them from functioning as competitive scalable optimizers and show that, to serve as optimizers, annealer temperatures must be appropriately scaled down with problem size. We derive a temperature scaling law dictating that the temperature must drop at the very least logarithmically, but possibly as a power law, with problem size. We corroborate our results by experiment and simulations and discuss the implications of these results for practical annealers.
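A toy Boltzmann-weight calculation makes the scaling argument concrete. The unit gap, the degeneracy model (2^n excited states), and the target ground-state probability below are illustrative assumptions, not the paper's derivation.

```python
import math

def ground_state_prob(gap, temperature, n_excited):
    """Boltzmann weight of a unique ground state separated by `gap`
    from n_excited degenerate excited states."""
    return 1.0 / (1.0 + n_excited * math.exp(-gap / temperature))

def max_temperature(gap, n_excited, target=0.9):
    """Largest temperature keeping the ground-state weight at `target`:
    solve n_excited * exp(-gap/T) = (1 - target)/target for T."""
    return gap / math.log(n_excited * target / (1.0 - target))

# With the excited-state count growing exponentially in problem size n
# (2**n here) and a fixed gap, the admissible temperature must fall
# roughly as 1/n -- i.e., logarithmically in the size of the state space.
for n in (10, 20, 40):
    print(n, max_temperature(1.0, 2 ** n))
```

The point of the sketch is only qualitative: a fixed operating temperature that suffices at small n stops being competitive as n grows, which is the limitation the abstract formalizes.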
Anderson, Jeffrey R; Barrett, Steven F
2009-01-01
Image segmentation is the process of isolating distinct objects within an image. Computer algorithms have been developed to aid in the process of object segmentation, but a completely autonomous segmentation algorithm has yet to be developed [1]. This is because computers do not have the capability to understand images and recognize complex objects within the image. However, computer segmentation methods [2], requiring user input, have been developed to quickly segment objects in serial sectioned images, such as magnetic resonance images (MRI) and confocal laser scanning microscope (CLSM) images. In these cases, the segmentation process becomes a powerful tool in visualizing the 3D nature of an object. The user input is an important part of improving the performance of many segmentation methods. A double-threshold segmentation method has been investigated [3] to separate objects in gray-scale images, where the gray level of the object falls among the gray levels of the background. In order to best determine the threshold values for this segmentation method, the image must be manipulated for optimal contrast. The same is true of other segmentation and edge detection methods as well. Typically, the better the image contrast, the better the segmentation results. This paper describes a graphical user interface (GUI) that allows the user to easily change image contrast parameters to optimize the performance of subsequent object segmentation. This approach makes use of the fact that the human brain is extremely effective at object recognition and understanding. The GUI provides the user with the ability to define the gray-scale range of the object of interest. The lower and upper bounds of this range are used in a histogram-stretching process to improve image contrast. Also, the user can interactively modify the gamma correction factor that provides a non-linear distribution of gray-scale values, while observing the corresponding changes to the image.
This interactive approach gives the user the power to make optimal choices in the contrast enhancement parameters.
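The two contrast controls the GUI exposes, a user-chosen gray-level range for histogram stretching and a gamma factor for non-linear redistribution, can be sketched for a list of 8-bit gray values. The function below is a minimal stand-in for the interactive pipeline, not the authors' implementation.

```python
def enhance_contrast(pixels, lo, hi, gamma=1.0):
    """Stretch the user-chosen gray range [lo, hi] onto [0, 255], then
    apply gamma correction out = 255 * x**(1/gamma) to the normalized
    value (gamma > 1 brightens mid-tones, gamma < 1 darkens them)."""
    out = []
    for p in pixels:
        x = min(max(p, lo), hi)        # clip to the selected range
        x = (x - lo) / (hi - lo)       # linear histogram stretch to [0, 1]
        x = x ** (1.0 / gamma)         # non-linear gamma mapping
        out.append(round(255 * x))
    return out

# Mid-gray object values spread over the full display range:
print(enhance_contrast([60, 100, 140], lo=50, hi=150))
```

In the interactive setting, the user adjusts `lo`, `hi`, and `gamma` while watching the image, and the thresholds for the double-threshold segmentation are then picked on the stretched histogram.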
Validation of the Basic Psychological Needs in Exercise Scale in a Portuguese sample.
Moutão, João Miguel Raimundo Peres; Serra, Luis Filipe Cid; Alves, José Augusto Marinho; Leitão, José Carlos; Vlachopoulos, Symeon P
2012-03-01
In line with self-determination theory (SDT: Deci & Ryan, 1985, 2002) the satisfaction of the basic psychological needs for autonomy, competence, and relatedness has been identified as an important predictor of behavior and optimal functioning in various contexts, including exercise. The lack of a valid and reliable instrument to assess the extent to which these needs are fulfilled among Portuguese exercise participants limits the evaluation of causal links proposed by SDT in the Portuguese exercise context. The aim of the present study was to translate into Portuguese and validate the Basic Psychological Needs in Exercise Scale (BPNES: Vlachopoulos & Michailidou, 2006). Using data from 522 exercise participants, the findings provided evidence of strong internal consistency of the translated BPNES subscales, while confirmatory factor analysis supported a good fit of the correlated 3-factor model to the data. The present findings support the use of the Portuguese translation of the BPNES to assess the extent of basic psychological need fulfilment among Portuguese exercise participants.
NASA Astrophysics Data System (ADS)
Kashinski, D. O.; Nelson, R. G.; Chase, G. M.; di Nallo, O. E.; Byrd, E. F. C.
2016-05-01
We are investigating the accuracy of theoretical models used to predict the visible, ultraviolet, and infrared spectra, as well as other properties, of product materials ejected from the muzzle of currently fielded systems. Recent advances in solid propellants have made the management of muzzle signature (flash) a principal issue in weapons development across the calibers. A priori prediction of the electromagnetic spectra of formulations will allow researchers to tailor blends that yield desired signatures and determine spectrographic detection ranges. Quantum chemistry methods at various levels of sophistication have been employed to optimize molecular geometries, compute unscaled harmonic frequencies, and determine the optical spectra of specific gas-phase species. Electronic excitations are being computed using Time Dependent Density Functional Theory (TD-DFT). Calculation of approximate global harmonic frequency scaling factors for specific DFT functionals is also in progress. A full statistical analysis and reliability assessment of computational results is currently underway. Work supported by the ARL, DoD-HPCMP, and USMA.
NASA Astrophysics Data System (ADS)
Li, Jinze; Qu, Zhi; He, Xiaoyang; Jin, Xiaoming; Li, Tie; Wang, Mingkai; Han, Qiu; Gao, Ziji; Jiang, Feng
2018-02-01
Large-scale integration of distributed power can relieve current environmental pressure while increasing the complexity and uncertainty of the overall distribution system. Rational planning of distributed power can effectively improve the system voltage level. To this end, the specific impact of the access of typical distributed power on distribution network power quality was analyzed, and an improved particle swarm optimization algorithm (IPSO) was proposed, which improves the learning factors and the inertia weight to enhance the local and global search performance of the algorithm and solves distributed generation planning for the distribution network. Results show that the proposed method can effectively reduce the system network loss and improve the economic performance of system operation with distributed generation.
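One common form of "improved" PSO uses a linearly decreasing inertia weight so the swarm explores early and exploits late. The sketch below uses generic textbook parameter choices on a simple sphere objective; it is not necessarily the specific IPSO variant of the paper.

```python
import random

def pso_minimize(f, dim, n_particles=30, iters=200, seed=0,
                 w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, lo=-5.0, hi=5.0):
    """PSO with a linearly decreasing inertia weight w and learning
    factors c1 (cognitive) and c2 (social). Positions are clamped to
    [lo, hi]. Returns (best position, best value)."""
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_val = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters   # inertia decays over time
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] = min(max(x[i][d] + v[i][d], lo), hi)
            val = f(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val

best, val = pso_minimize(lambda p: sum(u * u for u in p), dim=3)
print(val)  # should approach the sphere minimum at 0
```

In the planning application, `f` would instead score a candidate siting/sizing of distributed generation by network loss and operating cost.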
Freiberger, Manuel; Egger, Herbert; Liebmann, Manfred; Scharfetter, Hermann
2011-11-01
Image reconstruction in fluorescence optical tomography is a three-dimensional nonlinear ill-posed problem governed by a system of partial differential equations. In this paper we demonstrate that a combination of state-of-the-art numerical algorithms and a careful hardware-optimized implementation makes it possible to solve this large-scale inverse problem in a few seconds on standard desktop PCs with modern graphics hardware. In particular, we present methods to solve not only the forward but also the nonlinear inverse problem by massively parallel programming on graphics processors. A comparison of optimized CPU and GPU implementations shows that the reconstruction can be accelerated by factors of about 15 through the use of the graphics hardware without compromising the accuracy in the reconstructed images.
A Mesoscopic Electromechanical Theory of Ferroelectric Films and Ceramics
NASA Astrophysics Data System (ADS)
Li, Jiangyu; Bhattacharya, Kaushik
2002-08-01
We present a multi-scale modelling framework to predict the effective electromechanical behavior of ferroelectric ceramics and thin films. This paper specifically focuses on the mesoscopic scale and models the effects of domains and domain switching taking into account intergranular constraints. Starting from the properties of the single crystal and the pre-poling granular texture, the theory predicts the domain patterns, the post-poling texture, the saturation polarization, saturation strain and the electromechanical moduli. We demonstrate remarkable agreement with experimental data. The theory also explains the superior electromechanical property of PZT at the morphotropic phase boundary. The paper concludes with the application of the theory to predict the optimal texture for enhanced electromechanical coupling factors and high-strain actuation in selected materials.
Development of a large-scale transportation optimization course.
DOT National Transportation Integrated Search
2011-11-01
"In this project, a course was developed to introduce transportation and logistics applications of large-scale optimization to graduate students. This report details what similar courses exist in other universities, and the methodology used to gath...
Large-scale linear programs in planning and prediction.
DOT National Transportation Integrated Search
2017-06-01
Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...
Scaled Heavy-Ball Acceleration of the Richardson-Lucy Algorithm for 3D Microscopy Image Restoration.
Wang, Hongbin; Miller, Paul C
2014-02-01
The Richardson-Lucy algorithm is one of the most important algorithms in image deconvolution. However, a drawback is its slow convergence. A significant acceleration was obtained using the technique proposed by Biggs and Andrews (BA), which is implemented in the deconvlucy function of the MATLAB image processing toolbox. The BA method was developed heuristically with no proof of convergence. In this paper, we introduce the heavy-ball (H-B) method for Poisson data optimization and extend it to a scaled H-B method, which includes the BA method as a special case. The method has a proof of a convergence rate of O(k^(-2)), where k is the number of iterations. We demonstrate the superior convergence performance of the scaled H-B method, with a speedup factor of five, on both synthetic and real 3D images.
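The flavor of the acceleration can be seen in a one-dimensional toy: a plain Richardson-Lucy multiplicative update, optionally preceded by a heavy-ball extrapolation step. This is a simplified sketch under a circular-blur assumption, not the paper's scaled H-B method or the MATLAB deconvlucy code.

```python
def convolve(x, psf):
    """Circular convolution with a centered, odd-length kernel."""
    n, c = len(x), len(psf) // 2
    return [sum(psf[j] * x[(i - j + c) % n] for j in range(len(psf)))
            for i in range(n)]

def richardson_lucy(y, psf, iters=200, accelerate=True):
    """Richardson-Lucy deconvolution with a heavy-ball style momentum
    step before each multiplicative update (a simplified sketch of the
    Biggs-Andrews acceleration)."""
    x = [sum(y) / len(y)] * len(y)                 # flat positive start
    x_prev = x[:]
    adjoint_psf = psf[::-1]                        # adjoint of the circular blur
    for k in range(iters):
        if accelerate and k > 1:
            alpha = (k - 1.0) / (k + 2.0)          # extrapolation weight
            v = [max(xi + alpha * (xi - xp), 1e-12)
                 for xi, xp in zip(x, x_prev)]
        else:
            v = x
        blurred = convolve(v, psf)
        ratio = [yi / max(bi, 1e-12) for yi, bi in zip(y, blurred)]
        correction = convolve(ratio, adjoint_psf)
        x_prev, x = x, [vi * ci for vi, ci in zip(v, correction)]
    return x

truth = [0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 10.0, 0.0]
psf = [0.25, 0.5, 0.25]                 # symmetric, normalized blur kernel
observed = convolve(truth, psf)
estimate = richardson_lucy(observed, psf)
print(estimate)
```

Because the correction is a normalized back-projection, each iteration conserves total flux, and the momentum step only reshuffles it, which is why the multiplicative form is attractive for Poisson (photon-count) data.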
Psychometric Properties of the German Version of the Health Regulatory Focus Scale
Schmalbach, Bjarne; Spina, Roy; Steffens-Guerra, Ileana; Franke, Gabriele H.; Kliem, Sören; Michaelides, Michalis P.; Hinz, Andreas; Zenger, Markus
2017-01-01
The Health Regulatory Focus Scale (HRFS) is a short scale which measures an individual's prevention and promotion focus in a health-specific context. The main objective of this study was to examine the psychometric properties of the newly translated German version of the HRFS. Reliability and item characteristics were found to be satisfactory. The validity of both subscales was demonstrated against other psychological constructs, including behavioral approach and avoidance, core self-evaluations, optimism, pessimism, and neuroticism, as well as several measures of physical and mental health. In addition, invariance of the measure across age and gender groups was shown. Exploratory as well as confirmatory factor analyses clearly indicated a two-factorial structure with a moderate correlation between the two latent constructs. Differences in health promotion and prevention focus between socio-demographic groups are discussed. The HRFS is found to be a valid and reliable instrument for the assessment of regulatory focus in health-related environments. PMID:29184528
Lessons Learned from Southeast Asian Floods
NASA Astrophysics Data System (ADS)
Osti, R.; Tanaka, S.
2009-04-01
At certain scales, floods have always been a lifeline for many people in Southeast Asian countries. People are traditionally accustomed to living with such floods, and their livelihoods are adjusted accordingly to optimize the benefits the floods bring. However, a large-scale flood occasionally turns into a disaster and causes massive destruction, not only in terms of human casualties but also as damage to the economic, ecological and social harmony of the region. Although economic growth is prevailing in relative terms, the capacity of people to cope with such extreme events is weakening, and the flood disaster risk is therefore increasing over time. Recent examples of flood disasters in the region clearly show the increasing severity of disaster impacts. This study reveals that there are many factors which directly or indirectly influence this change. This paper considers the most prominent natural and socio-economic factors and analyzes their trends with respect to flood disasters in each country's context. A regional-scale comparative analysis further helps to exchange know-how and to determine what kind of strategy and policy are lacking to manage floods in the long run. It is also helpful in identifying the critical sectors that should be addressed first to mitigate the potential damage from floods.
Herrera López, Mauricio; Romera Félix, Eva M; Ortega Ruiz, Rosario; Gómez Ortiz, Olga
2016-01-01
The first objective of this study was to adapt and test the psychometric properties of the Social Achievement Goal Scale (Ryan & Shim, 2006) in Spanish adolescent students. The second objective sought to analyse the influence of social goals, normative adjustment and self-perception of social efficacy on social adjustment among peers. A total of 492 adolescents (54.1% females) attending secondary school (12-17 years; M = 13.8, SD = 1.16) participated in the study. Confirmatory factor analysis and structural equation modelling were performed. The validation confirmed the three-factor structure of the original scale: social development goals, social demonstration-approach goals and social demonstration-avoidance goals. The structural equation model indicated that social development goals and normative adjustment have a direct bearing on social adjustment, whereas the social demonstration-approach goals (popularity) and self-perception of social efficacy with peers and teachers exert an indirect influence. The Spanish version of the Social Achievement Goal Scale (Ryan & Shim, 2006) yielded optimal psychometric properties. Having a positive motivational pattern, engaging in norm-adjusted behaviours and perceiving social efficacy with peers is essential to improving the quality of interpersonal relationships.
2014-01-01
Background: Discharge of grey wastewater into the ecosystem has a negative impact on receiving water bodies. Methods: In the present study, the electrocoagulation (EC) process was investigated to treat grey wastewater under different operating conditions such as initial pH (4–8), current density (10–30 mA/cm2), electrode distance (4–6 cm) and electrolysis time (5–25 min) using a stainless steel (SS) anode in batch mode. A four-factor, five-level Box-Behnken response surface design (BBD) was employed to optimize the process and investigate the effect of the process variables on responses such as total solids (TS), chemical oxygen demand (COD) and fecal coliform (FC) removal. Results: The process variables showed a significant effect on the electrocoagulation treatment process. The results were analyzed by Pareto analysis of variance (ANOVA), and second-order polynomial models were developed to describe the electrocoagulation process statistically. The optimal operating conditions were found to be an initial pH of 7, a current density of 20 mA/cm2, an electrode distance of 5 cm and an electrolysis time of 20 min. Conclusion: These results indicate that the EC process can be scaled up to treat grey wastewater with high removal efficiencies of TS, COD and FC. PMID:24410752
Monitoring My Multiple Sclerosis
Namey, Marie; Halper, June
2011-01-01
Optimal health of people with multiple sclerosis (MS) can be promoted by patients' sharing of health information gained through periodic self-monitoring with their health-care providers. The purpose of this study was to develop a valid and reliable self-administered scale to obtain information about MS patients' health status and the impact of the disease on their daily lives. We named this scale “Monitoring My Multiple Sclerosis” (MMMS). A cross-sectional survey was conducted of 171 MS patients who completed the MMMS and Patient-Determined Disease Steps (PDDS) scales and provided information on their MS disease classification and demographic characteristics. Data analysis included several parametric procedures. Factor analysis of the 26-item MMMS resulted in four factors with satisfactory α reliability coefficients for the total scale (0.90) and factored subscales: Physical (0.85), Relationships (0.80), Energy (0.70), and Cognitive/Mental (0.67). Analysis of variance demonstrated that the total scale and the Physical subscale, but not the Relationships subscale, showed significantly worse functioning for patients with either moderate or severe disability as measured by the PDDS than for patients with mild disability (P < .001). The Cognitive/Mental subscale showed significantly worse functioning for patients with moderate disability than for patients with mild disability (P < .05). However, the Energy subscale showed significantly worse functioning among moderately disabled patients than among severely disabled patients (P < .01). Independent t tests demonstrated that patients classified as having secondary progressive multiple sclerosis had significantly worse scores on the total MMMS (P < .05) and the Physical subscale (P < .001) than those classified as having relapsing-remitting multiple sclerosis. 
The MMMS demonstrated satisfactory reliability and validity and is recommended for use by MS patients and their health-care providers as a mechanism to promote the sharing of health information, to the benefit of both patients and providers. PMID:24453717
Iterative initial condition reconstruction
NASA Astrophysics Data System (ADS)
Schmittfull, Marcel; Baldauf, Tobias; Zaldarriaga, Matias
2017-07-01
Motivated by recent developments in perturbative calculations of the nonlinear evolution of large-scale structure, we present an iterative algorithm to reconstruct the initial conditions in a given volume starting from the dark matter distribution in real space. In our algorithm, objects are first moved back iteratively along estimated potential gradients, with a progressively reduced smoothing scale, until a nearly uniform catalog is obtained. The linear initial density is then estimated as the divergence of the cumulative displacement, with an optional second-order correction. This algorithm should undo nonlinear effects up to one-loop order, including the higher-order infrared resummation piece. We test the method using dark matter simulations in real space. At redshift z =0 , we find that after eight iterations the reconstructed density is more than 95% correlated with the initial density at k ≤0.35 h Mpc-1 . The reconstruction also reduces the power in the difference between reconstructed and initial fields by more than 2 orders of magnitude at k ≤0.2 h Mpc-1 , and it extends the range of scales where the full broadband shape of the power spectrum matches linear theory by a factor of 2-3. As a specific application, we consider measurements of the baryonic acoustic oscillation (BAO) scale that can be improved by reducing the degradation effects of large-scale flows. In our idealized dark matter simulations, the method improves the BAO signal-to-noise ratio by a factor of 2.7 at z =0 and by a factor of 2.5 at z =0.6 , improving standard BAO reconstruction by 70% at z =0 and 30% at z =0.6 , and matching the optimal BAO signal and signal-to-noise ratio of the linear density in the same volume. For BAO, the iterative nature of the reconstruction is the most important aspect.
HIV Treatment and Prevention: A Simple Model to Determine Optimal Investment.
Juusola, Jessie L; Brandeau, Margaret L
2016-04-01
To create a simple model to help public health decision makers determine how to best invest limited resources in HIV treatment scale-up and prevention. A linear model was developed for determining the optimal mix of investment in HIV treatment and prevention, given a fixed budget. The model incorporates estimates of secondary health benefits accruing from HIV treatment and prevention and allows for diseconomies of scale in program costs and subadditive benefits from concurrent program implementation. Data sources were published literature. The target population was individuals infected with HIV or at risk of acquiring it. Illustrative examples of interventions include preexposure prophylaxis (PrEP), community-based education (CBE), and antiretroviral therapy (ART) for men who have sex with men (MSM) in the US. Outcome measures were incremental cost, quality-adjusted life-years gained, and HIV infections averted. Base case analysis indicated that it is optimal to invest in ART before PrEP and to invest in CBE before scaling up ART. Diseconomies of scale reduced the optimal investment level. Subadditivity of benefits did not affect the optimal allocation for relatively low implementation levels. The sensitivity analysis indicated that investment in ART before PrEP was optimal in all scenarios tested. Investment in ART before CBE became optimal when CBE reduced risky behavior by 4% or less. Limitations of the study are that dynamic effects are approximated with a static model. Our model provides a simple yet accurate means of determining optimal investment in HIV prevention and treatment. For MSM in the US, HIV control funds should be prioritized on inexpensive, effective programs like CBE, then on ART scale-up, with only minimal investment in PrEP. © The Author(s) 2015.
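For a purely linear, divisible model with a single budget constraint, the optimal allocation reduces to funding programs in order of benefit per dollar. The sketch below uses made-up costs and benefits chosen only to mirror the qualitative ordering reported (CBE first, then ART, then PrEP); none of the numbers come from the study, and the diseconomies-of-scale and subadditivity refinements are omitted.

```python
def allocate(budget, programs):
    """Greedy allocation for a linear (divisible) investment model:
    fund programs in decreasing order of benefit per dollar until the
    budget runs out. For a linear objective with only a budget
    constraint, this greedy rule is LP-optimal.
    Program spec: (name, max_cost, benefit_at_full_funding)."""
    ranked = sorted(programs, key=lambda p: p[2] / p[1], reverse=True)
    plan, remaining = {}, budget
    for name, max_cost, benefit in ranked:
        spend = min(max_cost, remaining)   # partial funding is allowed
        plan[name] = spend
        remaining -= spend
    return plan

# Hypothetical costs and QALY benefits (illustrative only, not study data).
plan = allocate(100.0, [("PrEP", 80.0, 10.0),
                        ("CBE", 30.0, 12.0),
                        ("ART", 60.0, 18.0)])
print(plan)
```

With these toy numbers, CBE (highest benefit per dollar) is fully funded, then ART, and only the leftover budget trickles to PrEP, the same qualitative prioritization the abstract reports.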
Using Intel Xeon Phi to accelerate the WRF TEMF planetary boundary layer scheme
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen
2014-05-01
The Weather Research and Forecasting (WRF) model is designed for numerical weather prediction and atmospheric research. The WRF software infrastructure consists of several components such as dynamic solvers and physics schemes. Numerical models are used to resolve the large-scale flow, while subgrid-scale parameterizations estimate small-scale properties (e.g., boundary layer turbulence and convection, clouds, radiation). These have a significant influence on the resolved scale due to the complex nonlinear nature of the atmosphere. For the cloudy planetary boundary layer (PBL), it is fundamental to parameterize vertical turbulent fluxes and subgrid-scale condensation in a realistic manner. A parameterization based on the Total Energy - Mass Flux (TEMF) approach, which unifies turbulence and moist convection components, produces better results than the other PBL schemes. For that reason, the TEMF scheme was chosen as the PBL scheme to optimize for the Intel Many Integrated Core (MIC) architecture, which allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our optimization results for the TEMF planetary boundary layer scheme. The optimizations performed were quite generic in nature, including vectorization of the code to utilize the vector units inside each CPU. Furthermore, memory access was improved by scalarizing some of the intermediate arrays. The results show that the optimization improved MIC performance by 14.8x. Furthermore, the optimizations increased CPU performance by 2.6x compared to the original multi-threaded code on a quad-core Intel Xeon E5-2603 running at 1.8 GHz. Compared to the optimized code running on a single CPU socket, the optimized MIC code is 6.2x faster.
Topology-Aware Performance Optimization and Modeling of Adaptive Mesh Refinement Codes for Exascale
Chan, Cy P.; Bachan, John D.; Kenny, Joseph P.; ...
2017-01-26
Here, we introduce a topology-aware performance optimization and modeling workflow for AMR simulation that includes two new modeling tools, ProgrAMR and Mota Mapper, which interface with the BoxLib AMR framework and the SSTmacro network simulator. ProgrAMR allows us to generate and model the execution of task dependency graphs from high-level specifications of AMR-based applications, which we demonstrate by analyzing two example AMR-based multigrid solvers with varying degrees of asynchrony. Mota Mapper generates multiobjective, network topology-aware box mappings, which we apply to optimize the data layout for the example multigrid solvers. While the sensitivity of these solvers to layout and execution strategy appears to be modest for balanced scenarios, the impact of better mapping algorithms can be significant when performance is highly constrained by network hop latency. Furthermore, we show that network latency in the multigrid bottom solve is the main contributing factor preventing good scaling on exascale-class machines.
GPU accelerated particle visualization with Splotch
NASA Astrophysics Data System (ADS)
Rivi, M.; Gheller, C.; Dykes, T.; Krokos, M.; Dolag, K.
2014-07-01
Splotch is a rendering algorithm for exploration and visual discovery in particle-based datasets coming from astronomical observations or numerical simulations. The strengths of the approach are production of high-quality imagery and support for very large-scale datasets through an effective mix of the OpenMP and MPI parallel programming paradigms. This article reports our experiences in re-designing Splotch to exploit emerging HPC architectures, nowadays increasingly populated with GPUs. A performance model is introduced to guide our re-factoring of Splotch. A number of parallelization issues are discussed, in particular relating to race conditions and workload balancing, towards achieving optimal performance. Our implementation was accomplished by using the CUDA programming paradigm. Our strategy is founded on novel schemes achieving optimized data organization and classification of particles. We deploy a reference cosmological simulation to present performance results on acceleration gains and scalability. We finally outline our vision for future work, including possibilities for further optimizations and exploitation of hybrid systems and emerging accelerators.
Lanczos eigensolution method for high-performance computers
NASA Technical Reports Server (NTRS)
Bostic, Susan W.
1991-01-01
The theory, computational analysis, and applications of a Lanczos algorithm on high-performance computers are presented. The computationally intensive steps of the algorithm are identified as the matrix factorization, the forward/backward equation solution, and the matrix-vector multiplications. These computational steps are optimized to exploit the vector and parallel capabilities of high-performance computers. The savings in computational time from applying optimization techniques such as variable band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large-scale structural analysis applications are described: the buckling of a composite blade-stiffened panel with a cutout, and the vibration analysis of a high-speed civil transport. The sequential computational time for the panel problem executed on a CONVEX computer of 181.6 seconds was decreased to 14.1 seconds with the optimized vector algorithm. The best computational time of 23 seconds for the transport problem, with 17,000 degrees of freedom, was obtained on the Cray Y-MP using an average of 3.63 processors.
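The core of such an eigensolver is the Lanczos iteration, whose dominant costs are exactly the steps the abstract identifies: factorization, triangular solves, and matrix-vector products. A minimal sketch of basic symmetric Lanczos tridiagonalization (not the authors' optimized implementation; the full-reorthogonalization step and random start vector are simplifying assumptions of this sketch):

```python
import numpy as np

def lanczos(A, k, seed=0):
    """Lanczos tridiagonalization of a symmetric matrix A.

    Returns the k x k tridiagonal matrix T whose extremal eigenvalues
    (Ritz values) approximate the extremal eigenvalues of A. The
    dominant cost is the matrix-vector product A @ v, which is why
    vectorizing that step pays off on vector/parallel hardware.
    """
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    V = [v]
    alphas, betas = [], []
    beta, v_prev = 0.0, np.zeros(n)
    for _ in range(k):
        w = A @ v - beta * v_prev
        alpha = v @ w
        w -= alpha * v
        # full reorthogonalization for numerical stability
        for u in V:
            w -= (u @ w) * u
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-12:
            break  # invariant subspace found
        v_prev, v = v, w / beta
        V.append(v)
    m = len(alphas)
    return np.diag(alphas) + np.diag(betas[:m - 1], 1) + np.diag(betas[:m - 1], -1)
```

The small tridiagonal matrix T is cheap to diagonalize, and its extremal Ritz values converge quickly to the extremal eigenvalues of A, which is what buckling and vibration analyses need.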
NASA Astrophysics Data System (ADS)
Sergeyev, Yaroslav D.; Kvasov, Dmitri E.; Mukhametzhanov, Marat S.
2018-06-01
The necessity to find the global optimum of multiextremal functions arises in many applied problems where finding local solutions is insufficient. One of the desirable properties of global optimization methods is strong homogeneity, meaning that a method produces the same sequence of points at which the objective function is evaluated, independently of both multiplication of the function by a scaling constant and addition of a shifting constant. In this paper, several aspects of global optimization using strongly homogeneous methods are considered. First, it is shown that even if a method possesses this property theoretically, numerically very small or very large scaling constants can lead to ill-conditioning of the scaled problem. Second, a new class of global optimization problems where the objective function can have not only finite but also infinite or infinitesimal Lipschitz constants is introduced. Third, the strong homogeneity of several Lipschitz global optimization algorithms is studied in the framework of the Infinity Computing paradigm, which allows one to work numerically with a variety of infinities and infinitesimals. Fourth, it is proved that a class of efficient univariate methods enjoys this property for finite, infinite, and infinitesimal scaling and shifting constants. Finally, it is shown that in certain cases the usage of numerical infinities and infinitesimals can avoid the ill-conditioning produced by scaling. Numerical experiments illustrating the theoretical results are described.
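Strong homogeneity is easy to demonstrate for a comparison-based method. As a toy illustration (golden-section search on a unimodal function; this example is ours, not from the paper): the method's decisions depend only on comparisons f(x1) < f(x2), so replacing f by c*f + d with c > 0 leaves the sequence of evaluation points unchanged.

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Golden-section search for a minimum of f on [a, b].

    Decisions depend only on comparisons f(x1) < f(x2), so the
    sequence of evaluation points is unchanged when f is replaced
    by c*f + d with c > 0, i.e. the method is strongly homogeneous.
    Returns (approximate minimizer, list of evaluation points).
    """
    invphi = (math.sqrt(5) - 1) / 2
    x1, x2 = b - invphi * (b - a), a + invphi * (b - a)
    f1, f2 = f(x1), f(x2)
    pts = [x1, x2]
    while b - a > tol:
        if f1 < f2:
            # minimum lies in [a, x2]: reuse x1 as the new upper probe
            b, x2, f2 = x2, x1, f1
            x1 = b - invphi * (b - a)
            f1 = f(x1)
            pts.append(x1)
        else:
            # minimum lies in [x1, b]: reuse x2 as the new lower probe
            a, x1, f1 = x1, x2, f2
            x2 = a + invphi * (b - a)
            f2 = f(x2)
            pts.append(x2)
    return (a + b) / 2, pts
```

For example, minimizing f(x) = (x - 1)^2 and 4*f(x) + 7 over [0, 3] produces identical evaluation sequences, while the paper's point about ill-conditioning shows up numerically for extreme scaling constants even though the method is theoretically homogeneous.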
NASA Astrophysics Data System (ADS)
El-Wardany, Tahany; Lynch, Mathew; Gu, Wenjiong; Hsu, Arthur; Klecka, Michael; Nardi, Aaron; Viens, Daniel
This paper proposes an optimization framework enabling the integration of multi-scale / multi-physics simulation codes to perform structural optimization design for additively manufactured components. Cold spray was selected as the additive manufacturing (AM) process and its constraints were identified and included in the optimization scheme. The developed framework first utilizes topology optimization to maximize stiffness for conceptual design. The subsequent step applies shape optimization to refine the design for stress-life fatigue. The component weight was reduced by 20% while stresses were reduced by 75% and the rigidity was improved by 37%. The framework and analysis codes were implemented using Altair software as well as an in-house loading code. The optimized design was subsequently produced by the cold spray process.
Optimization of cell seeding in a 2D bio-scaffold system using computational models.
Ho, Nicholas; Chua, Matthew; Chui, Chee-Kong
2017-05-01
The cell expansion process is a crucial part of generating cells on a large scale in a bioreactor system. Hence, it is important to set operating conditions (e.g. initial cell seeding distribution, culture medium flow rate) to an optimal level. The initial cell seeding distribution is often neglected and/or overlooked in the design of a bioreactor using conventional seeding distribution methods. This paper proposes a novel seeding distribution method that aims to maximize cell growth and minimize production time/cost. The proposed method utilizes two computational models: the first represents cell growth patterns, whereas the second determines optimal initial cell seeding positions for adherent cell expansion. Cell growth simulation with the first model demonstrates that it can represent various cell types with known probabilities. The second model combines combinatorial optimization, Monte Carlo simulation, and concepts from the first model, and is used to design a multi-layer 2D bio-scaffold system that increases cell production efficiency in bioreactor applications. Simulation results show that the input configurations recommended by the proposed optimization method are optimal, and illustrate the effectiveness of the method. The potential of the proposed seeding distribution method as a useful tool to optimize the cell expansion process in modern bioreactor system applications is highlighted. Copyright © 2017 Elsevier Ltd. All rights reserved.
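As an illustrative sketch of the second model's idea (ours, not the authors' implementation; the growth rule, grid, and layouts below are all hypothetical), candidate seeding layouts can be scored by repeated Monte Carlo simulation of a toy contact-inhibited growth model, and the best-scoring layout selected:

```python
import random

def simulate_growth(grid_size, seeds, steps, rng):
    """Toy contact-inhibited growth on a 2D grid: each occupied cell
    tries to divide into a random empty 4-neighbour once per step."""
    occupied = set(seeds)
    for _ in range(steps):
        for (x, y) in list(occupied):  # snapshot: daughters divide next step
            nbrs = [(x + dx, y + dy)
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= x + dx < grid_size and 0 <= y + dy < grid_size
                    and (x + dx, y + dy) not in occupied]
            if nbrs:
                occupied.add(rng.choice(nbrs))
    return len(occupied)

def best_layout(grid_size, layouts, steps=10, trials=20, seed=0):
    """Rank candidate seeding layouts by mean Monte Carlo cell yield."""
    rng = random.Random(seed)
    scores = {}
    for name, seeds in layouts.items():
        scores[name] = sum(simulate_growth(grid_size, seeds, steps, rng)
                           for _ in range(trials)) / trials
    return max(scores, key=scores.get), scores
```

On such a toy model, spread-out seeds typically outscore clustered ones because contact inhibition throttles crowded colonies, which is the intuition behind optimizing the initial seeding distribution.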
Multi-Scale Modeling of an Integrated 3D Braided Composite with Applications to Helicopter Arm
NASA Astrophysics Data System (ADS)
Zhang, Diantang; Chen, Li; Sun, Ying; Zhang, Yifan; Qian, Kun
2017-10-01
A study is conducted with the aim of developing a multi-scale analytical method for designing a composite helicopter arm with a three-dimensional (3D) five-directional braided structure. Based on the analysis of the 3D braided microstructure, a multi-scale finite element model is developed. Finite element analysis of the load capacity of the 3D five-directional braided composite helicopter arm is carried out using the software ABAQUS/Standard. The influences of the braiding angle and loading condition on the stress and strain distribution of the helicopter arm are simulated. The results show that the proposed multi-scale method is capable of accurately predicting the mechanical properties of 3D braided composites, as validated by comparison with the stress-strain curves of meso-scale RVCs. Furthermore, it is found that the braiding angle is an important factor affecting the mechanical properties of the 3D five-directional braided composite helicopter arm. Based on the optimized structural parameters, the nearly net-shaped composite helicopter arm is fabricated using a novel resin transfer moulding (RTM) process.
OPTIMIZING BMP PLACEMENT AT WATERSHED-SCALE USING SUSTAIN
Watershed and stormwater managers need modeling tools to evaluate alternative plans for environmental quality restoration and protection needs in urban and developing areas. A watershed-scale decision-support system, based on cost optimization, provides an essential tool to suppo...
NASA Astrophysics Data System (ADS)
Michaelis, Dirk; Schroeder, Andreas
2012-11-01
Tomographic PIV has triggered vivid activity, reflected in a large number of publications covering both development of the technique and a wide range of fluid dynamic experiments. The maturing of tomo PIV allows its application in medium- to large-scale wind tunnels. The limiting factor for wind tunnel application is the small size of the measurement volume, typically about 50 × 50 × 15 mm3. The aim of this study is optimization towards large measurement volumes and high spatial resolution, performing cylinder wake measurements in a 1 meter wind tunnel. The main limiting factors for the volume size are the laser power and the camera sensitivity. Therefore, a high power laser with 800 mJ per pulse is used together with low-noise sCMOS cameras, mounted in the forward scattering direction to gain intensity from the Mie scattering characteristics. A mirror is used to bounce the light back, so that all cameras are in forward scattering. The achievable particle density grows with the number of cameras, so eight cameras are used for high spatial resolution. These optimizations lead to a volume size of 230 × 200 × 52 mm3 = 2392 cm3, more than 60 times larger than previously. 281 × 323 × 68 vectors are calculated with a spacing of 0.76 mm. The achieved measurement volume size and spatial resolution are regarded as a major step forward in the application of tomo PIV in wind tunnels. Supported by EU-project no. 265695.
NASA Astrophysics Data System (ADS)
Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn
2015-03-01
Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practice (BMP) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CP), each with multiple Total Maximum Daily Load (TMDL) targets, were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all the CP were met with the lowest possible BMP implementation cost. A Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near-optimal solutions. The best solution obtained among all the GA executions had a minimized cost of 67.7 million, marginally higher but approximately equal to that of the NIMS solution. The results highlight the utility for decision making in large-scale watershed simulation-optimization formulations.
Synthesis: Deriving a Core Set of Recommendations to Optimize Diabetes Care on a Global Scale.
Mechanick, Jeffrey I; Leroith, Derek
2015-01-01
Diabetes afflicts 382 million people worldwide, with increasing prevalence rates and adverse effects on health, well-being, and society in general. There are many drivers for the complex presentation of diabetes, including environmental and genetic/epigenetic factors. The aim was to synthesize a core set of recommendations from information from 14 countries that can be used to optimize diabetes care on a global scale. Information from 14 papers in this special issue of Annals of Global Health was reviewed, analyzed, and sorted to synthesize recommendations. PubMed was searched for relevant studies on diabetes and global health. Key findings are as follows: (1) Population-based transitions distinguish region-specific diabetes care; (2) biological drivers for diabetes differ among various populations and need to be clarified scientifically; (3) principal resource availability determines quality-of-care metrics; and (4) governmental involvement, independent of economic barriers, improves the contextualization of diabetes care. Core recommendations are as follows: (1) Each nation should assess region-specific epidemiology, the scientific evidence base, and population-based transitions to establish risk-stratified guidelines for diagnosis and therapeutic interventions; (2) each nation should establish a public health imperative to provide tools and funding to successfully implement these guidelines; and (3) each nation should commit to education and research to optimize recommendations for a durable effect. Systematic acquisition of information about diabetes care can be analyzed, extrapolated, and then used to provide a core set of actionable recommendations that may be further studied and implemented to improve diabetes care on a global scale. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Xie, Shilin; Lu, Fei; Cao, Lei; Zhou, Weiqi; Ouyang, Zhiyun
2016-01-01
Understanding the factors that influence the characteristics of avian communities using urban parks at both the patch and landscape level is important for focusing management effort towards enhancing bird diversity. Here, we investigated this issue during the breeding season across urban parks in Beijing, China, using high-resolution satellite imagery. Fifty-two bird species were recorded across 29 parks. Analysis of the residence type of birds showed that passage migrants were the most prevalent (37%), indicating that Beijing is a major node in the East Asian–Australasian Flyway. Park size was crucial for total species abundance, but foliage height diversity was the most important factor influencing avian species diversity. Thus, optimizing the configuration of vertical vegetation structure in certain park areas is critical for supporting avian communities in urban parks. Human visitation also showed a negative impact on species diversity. At the landscape level, the percentage of artificial surface and the largest patch index of woodland in the buffer region significantly affected total species richness, with insectivores and granivores being more sensitive to the landscape pattern of the buffer region. In conclusion, urban birds in Beijing are influenced by various multi-scale factors; however, these effects vary with feeding type. PMID:27404279
Li, Li-Rong; Lin, Mei-Guang; Liang, Juan; Hu, Qiong-Yan; Chen, Dan; Lan, Meng-Ying; Liang, Wu-Qing; Zeng, Yu-Ting; Wang, Ting; Fu, Gui-Fen
2017-07-19
BACKGROUND This study aimed to explore the factors affecting the level of hope and psychological health status of patients with cervical cancer (CC) during radiotherapy. MATERIAL AND METHODS A total of 480 CC patients were recruited. The psychological distress scale, Herth hope index, functional assessment of cancer therapy-cervix, and Jalowiec coping scale were used to survey psychological distress, level of hope, quality of life (QOL), and coping style, and to analyze the factors affecting the level of hope and psychological health status of CC patients. RESULTS The morbidity of significant psychological distress in the 480 CC patients during radiotherapy was 68%, and the main factors causing psychological distress were emotional and physical problems. During radiotherapy, most patients had middle or high levels of hope, and the psychological distress index of patients was negatively correlated with the level of hope. The QOL of CC patients during radiotherapy was at middle and high levels, and QOL was positively correlated with confrontation, optimism, appeasement, and self-reliance, but negatively correlated with predestination and emotional expression. CONCLUSIONS For CC patients during radiotherapy, the morbidity of psychological distress was high, but they were at middle and high levels of hope.
Predictors of Stress in College Students.
Saleh, Dalia; Camart, Nathalie; Romo, Lucia
2017-01-01
University students often face different stressful situations and preoccupations: the first contact with the university, the freedom of schedule organization, the selection of their master's degree, very selective fields, etc. The purpose of this study is to evaluate a model of vulnerability to stress in French college students. Stress factors were evaluated by a battery of six scales that was accessible online for 3 months. A total of 483 students, aged between 18 and 24 years (mean = 20.23, standard deviation = 1.99), were included in the study. The results showed that 72.9, 86.3, and 79.3% of them were suffering from psychological distress, anxiety, and depressive symptoms, respectively. More than half the sample was also suffering from low self-esteem (57.6%), little optimism (56.7%), and a low sense of self-efficacy (62.7%). Regression analyses revealed that life satisfaction, self-esteem, optimism, self-efficacy, and psychological distress were the most important predictors of stress. These findings allow us to better understand stress-vulnerability factors in students and lead us to consider them substantially in prevention programs.
Xu, Guangkuan; Hao, Changchun; Tian, Suyang; Gao, Feng; Sun, Wenyuan; Sun, Runguang
2017-01-15
This study investigated a new and easily industrialized method for extracting curcumin from Curcuma longa rhizomes using ultrasonic extraction technology combined with an ammonium sulfate/ethanol aqueous two-phase system (ATPS), and the preparation of curcumin using semi-preparative HPLC. Single-factor experiments and response surface methodology (RSM) were used to determine the optimal material-to-solvent ratio, ultrasonic intensity (UI), and ultrasonic time. The optimum extraction conditions were determined to be a material-to-solvent ratio of 3.29:100, an ultrasonic intensity of 33.63 W/cm², and an ultrasonic time of 17 min. Under these optimum conditions, the extraction yield reached 46.91 mg/g. The extraction yield of curcumin remained stable upon amplification, indicating that scale-up extraction is feasible and efficient. Afterwards, the semi-preparative HPLC experiment was carried out, in which the optimal preparation conditions were selected according to a single-factor experiment. The prepared curcumin reached a purity of 85.58% by semi-preparative HPLC. Copyright © 2016 Elsevier B.V. All rights reserved.
Use of soybean oil and ammonium sulfate additions to optimize secondary metabolite production.
Junker, B; Mann, Z; Gailliot, P; Byrne, K; Wilson, J
1998-12-05
A valine-overproducing mutant (MA7040, Streptomyces hygroscopicus) was found to produce 1.5 to 2.0 g/L of the immunoregulant L-683,590 at the 0.6 m3 fermentation scale in a simple batch process using a soybean oil and ammonium sulfate-based GYG5 medium. Levels of both the lower (L-683,795) and higher (HH1 and HH2) undesirable homologs were controlled adequately. This batch process was utilized to produce broth economically at the 19 m3 fermentation scale. Material of acceptable purity was obtained without the multiple pure crystallizations previously required for an earlier culture, MA6678, which required valine supplementation for impurity control. Investigations at the 0.6 m3 fermentation scale were conducted, varying agitation, pH, initial soybean oil/ammonium sulfate charges, and initial aeration rate to further improve growth and productivity. Mid-cycle ammonia levels and lipase activity appeared to play an important role. Using mid-cycle soybean oil additions, a titer of 2.3 g/L of L-683,590 was obtained, while titers reached 2.7 g/L using mid-cycle soybean oil and ammonium sulfate additions. Both higher and lower homolog levels remained acceptable during this fed-batch process. Optimal timing of the mid-cycle oil and ammonium sulfate additions was considered a critical factor for further titer improvements. Copyright 1998 John Wiley & Sons, Inc.
Muley, Pranjali D; Boldor, Dorin
2012-01-01
The use of advanced microwave technology for biodiesel production from vegetable oil is relatively new. Microwave dielectric heating increases process efficiency and reduces reaction time. Microwave heating depends on various factors such as material properties (dielectric and thermo-physical), frequency of operation, and system design. Although lab-scale results are promising, it is important to study these parameters and optimize the process before scaling up. A numerical modeling approach can be applied to predict heating and temperature profiles, including at larger scales. The process can be studied for optimization without actually performing the experiments, reducing the amount of experimental work required. A basic numerical model of continuous electromagnetic heating of biodiesel precursors was developed. A finite element model was built in COMSOL Multiphysics 4.2 by coupling the electromagnetic problem with the fluid flow and heat transfer problems. The chemical reaction was not taken into account. Material dielectric properties were obtained experimentally, while the thermal properties were obtained from the literature (all properties were temperature dependent). The model was tested at two power levels, 4000 W and 4700 W, at a constant flow rate of 840 mL/min. The electric field, electromagnetic power density flow, and temperature profiles were studied. The resulting temperature profiles were validated by comparison with temperatures obtained at specific locations in the experiment. The results were in good agreement with the experimental data.
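Before any finite element detail, a bulk steady-state energy balance already predicts the outlet temperature of such a continuous-flow heater. A rough sketch (the density, heat capacity, and absorbed-power fraction below are assumed round numbers for a vegetable oil, not values from the paper):

```python
def outlet_temperature(t_in, power_w, flow_ml_min,
                       rho=920.0, cp=1970.0, eta=0.5):
    """Steady-state bulk energy balance for continuous-flow heating:
    T_out = T_in + eta * P / (m_dot * cp).

    rho : fluid density, kg/m^3 (assumed value for vegetable oil)
    cp  : specific heat, J/(kg K) (assumed value for vegetable oil)
    eta : fraction of applied microwave power absorbed (assumed)
    """
    m_dot = rho * flow_ml_min * 1e-6 / 60.0  # mL/min -> kg/s
    return t_in + eta * power_w / (m_dot * cp)
```

At 4000 W and 840 mL/min, this gives roughly 16 K of temperature rise per 10% of power absorbed, a quick sanity check against the simulated temperature profiles.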
Optimizing snake locomotion on an inclined plane
NASA Astrophysics Data System (ADS)
Wang, Xiaolin; Osborne, Matthew T.; Alben, Silas
2014-01-01
We develop a model to study the locomotion of snakes on inclined planes. We determine numerically which snake motions are optimal for two retrograde traveling-wave body shapes, triangular and sinusoidal waves, across a wide range of frictional parameters and incline angles. In the regime of large transverse friction coefficients, we find power-law scalings for the optimal wave amplitudes and corresponding costs of locomotion. We give an asymptotic analysis to show that the optimal snake motions are traveling waves with amplitudes given by the same scaling laws found in the numerics.
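The power-law scalings reported for the optimal amplitudes can be extracted from numerical optima by a least-squares fit in log-log space. A small sketch (the data below are synthetic, with a made-up exponent, purely to show the fitting step):

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y = C * x**p by least squares in log-log space.

    Appropriate when the data are positive and believed to follow a
    power law, as with optimal wave amplitudes versus the transverse
    friction coefficient. Returns (C, p).
    """
    logx, logy = np.log(x), np.log(y)
    p, logC = np.polyfit(logx, logy, 1)  # slope = exponent
    return np.exp(logC), p
```

The asymptotic exponent is then read off as the slope p; agreement between fitted exponents and the asymptotic analysis is what validates the numerics.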
2014-01-01
Background Anxiety scales may help primary care physicians to detect specific anxiety disorders among the many emotionally distressed patients presenting in primary care. The anxiety scale of the Four-Dimensional Symptom Questionnaire (4DSQ) consists of an admixture of symptoms of specific anxiety disorders. The research questions were: (1) Is the anxiety scale unidimensional or multidimensional? (2) To what extent does the anxiety scale detect specific DSM-IV anxiety disorders? (3) Which cut-off points are suitable to rule out or to rule in (which) anxiety disorders? Methods We analyzed 5 primary care datasets with standardized psychiatric diagnoses and 4DSQ scores. Unidimensionality was assessed through confirmatory factor analysis (CFA). We examined mean scores and anxiety score distributions per disorder. Receiver operating characteristic (ROC) analysis was used to determine optimal cut-off points. Results Total n was 969. CFA supported unidimensionality. The anxiety scale performed slightly better in detecting patients with panic disorder, agoraphobia, social phobia, obsessive compulsive disorder (OCD) and post traumatic stress disorder (PTSD) than patients with generalized anxiety disorder (GAD) and specific phobia. ROC-analysis suggested that ≥4 was the optimal cut-off point to rule out and ≥10 the cut-off point to rule in anxiety disorders. Conclusions The 4DSQ anxiety scale measures a common trait of pathological anxiety that is characteristic of anxiety disorders, in particular panic disorder, agoraphobia, social phobia, OCD and PTSD. The anxiety score detects the latter anxiety disorders to a slightly greater extent than GAD and specific phobia, without being able to distinguish between the different anxiety disorder types. The cut-off points ≥4 and ≥10 can be used to separate distressed patients in three groups with a relatively low, moderate and high probability of having one or more anxiety disorders. PMID:24761829
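The cut-off selection step of such a ROC analysis can be sketched as follows, using Youden's J statistic as the optimality criterion (a common choice; the study may have weighted sensitivity and specificity differently, and the scores below are hypothetical, not 4DSQ data):

```python
def cutoff_metrics(scores, labels, cutoff):
    """Sensitivity and specificity of the rule `score >= cutoff`."""
    tp = sum(1 for s, l in zip(scores, labels) if s >= cutoff and l)
    fn = sum(1 for s, l in zip(scores, labels) if s < cutoff and l)
    tn = sum(1 for s, l in zip(scores, labels) if s < cutoff and not l)
    fp = sum(1 for s, l in zip(scores, labels) if s >= cutoff and not l)
    return tp / (tp + fn), tn / (tn + fp)

def youden_cutoff(scores, labels):
    """Cut-off maximizing Youden's J = sensitivity + specificity - 1,
    a standard single-threshold criterion in ROC analysis."""
    best, best_j = None, -1.0
    for c in sorted(set(scores)):
        sens, spec = cutoff_metrics(scores, labels, c)
        j = sens + spec - 1.0
        if j > best_j:
            best, best_j = c, j
    return best, best_j
```

In practice one reports two thresholds rather than one: a low cut-off chosen for high sensitivity (to rule out) and a high cut-off for high specificity (to rule in), as the authors do with ≥4 and ≥10.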
NASA Astrophysics Data System (ADS)
Ji, Yu; Sheng, Wanxing; Jin, Wei; Wu, Ming; Liu, Haitao; Chen, Feng
2018-02-01
A coordinated optimal control method for the active and reactive power of a distribution network with distributed PV clusters, based on model predictive control, is proposed in this paper. The method divides the control process into long-time-scale optimal control and short-time-scale optimal control with multi-step optimization. Because the optimization models are non-convex and nonlinear and therefore hard to solve, they are transformed into a second-order cone programming problem. An improved IEEE 33-bus distribution network system is used to analyse the feasibility and effectiveness of the proposed control method.
An adaptive response surface method for crashworthiness optimization
NASA Astrophysics Data System (ADS)
Shi, Lei; Yang, Ren-Jye; Zhu, Ping
2013-11-01
Response surface-based design optimization has been commonly used for optimizing large-scale design problems in the automotive industry. However, most response surface models are built by a limited number of design points without considering data uncertainty. In addition, the selection of a response surface in the literature is often arbitrary. This article uses a Bayesian metric to systematically select the best available response surface among several candidates in a library while considering data uncertainty. An adaptive, efficient response surface strategy, which minimizes the number of computationally intensive simulations, was developed for design optimization of large-scale complex problems. This methodology was demonstrated by a crashworthiness optimization example.
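The systematic-selection idea can be illustrated with the Bayesian information criterion as a stand-in for the article's Bayesian metric (an assumption of this sketch): each candidate response surface is scored by fit quality penalized by its number of coefficients, so a more flexible surface wins only if it genuinely explains the design points better.

```python
import numpy as np

def bic_select(x, y, degrees=(1, 2, 3)):
    """Choose a polynomial response surface by the Bayesian
    information criterion (Gaussian-noise assumption). The log(n)
    penalty per coefficient guards against overfitting a limited
    number of design points."""
    n = len(x)
    best_deg, best_bic = None, np.inf
    for d in degrees:
        coeffs = np.polyfit(x, y, d)
        resid = y - np.polyval(coeffs, x)
        sigma2 = max(np.mean(resid ** 2), 1e-300)  # guard against log(0)
        bic = n * np.log(sigma2) + (d + 1) * np.log(n)
        if bic < best_bic:
            best_deg, best_bic = d, bic
    return best_deg
```

On data from a quadratic plus a deterministic perturbation (deterministic so the example is reproducible), the quadratic is correctly preferred over both an underfit linear and an overfit cubic surface.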
Singh, Digar; Kaur, Gurvinder
2014-08-01
The optimization of bioreactor operations towards swainsonine production was performed using an artificial neural network coupled evolutionary program (EP)-based optimization algorithm fitted with experimental one-factor-at-a-time (OFAT) results. The effects of varying agitation (300-500 rpm) and aeration (0.5-2.0 vvm) rates for different incubation hours (72-108 h) were evaluated in a bench-top bioreactor. Prominent scale-up parameters, gassed power per unit volume (Pg/VL, W/m³) and volumetric oxygen mass transfer coefficient (KLa, s⁻¹), were correlated with the optimized conditions. A maximum of 6.59 ± 0.10 μg/mL of swainsonine production was observed at 400 rpm and 1.5 vvm at 84 h in OFAT experiments, with corresponding Pg/VL and KLa values of 91.66 W/m³ and 341.48 × 10⁻⁴ s⁻¹, respectively. The EP optimization algorithm predicted a maximum of 10.08 μg/mL of swainsonine at 325.47 rpm, 1.99 vvm, and 80.75 h against an experimental production of 7.93 ± 0.52 μg/mL at constant KLa (349.25 × 10⁻⁴ s⁻¹) and significantly reduced Pg/VL (33.33 W/m³) drawn by the impellers.
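The two scale-up parameters reported above are typically linked through an empirical correlation of the van't Riet form, kLa = a·(Pg/V)^α·vs^β. A sketch with textbook constants for coalescing media (illustrative values, not constants fitted to this fermentation):

```python
def kla_vant_riet(pg_per_v, vs, a=0.026, alpha=0.4, beta=0.5):
    """Van't Riet-type correlation: kLa = a * (Pg/V)**alpha * vs**beta.

    pg_per_v : gassed power per unit volume, W/m^3
    vs       : superficial gas velocity, m/s
    a, alpha, beta : illustrative constants for coalescing broths;
    a real fermentation needs constants fitted to its own data.
    Returns kLa in s^-1.
    """
    return a * pg_per_v ** alpha * vs ** beta
```

With fitted constants, holding kLa constant while reducing Pg/V, as the EP optimum does, amounts to trading impeller power against aeration rate along this correlation.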
Dinarvand, Mojdeh; Rezaee, Malahat; Masomian, Malihe; Jazayeri, Seyed Davoud; Zareian, Mohsen; Abbasi, Sahar; Ariff, Arbakariya B.
2013-01-01
This study aimed to identify the extraction of intracellular inulinase (exo- and endoinulinase) and invertase, as well as to optimize the medium composition for maximum production of intra- and extracellular enzymes from Aspergillus niger ATCC 20611. Of two different methods for extraction of the intracellular enzymes, the ultrasonic method was found more effective. Response surface methodology (RSM) with a five-variable, three-level central composite design (CCD) was employed to optimize the medium composition. The effect of five main reaction parameters, including sucrose, yeast extract, NaNO3, Zn²⁺, and Triton X-100, on the production of enzymes was analyzed. A modified quadratic model was fitted to the data with a coefficient of determination (R²) of more than 0.90 for all responses. The intra- and extracellular inulinase and invertase productions increased in the range from 16 to 8.4 times in the medium optimized by RSM (10% (w/v) sucrose, 2.5% (w/v) yeast extract, 2% (w/v) NaNO3, 1.5 mM Zn²⁺, and 1% (v/v) Triton X-100) and from around 1.2 to 1.3 times relative to the medium optimized by one-factor-at-a-time, respectively. The results of bioprocess optimization can be useful in scale-up fermentation and the food industry. PMID:24151605
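The RSM step amounts to fitting a full quadratic model to the design points by least squares. A reduced two-factor sketch (the study used five factors; the data here are synthetic):

```python
import numpy as np

def fit_quadratic_rsm(X, y):
    """Fit a full quadratic response surface
    y = b0 + sum_i bi*xi + sum_{i<=j} bij*xi*xj
    by least squares; return (coefficients, R^2).

    For 2 factors the coefficient order is
    [1, x1, x2, x1^2, x1*x2, x2^2].
    """
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i, k)]
    A = np.column_stack(cols)
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    yhat = A @ b
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return b, 1.0 - ss_res / ss_tot
```

In a real CCD workflow one would then inspect the significance of each coefficient and locate the stationary point of the fitted surface to propose the optimum medium.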
On the Water-Food Nexus: an Optimization Approach for Water and Food Security
NASA Astrophysics Data System (ADS)
Mortada, Sarah; Abou Najm, Majdi; Yassine, Ali; Alameddine, Ibrahim; El-Fadel, Mutasem
2016-04-01
Water and food security is facing increased challenges with population increase, climate and land use change, and resource depletion coupled with pollution and unsustainable practices. Coordinated and effective management of limited natural resources has become imperative to meet these challenges by optimizing the usage of resources under various constraints. In this study, an optimization model is developed for optimal resource allocation towards sustainable water and food security under nutritional, socio-economic, agricultural, environmental, and natural-resource constraints. The core objective of this model is to maximize the composite water-food security status by recommending an optimal water and agricultural strategy. The model balances the healthy nutritional demand side against the constrained supply side while considering the supply chain in between. It equally ensures that the population achieves recommended nutritional guidelines and population food preferences by quantifying an optimum agricultural and water policy, transforming optimum food demands into an optimum cropping policy given the water and land footprints of each crop or agricultural product. Through this process, water and food security are optimized considering factors that include crop-food transformation (food processing), water footprints, crop yields, climate, blue and green water resources, irrigation efficiency, arable land resources, soil texture, and economic policies. The model performance regarding agricultural practices and sustainable food and water security was successfully tested and verified at both hypothetical and pilot scales.
Measurement of narcolepsy symptoms: The Narcolepsy Severity Scale.
Dauvilliers, Yves; Beziat, Severine; Pesenti, Carole; Lopez, Regis; Barateau, Lucie; Carlander, Bertrand; Luca, Gianina; Tafti, Mehdi; Morin, Charles M; Billiard, Michel; Jaussent, Isabelle
2017-04-04
To validate the Narcolepsy Severity Scale (NSS), a brief clinical instrument to evaluate the severity and consequences of symptoms in patients with narcolepsy type 1 (NT1). A 15-item scale to assess the frequency and severity of excessive daytime sleepiness, cataplexy, hypnagogic hallucinations, sleep paralysis, and disrupted nighttime sleep was developed and validated by sleep experts with patients' feedback. Seventy untreated and 146 treated adult patients with NT1 were evaluated and completed the NSS in a single reference sleep center. The NSS psychometric properties, score changes with treatment, and convergent validity with other clinical parameters were assessed. The NSS showed good psychometric properties with significant item-total score correlations. The factor analysis indicated a 3-factor solution with good reliability, expressed by satisfactory Cronbach α values. The NSS total score temporal stability was good. Significant NSS score differences were observed between untreated and treated patients (dependent sample, 41 patients before and after sleep therapy; independent sample, 29 drug-free and 105 treated patients). Scores were lower in the treated populations (10-point difference between groups), without ceiling effect. Significant correlations were found among NSS total score and daytime sleepiness (Epworth Sleepiness Scale, Mean Sleep Latency Test), depressive symptoms, and health-related quality of life. The NSS can be considered a reliable and valid clinical tool for the quantification of narcolepsy symptoms to monitor and optimize narcolepsy management. © 2017 American Academy of Neurology.
Marfeo, Elizabeth E.; Ni, Pengsheng; Haley, Stephen M.; Jette, Alan M.; Bogusz, Kara; Meterko, Mark; McDonough, Christine M.; Chan, Leighton; Brandt, Diane E.; Rasch, Elizabeth K.
2014-01-01
Objectives: To develop a broad set of claimant-reported items to assess behavioral health functioning relevant to the Social Security disability determination processes, and to evaluate the underlying structure of behavioral health functioning for use in development of a new functional assessment instrument. Design: Cross-sectional. Setting: Community. Participants: Item pools of behavioral health functioning were developed, refined, and field-tested in a sample of persons applying for Social Security disability benefits (N=1015) who reported difficulties working due to mental or both mental and physical conditions. Interventions: None. Main Outcome Measure: Social Security Administration Behavioral Health (SSA-BH) measurement instrument. Results: Confirmatory factor analysis (CFA) specified that a 4-factor model (self-efficacy, mood and emotions, behavioral control, and social interactions) had the optimal fit with the data and was also consistent with our hypothesized conceptual framework for characterizing behavioral health functioning. When the items within each of the four scales were tested in CFA, the fit statistics indicated adequate support for characterizing behavioral health as a unidimensional construct along these four distinct scales of function. Conclusion: This work represents a significant advance both conceptually and psychometrically in assessment methodologies for work-related behavioral health. The measurement of behavioral health functioning relevant to the context of work requires the assessment of multiple dimensions of behavioral health functioning. Specifically, we identified a 4-factor model solution that represented key domains of work-related behavioral health functioning. These results guided the development and scale formation of a new SSA-BH instrument. PMID:23548542
Marfeo, Elizabeth E; Ni, Pengsheng; Haley, Stephen M; Jette, Alan M; Bogusz, Kara; Meterko, Mark; McDonough, Christine M; Chan, Leighton; Brandt, Diane E; Rasch, Elizabeth K
2013-09-01
To develop a broad set of claimant-reported items to assess behavioral health functioning relevant to the Social Security disability determination processes, and to evaluate the underlying structure of behavioral health functioning for use in development of a new functional assessment instrument. Cross-sectional. Community. Item pools of behavioral health functioning were developed, refined, and field tested in a sample of persons applying for Social Security disability benefits (N=1015) who reported difficulties working because of mental or both mental and physical conditions. None. Social Security Administration Behavioral Health (SSA-BH) measurement instrument. Confirmatory factor analysis (CFA) specified that a 4-factor model (self-efficacy, mood and emotions, behavioral control, social interactions) had the optimal fit with the data and was also consistent with our hypothesized conceptual framework for characterizing behavioral health functioning. When the items within each of the 4 scales were tested in CFA, the fit statistics indicated adequate support for characterizing behavioral health as a unidimensional construct along these 4 distinct scales of function. This work represents a significant advance both conceptually and psychometrically in assessment methodologies for work-related behavioral health. The measurement of behavioral health functioning relevant to the context of work requires the assessment of multiple dimensions of behavioral health functioning. Specifically, we identified a 4-factor model solution that represented key domains of work-related behavioral health functioning. These results guided the development and scale formation of a new SSA-BH instrument. Copyright © 2013 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Tip vortices in the actuator line model
NASA Astrophysics Data System (ADS)
Martinez, Luis; Meneveau, Charles
2017-11-01
The actuator line model (ALM) is a widely used tool to represent wind turbine blades in computational fluid dynamics without the need to resolve the full geometry of the blades. The ALM can be optimized to represent the `correct' aerodynamics of the blades by choosing an appropriate smearing length scale ɛ. This appropriate length scale creates a tip vortex which induces a downwash near the tip of the blade. A theoretical framework is used to establish a solution for the velocity induced by a tip vortex as a function of the smearing length scale ɛ. A correction is presented which allows the use of a non-optimal smearing length scale while still providing the downwash that would be induced by the optimal length scale. Thanks to the National Science Foundation (NSF), which provided financial support for this research via Grants IGERT 0801471, IIA-1243482 (the WINDINSPIRE project), and ECCS-1230788.
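The induced velocity of a Gaussian-smeared vortex, a common closed form in actuator-line analyses, can be sketched as follows; this is an illustrative model of the idea, not the paper's exact correction:

```python
import numpy as np

def induced_velocity(r, gamma, eps):
    """Azimuthal velocity of a 2-D vortex of circulation `gamma` whose
    vorticity is smeared by a Gaussian of width `eps`. This is a common
    form in actuator-line analyses, shown here as an illustrative model,
    not the paper's exact derivation."""
    r = np.asarray(r, dtype=float)
    return gamma / (2*np.pi*r) * (1.0 - np.exp(-(r/eps)**2))

# Far from the core the smeared vortex recovers the ideal 1/r behaviour;
# inside the core the velocity stays finite instead of blowing up.
gamma, eps = 1.0, 0.1
print(induced_velocity(10.0, gamma, eps))   # close to gamma / (2*pi*10)
print(induced_velocity(1e-3, gamma, eps))   # finite near r = 0
```

The role of ɛ is visible directly: larger ɛ spreads the core and weakens the near-tip downwash, which is what the paper's correction compensates for.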
Chen, Hung-Yuan; Chiu, Yen-Ling; Hsu, Shih-Ping; Pai, Mei-Fen; Yang, Ju-Yeh; Lai, Chun-Fu; Lu, Hui-Min; Huang, Shu-Chen; Yang, Shao-Yu; Wen, Su-Yin; Chiu, Hsien-Ching; Hu, Fu-Chang; Peng, Yu-Sen; Jee, Shiou-Hwa
2013-01-01
Background: Uremic pruritus is a common and intractable symptom in patients on chronic hemodialysis, but factors associated with the severity of pruritus remain unclear. This study aimed to explore the associations of metabolic factors and dialysis adequacy with the aggravation of pruritus. Methods: We conducted a 5-year prospective cohort study of patients on maintenance hemodialysis. A visual analogue scale (VAS) was used to assess the intensity of pruritus. Patient demographic and clinical characteristics, laboratory parameters, dialysis adequacy (assessed by Kt/V), and pruritus intensity were recorded at baseline and follow-up. Change-score analysis of the difference in VAS scores between baseline and follow-up was performed using multiple linear regression models. The optimal threshold of Kt/V associated with the aggravation of uremic pruritus was determined by generalized additive models and receiver operating characteristic analysis. Results: A total of 111 patients completed the study. Linear regression analysis showed that lower Kt/V and use of a low-flux dialyzer were significantly associated with the aggravation of pruritus after adjusting for baseline pruritus intensity and a variety of confounding factors. The optimal threshold value of Kt/V for pruritus was 1.5, as suggested by both generalized additive models and receiver operating characteristic analysis. Conclusions: Hemodialysis with a target Kt/V ≥1.5 and use of a high-flux dialyzer may reduce the intensity of pruritus in patients on chronic hemodialysis. Further clinical trials are required to determine the optimal dialysis dose and regimen for uremic pruritus. PMID:23940749
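The threshold-finding step can be sketched with a Youden-index search over candidate Kt/V cut-offs; the patient data below are invented for illustration:

```python
import numpy as np

# Illustrative sketch only (invented data): find the Kt/V cut-off that best
# separates patients whose pruritus worsened from those who improved, using
# the Youden index (sensitivity + specificity - 1) over candidate thresholds.
rng = np.random.default_rng(1)
ktv_worse    = rng.normal(1.35, 0.10, 60)   # aggravated pruritus: lower Kt/V
ktv_improved = rng.normal(1.60, 0.10, 60)   # stable/improved: higher Kt/V
ktv   = np.concatenate([ktv_worse, ktv_improved])
worse = np.concatenate([np.ones(60, bool), np.zeros(60, bool)])

best_t, best_j = None, -1.0
for t in np.unique(ktv):
    sens = np.mean(ktv[worse] < t)          # worsened patients below cut-off
    spec = np.mean(ktv[~worse] >= t)        # improved patients at/above it
    j = sens + spec - 1.0
    if j > best_j:
        best_t, best_j = t, j
print(round(best_t, 2))  # lands near the midpoint of the two groups
```

A generalized additive model, as used in the study, relaxes the linearity assumption instead of scanning thresholds, but both approaches aim at the same inflection point.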
A Bandwidth-Optimized Multi-Core Architecture for Irregular Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Secchi, Simone; Tumeo, Antonino; Villa, Oreste
This paper presents an architecture template for next-generation high-performance computing systems specifically targeted at irregular applications. We start from the observation that full-system interconnection and memory bandwidth figures for future-generation systems are expected to grow by a factor of 10. In order to keep up with such communication capacity, while still relying on fine-grained multithreading as the main way to tolerate the unpredictable memory access latencies of irregular applications, we show how overall performance scaling can benefit from the multi-core paradigm. At the same time, we show how such an architecture template must be coupled with specific techniques in order to optimize bandwidth utilization and achieve maximum scalability. We propose a technique based on memory reference aggregation, together with its hardware implementation, as one such optimization. We explore the proposed architecture template by focusing on the Cray XMT architecture and, using a dedicated simulation infrastructure, validate the performance of our template with two typical irregular applications. Our experimental results demonstrate the benefits of both the multi-core approach and the bandwidth-optimizing reference aggregation technique.
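The reference-aggregation idea can be sketched in a few lines: group outstanding fine-grained references by cache-line address so that each distinct line costs one network transfer. The line size and addresses below are our assumptions, not the paper's hardware parameters:

```python
from collections import defaultdict

# Toy sketch (our illustration, not the paper's hardware design): aggregate
# outstanding fine-grained remote memory references by line address so that
# references falling on the same line share one network transaction.
LINE = 64  # assumed bytes per line

def aggregate(addresses):
    lines = defaultdict(list)
    for a in addresses:
        lines[a // LINE].append(a)
    return lines

# Eight scattered 8-byte references touching only three distinct lines
refs = [0, 8, 24, 64, 72, 640, 648, 656]
lines = aggregate(refs)
print(len(refs), "references ->", len(lines), "line transfers")
```

For irregular access patterns the win depends on how much spatial locality survives in the outstanding-request window, which is exactly what the paper's simulations quantify.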
Yang, Jin-ling; He, Hui-xia; Zhu, Hui-xin; Cheng, Ke-di; Zhu, Ping
2009-01-01
The technology of liquid fermentation for producing the recombinant analgesic peptide BmK AngM1 from Buthus martensii Karsch in Pichia pastoris was studied using single-factor and orthogonal tests. The results showed that the optimal culture conditions were as follows: 1.2% methanol, 0.6% casamino acids, initial pH 6.0, and three times the basal inoculation volume. Under these culture conditions, the expression level of recombinant BmK AngM1 in Pichia pastoris was above 500 mg·L⁻¹, more than three times that of the control. This study lays a foundation for the large-scale preparation of BmK AngM1 to meet the needs of theoretical research on BmK AngM1 and the development of new medicines.
Akard, Luke P; Bixby, Dale
2016-05-01
Multiple BCR-ABL tyrosine kinase inhibitors (TKIs) are available for the treatment of chronic myeloid leukemia in chronic phase (CML-CP), and several baseline and on-treatment predictive factors have been identified that can be used to help guide TKI selection for individual patients. In particular, early molecular response (EMR; BCR-ABL ≤10% on the International Scale at 3 months) has become an accepted benchmark for evaluating whether patients with CML-CP are responding optimally to frontline TKI therapy. Failure to achieve EMR is considered an inadequate initial response according to the National Comprehensive Cancer Network guidelines and a warning response according to the European LeukemiaNet recommendations. Here we review data supporting the importance of achieving EMR for improving patients' long-term outcomes and discuss key considerations for selecting a frontline TKI in light of these data. Because a higher proportion of patients achieve EMR with second-generation TKIs such as nilotinib and dasatinib than with imatinib, these TKIs may be preferable for many patients, particularly those with known negative prognostic factors at baseline. We also discuss other considerations for frontline TKI choice, including toxicities, cost-effectiveness, and the emerging goals of deep molecular response and treatment-free remission.
Engineering tolerance to industrially relevant stress factors in yeast cell factories.
Deparis, Quinten; Claes, Arne; Foulquié-Moreno, Maria R; Thevelein, Johan M
2017-06-01
The main focus in development of yeast cell factories has generally been on establishing optimal activity of heterologous pathways and further metabolic engineering of the host strain to maximize product yield and titer. Adequate stress tolerance of the host strain has turned out to be another major challenge for obtaining economically viable performance in industrial production. Although general robustness is a universal requirement for industrial microorganisms, production of novel compounds using artificial metabolic pathways presents additional challenges. Many of the bio-based compounds desirable for production by cell factories are highly toxic to the host cells in the titers required for economic viability. Artificial metabolic pathways also turn out to be much more sensitive to stress factors than endogenous pathways, likely because regulation of the latter has been optimized in evolution in myriads of environmental conditions. We discuss different environmental and metabolic stress factors with high relevance for industrial utilization of yeast cell factories and the experimental approaches used to engineer higher stress tolerance. Improving stress tolerance in a predictable manner in yeast cell factories should facilitate their widespread utilization in the bio-based economy and extend the range of products successfully produced in large scale in a sustainable and economically profitable way. © FEMS 2017.
Engineering tolerance to industrially relevant stress factors in yeast cell factories
Deparis, Quinten; Claes, Arne; Foulquié-Moreno, Maria R.
2017-01-01
The main focus in development of yeast cell factories has generally been on establishing optimal activity of heterologous pathways and further metabolic engineering of the host strain to maximize product yield and titer. Adequate stress tolerance of the host strain has turned out to be another major challenge for obtaining economically viable performance in industrial production. Although general robustness is a universal requirement for industrial microorganisms, production of novel compounds using artificial metabolic pathways presents additional challenges. Many of the bio-based compounds desirable for production by cell factories are highly toxic to the host cells in the titers required for economic viability. Artificial metabolic pathways also turn out to be much more sensitive to stress factors than endogenous pathways, likely because regulation of the latter has been optimized in evolution in myriads of environmental conditions. We discuss different environmental and metabolic stress factors with high relevance for industrial utilization of yeast cell factories and the experimental approaches used to engineer higher stress tolerance. Improving stress tolerance in a predictable manner in yeast cell factories should facilitate their widespread utilization in the bio-based economy and extend the range of products successfully produced in large scale in a sustainable and economically profitable way. PMID:28586408
Ahmed, A Bakrudeen Ali; Rao, A S; Rao, M V; Taha, Rosna Mat
2012-01-01
Gymnema sylvestre (R.Br.) is an important antidiabetic medicinal plant which yields pharmaceutically active compounds called gymnemic acids (GA). The present study describes callus induction, subsequent batch-culture optimization, and GA quantification assessed in terms of linearity, precision, accuracy, and recovery. The best callus induction of GA was observed in MS medium supplemented with 2,4-D (1.5 mg/L) and KN (0.5 mg/L). Evaluation and isolation of GA from calluses derived from different plant parts, namely leaf, stem, and petiole, was carried out here for the first time. Factors such as light, temperature, sucrose, and photoperiod were studied for their effect on GA production. Temperature conditions completely inhibited GA production. Of the different sucrose concentrations tested, the highest yield (35.4 mg/g d.w.) was found at 5% sucrose, followed by a 12 h photoperiod (26.86 mg/g d.w.). Maximum GA production (58.28 mg/g d.w.) was observed under blue light. The results showed that physical and chemical factors greatly influence GA production in callus cultures of G. sylvestre. The factors optimized for in vitro GA production in the present study can be successfully employed for large-scale production in bioreactors.
Biorthogonal projected energies of a Gutzwiller similarity transformed Hamiltonian.
Wahlen-Strothman, J M; Scuseria, G E
2016-12-07
We present a method incorporating biorthogonal orbital optimization, symmetry projection, and double-occupancy screening with a non-unitary similarity transformation generated by the Gutzwiller factor [Formula: see text], and apply it to the Hubbard model. Energies are calculated with mean-field computational scaling, with high-quality results comparable to coupled cluster singles and doubles. This builds on previous work performing similarity transformations with more general, two-body Jastrow-style correlators. The theory is tested on 2D lattices ranging from small systems to the thermodynamic limit and is compared with available reference data.
Streaming parallel GPU acceleration of large-scale filter-based spiking neural networks.
Slażyński, Leszek; Bohte, Sander
2012-01-01
The arrival of graphics processing unit (GPU) cards suitable for massively parallel computing promises affordable large-scale neural network simulation previously only available at supercomputing facilities. While the raw numbers suggest that GPUs may outperform CPUs by at least an order of magnitude, the challenge is to develop fine-grained parallel algorithms that fully exploit the particulars of GPUs. Computation in a neural network is inherently parallel and thus a natural match for GPU architectures: given the inputs, the internal state of each neuron can be updated in parallel. We show that for filter-based spiking neurons, like the Spike Response Model, the additive nature of the membrane potential dynamics enables additional update parallelism. This also reduces the accumulation of numerical errors when using single-precision computation, the native precision of GPUs. We further show that optimizing simulation algorithms and data structures for the GPU's architecture has a large pay-off: for example, matching iterative neural updating to the memory architecture of the GPU speeds up this simulation step by a factor of three to five. With such optimizations, we can simulate plausible spiking neural networks of up to 50,000 neurons in better than real time, processing over 35 million spiking events per second.
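The additive update that enables this parallelism can be sketched in vectorized form; this is our minimal illustration (exponential response kernel, invented sizes and weights), not the paper's GPU kernels:

```python
import numpy as np

# Minimal sketch (ours, not the paper's implementation) of the additive
# update that makes filter-based neurons parallel-friendly: with an
# exponential response kernel, one time step is a decay plus a weighted
# spike sum, so every neuron's state updates independently, i.e. the whole
# population advances in one vectorized (GPU-friendly) step.
rng = np.random.default_rng(0)
n = 1000
tau, dt, v_th = 20.0, 1.0, 1.0
decay = np.exp(-dt / tau)
W = rng.normal(0, 0.05, (n, n)).astype(np.float32)  # synaptic weights

v = np.zeros(n, dtype=np.float32)
spikes = rng.random(n) < 0.05                       # initial spike pattern
for _ in range(50):
    v = decay * v + W @ spikes.astype(np.float32)   # the additive update
    spikes = v > v_th                               # threshold crossing
    v[spikes] = 0.0                                 # reset after a spike
print(v.shape)
```

On a GPU the matrix-vector product and the element-wise decay map directly onto parallel kernels, which is the structural point the abstract makes.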
Mantzouridou, Fani Th; Naziri, Eleni
2017-03-01
This study deals with the scale-up of Blakeslea trispora culture from successful surface-aerated shake flasks to a dispersed-bubble aerated column reactor for lycopene production in the presence of the lycopene cyclase inhibitor 2-methyl imidazole. Controlling the initial volumetric oxygen mass transfer coefficient (kLa) via the airflow rate contributes to increased cell mass and lycopene accumulation. Inhibitor effectiveness appears to decrease under conditions of high cell mass. Optimization of crude soybean oil (CSO), airflow rate, and 2-methyl imidazole was arranged according to a central composite statistical design. The optimized levels of these factors were 110.5 g/L, 2.3 vvm, and 29.5 mg/L, respectively. At this optimum setting, the maximum lycopene yield (256 mg/L) was comparable to, or even higher than, those reported in shake flasks and stirred tank reactors. The use of 2-methyl imidazole at levels significantly lower than those reported for other inhibitors in the literature was successful in terms of process selectivity. CSO provides economic benefits to the process through its ability to stimulate lycopene synthesis, serving as an inexpensive carbon source and oxygen vector at the same time.
NASA Astrophysics Data System (ADS)
Karabacak, M.; Kurt, M.; Cinar, M.; Ayyappan, S.; Sudha, S.; Sundaraganesan, N.
In this work, an experimental and theoretical study of the molecular structure and vibrational spectra of 3-aminobenzophenone (3-ABP) is presented. The vibrational frequencies of the title compound were obtained theoretically by DFT/B3LYP calculations employing the standard 6-311++G(d,p) basis set for the optimized geometry and were compared with the Fourier transform infrared (FTIR) spectrum in the region of 400-4000 cm-1 and with the Fourier transform Raman spectrum in the region of 50-4000 cm-1. Complete vibrational assignments, analysis, and correlation of the fundamental modes of the title compound were carried out. The vibrational harmonic frequencies were scaled using a scale factor, yielding good agreement between the experimentally recorded and theoretically calculated values.
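The scaling step can be illustrated with the standard least-squares optimal scale factor, lambda = sum(w_i * v_i) / sum(w_i^2), mapping computed harmonic frequencies w onto observed fundamentals v; the frequencies below are invented for the sketch, not the 3-ABP data:

```python
import numpy as np

# Illustrative sketch (invented numbers): compute the least-squares optimal
# scale factor for a set of computed harmonic frequencies against observed
# fundamentals, and compare the residual RMSE before and after scaling.
harm = np.array([3180.0, 1650.0, 1180.0, 520.0])  # computed harmonics, cm^-1
expt = np.array([3050.0, 1600.0, 1150.0, 500.0])  # observed fundamentals, cm^-1

lam = np.sum(harm * expt) / np.sum(harm * harm)   # optimal scale factor
rmse_raw    = np.sqrt(np.mean((harm - expt)**2))
rmse_scaled = np.sqrt(np.mean((lam*harm - expt)**2))
print(round(lam, 4), round(rmse_raw, 1), round(rmse_scaled, 1))
```

A single multiplicative factor below unity absorbs most of the systematic overestimation of harmonic frequencies, which is why scaled DFT frequencies track experiment so closely.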
NASA Astrophysics Data System (ADS)
Chen, Xin; Chen, Wenchao; Wang, Xiaokai; Wang, Wei
2017-10-01
Low-frequency oscillatory ground-roll is one of the main coherent interference waves obscuring primary reflections in land seismic data, and suppressing it can substantially improve the signal-to-noise ratio. Conventional suppression methods, such as high-pass and various f-k filters, usually cause waveform distortion and loss of body wave information because of their simple cut-off operation. In this study, a sparsity-optimized separation of body waves and ground-roll, based on morphological component analysis theory, is realized by constructing dictionaries using tunable Q-factor wavelet transforms with different Q-factors. The separation model rests on the fact that the input seismic data are composed of low-oscillatory body waves and high-oscillatory ground-roll. Two waveform dictionaries, one with a low Q-factor and one with a high Q-factor, are shown to sparsely represent each component according to its morphology. Seismic data containing body waves and ground-roll can thus be nonlinearly decomposed into low-oscillatory and high-oscillatory components. This is a new noise attenuation approach driven by the oscillatory behaviour of the signal rather than by scale or frequency. We illustrate the method on both synthetic and field shot data. Compared with results from conventional high-pass and f-k filtering, the proposed method proves effective and advantageous in preserving the waveform and bandwidth of reflections.
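The two-dictionary separation can be illustrated with a toy morphological component analysis in which the identity basis and the DCT stand in for the paper's low-Q and high-Q wavelet dictionaries (our simplification, not the paper's transforms):

```python
import numpy as np
from scipy.fft import dct, idct

# Toy analogue of morphological component analysis: one component is sparse
# in the identity basis (impulsive), the other in the DCT basis (oscillatory).
# Alternating soft-thresholding with a decreasing threshold separates them.
n = 256
t = np.arange(n)
osc = np.cos(np.pi * (2*t + 1) * 40 / (2*n))    # a single DCT-II basis mode
imp = np.zeros(n)
imp[[30, 100, 180]] = 5.0                       # sparse impulses
x = osc + imp                                   # mixed "record"

def soft(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

s_imp = np.zeros(n)
s_osc = np.zeros(n)
for lam in np.linspace(3.0, 0.3, 30):           # decreasing threshold
    s_imp = soft(x - s_osc, lam)                # sparse in the identity basis
    c = dct(x - s_imp, norm='ortho')
    s_osc = idct(soft(c, lam), norm='ortho')    # sparse in the DCT basis

corr = np.corrcoef(s_osc, osc)[0, 1]
print(round(corr, 3))
```

Replacing the two toy dictionaries with low-Q and high-Q tunable wavelet transforms turns this loop into the oscillation-based body-wave/ground-roll separation the paper describes.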
Liter-scale production of uniform gas bubbles via parallelization of flow-focusing generators.
Jeong, Heon-Ho; Yadavali, Sagar; Issadore, David; Lee, Daeyeon
2017-07-25
Microscale gas bubbles have demonstrated enormous utility as versatile templates for the synthesis of functional materials in medicine, ultra-lightweight materials, and acoustic metamaterials. In many of these applications, high uniformity of the gas bubble size is critical to achieving the desired properties and functionality. While microfluidics has been used with success to create gas bubbles with a uniformity not achievable by conventional methods, its inherently low volumetric flow rate has limited its use in most applications. Parallelization of liquid droplet generators, in which many droplet generators are incorporated onto a single chip, has shown great promise for the large-scale production of monodisperse liquid emulsion droplets. However, scale-up of monodisperse gas bubble production using such an approach has remained a challenge because of possible coupling between parallel bubble generators and feedback effects from the downstream channels. In this report, we systematically investigate the effects of factors such as the viscosity of the continuous phase, the capillary number, and the gas pressure, as well as channel uniformity, on the size distribution of gas bubbles in a parallelized microfluidic device. We show that, by optimizing the flow conditions, a device with 400 parallel flow-focusing generators on a footprint of 5 × 5 cm² can be used to generate gas bubbles with a coefficient of variation of less than 5% at a production rate of approximately 1 L h⁻¹. Our results suggest that optimization of the flow conditions using a device with a small number (e.g., 8) of parallel FFGs can facilitate large-scale bubble production.
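The reported figures imply a simple per-generator budget, sketched below; the production rate and generator count are from the abstract, while the bubble diameter is our assumption for illustration:

```python
# Back-of-the-envelope check of the reported figures (rate and generator
# count from the abstract; bubble diameter is an assumed example value).
generators = 400
total_rate_l_per_h = 1.0
per_gen_ml_per_h = total_rate_l_per_h * 1000 / generators
print(per_gen_ml_per_h)   # 2.5 mL/h per flow-focusing generator

# A 5% coefficient of variation means the size std-dev is 5% of the mean:
mean_d_um = 50.0          # assumed bubble diameter
cv = 0.05
print(mean_d_um * cv)     # 2.5 um standard deviation
```

Seen this way, the scale-up keeps each generator in its well-behaved low-throughput regime while the parallel bank supplies the liter-scale total.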
NASA Astrophysics Data System (ADS)
Anikin, A. E.; Galevsky, G. V.; Nozdrin, E. V.; Rudneva, V. V.; Galevsky, S. G.
2016-09-01
The metallization of roll scale and sludge from gas treatment in BOF production was studied using brown-coal semicoke mined from the Berezovsky field of the Kansk-Achinsk Basin. A flow diagram for "cold" briquetting using a water-soluble binder is proposed. The reduction of iron from its oxide Fe2O3 with brown-coal semicoke was studied in a laboratory electric tube furnace under an argon atmosphere. Mathematical models describing the dependence of the degree of metallization on the process variables were developed. The optimal values of the technological factors and the essential characteristics of the resulting metallized products were determined.
Lee, Meonghun; Yoe, Hyun
2015-01-01
The environment promotes evolution. Evolutionary processes represent environmental adaptations over long time scales; evolution of crop genomes cannot be induced within the relatively short time span of a human generation. Extreme environmental conditions can accelerate evolution, but such conditions are often stress-inducing and disruptive. Artificial growth systems can be used to induce and select genomic variation by changing external environmental conditions, thus accelerating evolution. Using cloud computing and big-data analysis, we analyzed environmental stress factors for Pleurotus ostreatus by assessing, evaluating, and predicting information about the growth environment. Through the indexing of environmental stress, the growth environment can be precisely controlled, a capability that can be developed into a technology for improving crop quality and production. PMID:25874206
Simulation-optimization of large agro-hydrosystems using a decomposition approach
NASA Astrophysics Data System (ADS)
Schuetze, Niels; Grundmann, Jens
2014-05-01
In this contribution, a stochastic simulation-optimization framework for decision support in the optimal planning and operation of water supply for large agro-hydrosystems is presented. It is based on a decomposition solution strategy which allows for (i) the use of numerical process models together with efficient Monte Carlo simulations for reliable estimation of higher quantiles of the minimum agricultural water demand under full and deficit irrigation strategies at the small scale (farm level), and (ii) the use of the small-scale optimization results for solving water resources management problems at the regional scale. As a secondary result of several simulation-optimization runs at the smaller scale, stochastic crop-water production functions (SCWPFs) for different crops are derived, which can be used as a basic tool for assessing the impact of climate variability on the risk to potential yield. In addition, the microeconomic impacts of climate change and the vulnerability of the agro-ecological systems are evaluated. The developed methodology is demonstrated through its application to a real-world case study for the South Al-Batinah region in the Sultanate of Oman, where a coastal aquifer is affected by saltwater intrusion due to excessive groundwater withdrawal for irrigated agriculture.
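The small-scale Monte Carlo step can be sketched as quantile estimation over simulated seasonal demands; the demand distribution below is invented and stands in for the framework's numerical crop model:

```python
import numpy as np

# Sketch of the farm-level step (invented distribution): estimate a high
# quantile of minimum irrigation water demand from many Monte Carlo runs of
# a crop model, here stood in for by a lognormal seasonal-demand sample.
rng = np.random.default_rng(42)
demand = rng.lognormal(mean=np.log(8000), sigma=0.25, size=20000)  # m^3/ha

q90 = np.quantile(demand, 0.90)   # design value covering 90% of seasons
print(round(q90))
```

Such quantiles, tabulated over crops and irrigation strategies, are exactly what the regional-scale management problem then consumes, which is the point of the decomposition.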
Choi, Insub; Kim, JunHee; Kim, Donghyun
2016-12-08
Existing vision-based displacement sensors (VDSs) extract displacement data from changes in the movement of a target identified within the image using natural or artificial structural markers. Here, a target-less vision-based displacement sensor (TVDS) is proposed that can extract displacement data without targets, using feature points in the image of the structure instead. The TVDS extracts and tracks these feature points without a target through image convex hull optimization, which adjusts and optimizes the threshold values so that the same convex hull is obtained in every image frame, with the center of the convex hull serving as the feature point. In addition, the pixel coordinates of the feature point can be converted to physical coordinates through a scaling factor map calculated from the distance, angle, and focal length between the camera and target. The accuracy of the proposed scaling factor map was verified through an experiment in which the diameter of a circular marker was estimated. A white-noise excitation test was conducted, and the reliability of the displacement data obtained from the TVDS was analyzed by comparing them with displacement data of the structure measured with a laser displacement sensor (LDS). The dynamic characteristics of the structure, such as the mode shape and natural frequency, were extracted using the obtained displacement data and compared with numerical analysis results. The TVDS yielded highly reliable displacement data and highly accurate dynamic characteristics, such as the natural frequency and mode shape of the structure. Because the proposed TVDS can easily extract displacement data even without artificial or natural markers, it has the advantage of extracting displacement data from any portion of the structure in the image.
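The pixel-to-physical conversion can be sketched with the fronto-parallel pinhole relation; the camera parameters below are assumptions, and the paper's scaling-factor map additionally accounts for viewing angle:

```python
import numpy as np

# Sketch of the pixel-to-physical conversion idea (pinhole-camera scaling in
# the fronto-parallel case; the paper's full scaling-factor map also folds in
# the viewing angle). All parameter values here are assumed examples.
def scale_factor(distance_mm, focal_mm, pixel_pitch_mm):
    """Physical length per pixel for a target at `distance_mm`."""
    return pixel_pitch_mm * distance_mm / focal_mm

# Example: 5 m stand-off, 50 mm lens, 5 um pixels
s = scale_factor(5000.0, 50.0, 0.005)
disp_px = np.array([2.0, -1.5, 0.4])        # tracked feature-point motion
disp_mm = disp_px * s                       # physical displacement
print(s, disp_mm)
```

Because the factor scales linearly with stand-off distance, the same sub-pixel tracking accuracy translates into coarser physical resolution the farther the camera sits from the structure.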
DOT National Transportation Integrated Search
2006-12-01
Over the last several years, researchers at the University of Arizona's ATLAS Center have developed an adaptive ramp metering system referred to as MILOS (Multi-Objective, Integrated, Large-Scale, Optimized System). The goal of this project is ...
Automatically Finding the Control Variables for Complex System Behavior
NASA Technical Reports Server (NTRS)
Gay, Gregory; Menzies, Tim; Davies, Misty; Gundy-Burlet, Karen
2010-01-01
Testing large-scale systems is expensive in terms of both time and money. Running simulations early in the process is a proven method of finding the design faults likely to lead to critical system failures, but determining the exact cause of those errors is still time-consuming and requires access to a limited number of domain experts. It is desirable to find an automated method that explores the large number of combinations and is able to isolate likely fault points. Treatment learning is a subset of minimal contrast-set learning that, rather than classifying data into distinct categories, focuses on finding the unique factors that lead to a particular classification. That is, it finds the smallest change to the data that causes the largest change in the class distribution. These treatments, when imposed, are able to identify the factors most likely to cause a mission-critical failure. The goal of this research is to comparatively assess treatment learning against state-of-the-art numerical optimization techniques. To achieve this, this paper benchmarks the TAR3 and TAR4.1 treatment learners against optimization techniques across three complex systems, including two projects from the Robust Software Engineering (RSE) group within the National Aeronautics and Space Administration (NASA) Ames Research Center. The results clearly show that treatment learning is both faster and more accurate than traditional optimization methods.
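A minimal version of the treatment-scoring idea can be sketched as follows; the attribute data are invented, and real treatment learners like TAR3 add support thresholds and multi-attribute treatments:

```python
from collections import Counter

# Minimal illustration (ours, not TAR3 itself) of the treatment-learning
# idea: score each attribute=value pair by how much imposing it shifts the
# class distribution toward the failure class, and report the best one.
rows = [
    # (attributes, class)
    ({"thrust": "high", "fuel": "low"},  "fail"),
    ({"thrust": "high", "fuel": "high"}, "fail"),
    ({"thrust": "low",  "fuel": "low"},  "ok"),
    ({"thrust": "low",  "fuel": "high"}, "ok"),
    ({"thrust": "high", "fuel": "low"},  "fail"),
    ({"thrust": "low",  "fuel": "low"},  "ok"),
]

base = Counter(c for _, c in rows)["fail"] / len(rows)   # baseline fail rate
best, best_lift = None, 0.0
for attr in rows[0][0]:
    for val in {r[attr] for r, _ in rows}:
        subset = [c for r, c in rows if r[attr] == val]
        lift = subset.count("fail") / len(subset) - base  # shift in fail rate
        if lift > best_lift:
            best, best_lift = (attr, val), lift
print(best, round(best_lift, 2))
```

The reported pair is the single "treatment" whose imposition most increases the failure rate, i.e. the most suspicious control setting, which is the kind of output the paper benchmarks against numerical optimizers.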
NASA Astrophysics Data System (ADS)
Ishijima, K.; Takigawa, M.; Sudo, K.; Toyoda, S.; Yoshida, N.; Röckmann, T.; Kaiser, J.; Aoki, S.; Morimoto, S.; Sugawara, S.; Nakazawa, T.
2015-07-01
This paper presents the development of an atmospheric N2O isotopocule model based on a chemistry-coupled atmospheric general circulation model (ACTM). We also describe a simple method to optimize the model and present its use in estimating the isotopic signatures of surface sources at the hemispheric scale. Data obtained from ground-based observations, measurements of firn air, and balloon and aircraft flights were used to optimize the long-term trends, interhemispheric gradients, and photolytic fractionation, respectively, in the model. This optimization successfully reproduced realistic spatial and temporal variations of atmospheric N2O isotopocules throughout the atmosphere from the surface to the stratosphere. The very small gradients associated with vertical profiles through the troposphere and the latitudinal and vertical distributions within each hemisphere were also reasonably simulated. The results of the isotopic characterization of the global total sources were generally consistent with previous one-box model estimates, indicating that the observed atmospheric trend is the dominant factor controlling the source isotopic signature. However, hemispheric estimates were different from those generated by a previous two-box model study, mainly due to the model accounting for the interhemispheric transport and latitudinal and vertical distributions of tropospheric N2O isotopocules. Comparisons of time series of atmospheric N2O isotopocule ratios between our model and observational data from several laboratories revealed the need for a more systematic and elaborate intercalibration of the standard scales used in N2O isotopic measurements in order to capture a more complete and precise picture of the temporal and spatial variations in atmospheric N2O isotopocule ratios. This study highlights the possibility that inverse estimation of surface N2O fluxes, including the isotopic information as additional constraints, could be realized.
NASA Astrophysics Data System (ADS)
Ishijima, K.; Takigawa, M.; Sudo, K.; Toyoda, S.; Yoshida, N.; Röckmann, T.; Kaiser, J.; Aoki, S.; Morimoto, S.; Sugawara, S.; Nakazawa, T.
2015-12-01
This work presents the development of an atmospheric N2O isotopocule model based on a chemistry-coupled atmospheric general circulation model (ACTM). We also describe a simple method to optimize the model and present its use in estimating the isotopic signatures of surface sources at the hemispheric scale. Data obtained from ground-based observations, measurements of firn air, and balloon and aircraft flights were used to optimize the long-term trends, interhemispheric gradients, and photolytic fractionation, respectively, in the model. This optimization successfully reproduced realistic spatial and temporal variations of atmospheric N2O isotopocules throughout the atmosphere from the surface to the stratosphere. The very small gradients associated with vertical profiles through the troposphere and the latitudinal and vertical distributions within each hemisphere were also reasonably simulated. The results of the isotopic characterization of the global total sources were generally consistent with previous one-box model estimates, indicating that the observed atmospheric trend is the dominant factor controlling the source isotopic signature. However, hemispheric estimates were different from those generated by a previous two-box model study, mainly due to the model accounting for the interhemispheric transport and latitudinal and vertical distributions of tropospheric N2O isotopocules. Comparisons of time series of atmospheric N2O isotopocule ratios between our model and observational data from several laboratories revealed the need for a more systematic and elaborate intercalibration of the standard scales used in N2O isotopic measurements in order to capture a more complete and precise picture of the temporal and spatial variations in atmospheric N2O isotopocule ratios. This study highlights the possibility that inverse estimation of surface N2O fluxes, including the isotopic information as additional constraints, could be realized.
Optimizing fusion PIC code performance at scale on Cori Phase 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koskela, T. S.; Deslippe, J.
In this paper we present the results of optimizing the performance of the gyrokinetic full-f fusion PIC code XGC1 on the Cori Phase Two Knights Landing system. The code has undergone substantial development to enable the use of vector instructions in its most expensive kernels within the NERSC Exascale Science Applications Program. We study the single-node performance of the code on an absolute scale using the roofline methodology to guide optimization efforts. We have obtained 2x speedups in single-node performance by enabling vectorization and performing memory layout optimizations. On multiple nodes, the code is shown to scale well up to 4000 nodes, nearly half the size of the machine. We discuss some communication bottlenecks that were identified and resolved during the work.
Xu, Jiuping; Feng, Cuiying
2014-01-01
This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method.
Xu, Jiuping
2014-01-01
This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method. PMID:24550708
Lee, Byeong-Ju; Zhou, Yaoyao; Lee, Jae Soung; Shin, Byeung Kon; Seo, Jeong-Ah; Lee, Doyup; Kim, Young-Suk
2018-01-01
Determining the origin of soybeans has become an important issue since the inclusion of this information in the labeling of agricultural food products became mandatory in South Korea in 2017. This study was carried out to construct a prediction model for discriminating Chinese and Korean soybeans using Fourier-transform infrared (FT-IR) spectroscopy and multivariate statistical analysis. The optimal prediction models for discriminating soybean samples were obtained by selecting appropriate scaling methods, normalization methods, variable influence on projection (VIP) cutoff values, and wave-number regions. The factors for constructing the optimal partial-least-squares regression (PLSR) prediction model were using second derivatives, vector normalization, unit variance scaling, and the 4000–400 cm⁻¹ region (excluding water vapor and carbon dioxide). The PLSR model for discriminating Chinese and Korean soybean samples had the best predictability when a VIP cutoff value was not applied. When Chinese soybean samples were identified, a PLSR model with the lowest root-mean-square error of prediction was obtained using a VIP cutoff value of 1.5. The optimal PLSR prediction model for discriminating Korean soybean samples was also obtained using a VIP cutoff value of 1.5. This is the first study that has combined FT-IR spectroscopy with normalization methods, VIP cutoff values, and selected wave-number regions for discriminating Chinese and Korean soybeans. PMID:29689113
The VCOP Scale: A Measure of Overprotection in Parents of Physically Vulnerable Children.
ERIC Educational Resources Information Center
Wright, Logan; And Others
1993-01-01
Developed Vulnerable Child/Overprotecting Parent Scale to measure overprotecting versus optimal developmental stimulation tendencies for parents of physically vulnerable children. Items were administered to parents whose parenting techniques had been rated as either highly overprotective or as optimal by group of physicians and other…
USDA-ARS?s Scientific Manuscript database
The performance of wood-based denitrifying bioreactors to treat high-nitrate wastewaters from aquaculture systems has not previously been demonstrated. Four pilot-scale woodchip bioreactors (approximately 1:10 scale) were constructed and operated for 268 d to determine the optimal range of design hy...
Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy
NASA Astrophysics Data System (ADS)
Zhu, Changsheng; Liu, Jieqiong; Zhu, Mingfang; Feng, Li
2018-03-01
In dendritic growth simulation, computational efficiency and problem scale strongly affect the simulation efficiency of the three-dimensional phase-field model. Thus, seeking a high-performance calculation method to improve computational efficiency and expand the problem scale is of great significance to research on the microstructure of materials. A high-performance calculation method based on an MPI+CUDA hybrid programming model is introduced. Multiple GPUs are used to implement quantitative numerical simulations of a three-dimensional phase-field model in a binary alloy under the condition of coupled multi-physical processes. The acceleration effect of different GPU nodes on different calculation scales is explored. On the foundation of the multi-GPU calculation model, two optimization schemes, non-blocking communication optimization and overlap of MPI and GPU computing, are proposed, and their results are compared with the basic multi-GPU model. The calculation results show that the multi-GPU calculation model clearly improves the computational efficiency of the three-dimensional phase field, achieving a 13-fold speedup over a single GPU, and the problem scale has been expanded to 8193. Both optimization schemes are shown to be feasible, with the overlap of MPI and GPU computing performing better, at 1.7 times the basic multi-GPU model when 21 GPUs are used.
Optimal knockout strategies in genome-scale metabolic networks using particle swarm optimization.
Nair, Govind; Jungreuthmayer, Christian; Zanghellini, Jürgen
2017-02-01
Knockout strategies, particularly the concept of constrained minimal cut sets (cMCSs), are an important part of the arsenal of tools used in manipulating metabolic networks. Given a specific design, cMCSs can be calculated even in genome-scale networks. We would however like to find not only the optimal intervention strategy for a given design but the best possible design too. Our solution (PSOMCS) is to use particle swarm optimization (PSO) along with the direct calculation of cMCSs from the stoichiometric matrix to obtain optimal designs satisfying multiple objectives. To illustrate the working of PSOMCS, we apply it to a toy network. Next we show its superiority by comparing its performance against other comparable methods on a medium sized E. coli core metabolic network. PSOMCS not only finds solutions comparable to previously published results but also it is orders of magnitude faster. Finally, we use PSOMCS to predict knockouts satisfying multiple objectives in a genome-scale metabolic model of E. coli and compare it with OptKnock and RobustKnock. PSOMCS finds competitive knockout strategies and designs compared to other current methods and is in some cases significantly faster. It can be used in identifying knockouts which will force optimal desired behaviors in large and genome scale metabolic networks. It will be even more useful as larger metabolic models of industrially relevant organisms become available.
Superfluidity in Strongly Interacting Fermi Systems with Applications to Neutron Stars
NASA Astrophysics Data System (ADS)
Khodel, Vladimir
The rotational dynamics and cooling history of neutron stars are influenced by the superfluid properties of nucleonic matter. In this thesis a novel separation technique is applied to the analysis of the gap equation for neutron matter. It is shown that the problem can be recast into two tasks: solving a simple system of linear integral equations for the shape functions of various components of the gap function, and solving a system of non-linear algebraic equations for their scale factors. Important simplifications result from the fact that the ratio of the gap amplitude to the Fermi energy provides a small parameter in this problem. The relationship between the analytic structure of the shape functions and the density interval for the existence of the superfluid gap is discussed. It is shown that in the 1S0 channel the position of the first zero of the shape function gives an estimate of the upper critical density. The relation between the resonant behavior of the two-neutron interaction in this channel and the density dependence of the gap is established. The behavior of the gap in the limits of low and high densities is analyzed. Various approaches to the calculation of the scale factors are considered: model cases, angular averaging, and perturbation theory. An optimization-based approach is proposed. The shape functions and scale factors for the Argonne v14 and v18 potentials are determined in the singlet and triplet channels. Dependence of the solution on the value of the effective mass and medium polarization is studied.
Development and validation of measures to assess prevention and control of AMR in hospitals.
Flanagan, Mindy; Ramanujam, Rangaraj; Sutherland, Jason; Vaughn, Thomas; Diekema, Daniel; Doebbeling, Bradley N
2007-06-01
The rapid spread of antimicrobial resistance (AMR) in US hospitals poses serious quality and safety problems. Expert panels, identifying strategies for optimizing antibiotic use and preventing AMR spread, have recommended that hospitals undertake efforts to implement specific evidence-based practices. To develop and validate a measurement scale for assessing hospitals' efforts to implement recommended AMR prevention and control measures, surveys were mailed to infection control professionals in a national sample of 670 US hospitals stratified by geographic region, bed size, teaching status, and VA affiliation. Four hundred forty-eight infection control professionals participated (67% response rate). Survey items measured implementation of guideline recommendations, practices for AMR monitoring and feedback, AMR-related outcomes (methicillin-resistant Staphylococcus aureus [MRSA] prevalence and outbreaks), and organizational features. "Derivation" and "validation" samples were randomly selected. Exploratory factor analysis was performed to identify factors underlying AMR prevention and control efforts. Multiple methods were used for validation. We identified 4 empirically distinct factors in AMR prevention and control: (1) practices for antimicrobial prescription/use, (2) information/resources for AMR control, (3) practices for isolating infected patients, and (4) organizational support for infection control policies. The Prevention and Control of Antimicrobial Resistance scale was reliable and had content and construct validity. MRSA prevalence was significantly lower in hospitals with higher resource/information availability and broader organizational support. The Prevention and Control of Antimicrobial Resistance scale offers a simple yet discriminating assessment of AMR prevention and control efforts. Use should complement assessment methods based exclusively on AMR outcomes.
Shah, Nisarg J.; Hyder, Md. Nasim; Quadir, Mohiuddin A.; Dorval Courchesne, Noémie-Manuelle; Seeherman, Howard J.; Nevins, Myron; Spector, Myron; Hammond, Paula T.
2014-01-01
Traumatic wounds and congenital defects that require large-scale bone tissue repair have few successful clinical therapies, particularly for craniomaxillofacial defects. Although bioactive materials have demonstrated alternative approaches to tissue repair, an optimized materials system for reproducible, safe, and targeted repair remains elusive. We hypothesized that controlled, rapid bone formation in large, critical-size defects could be induced by simultaneously delivering multiple biological growth factors to the site of the wound. Here, we report an approach for bone repair using a polyelectrolyte multilayer coating carrying as little as 200 ng of bone morphogenetic protein-2 and platelet-derived growth factor-BB that were eluted over readily adapted time scales to induce rapid bone repair. Based on electrostatic interactions between the polymer multilayers and growth factors alone, we sustained mitogenic and osteogenic signals with these growth factors in an easily tunable and controlled manner to direct endogenous cell function. To prove the role of this adaptive release system, we applied the polyelectrolyte coating on a well-studied biodegradable poly(lactic-co-glycolic acid) support membrane. The released growth factors directed cellular processes to induce bone repair in a critical-size rat calvaria model. The released growth factors promoted local bone formation that bridged a critical-size defect in the calvaria as early as 2 wk after implantation. Mature, mechanically competent bone regenerated the native calvaria form. Such an approach could be clinically useful and has significant benefits as a synthetic, off-the-shelf, cell-free option for bone tissue repair and restoration. PMID:25136093
Black, Maureen M.; Saavedra, Jose M.
2016-01-01
Interventions targeting parenting-focused modifiable factors to prevent obesity and promote healthy growth in the first 1000 days of life are needed. Scale-up of interventions to global populations is necessary to reverse trends in weight status among infants and toddlers, and large-scale dissemination will require understanding of effective strategies. Utilizing nutrition education theories, this paper describes the design of a digital-based nutrition guidance system targeted to first-time mothers to prevent obesity during the first two years. The multicomponent system consists of scientifically substantiated content, tools, and telephone-based professional support delivered in an anticipatory and sequential manner via the internet, email, and text messages, focusing on educational modules addressing the modifiable factors associated with childhood obesity. Digital delivery formats leverage consumer media trends and provide the opportunity for scale-up, unavailable to previous interventions reliant on resource-heavy clinic and home-based counseling. Designed initially for use in the United States, this system's core features are applicable to all contexts and constitute an approach fostering healthy growth, not just obesity prevention. The multicomponent features, combined with a global concern for optimal growth and positive trends in mobile internet use, represent this system's future potential to affect change in nutrition practice in developing countries. PMID:27635257
Dimensional and categorical approaches to hypochondriasis.
Hiller, W; Rief, W; Fichter, M M
2002-05-01
The DSM-IV definition of hypochondriasis is contrasted with hypochondriacal dimensions as provided by the Whiteley Index (WI) and Illness Attitude Scales (IAS). Exploratory factor analysis was conducted on self-report data from 570 patients with mental and psychophysiological disorders. Of these, 319 were additionally diagnosed according to DSM-IV by structured interviews. The three 'classic' factors of the WI labelled disease phobia, somatic symptoms and disease conviction were confirmed. The IAS consisted of two dimensions indicating health anxiety and illness behaviour. The overall scores of both instruments were highly correlated (0.80). Optimal cut-off points for case identification yielded sensitivity/specificity rates of 71/80% (WI) and 72/79% (IAS). The IAS was superior to the WI when patients with hypochondriacal disorder were to be discriminated from non-hypochondriacal somatizers. Largest group differences were found for scales related to affective components (health anxieties), smallest for illness behaviours. Affective components of hypochondriasis explained more variance of diagnostic group membership than somatization symptoms. The subscales of disease phobia (WI) and health anxiety (IAS) were most sensitive to treatment-related changes. The self-rating scales are valid for screening, case definition and dimensional assessment of hypochondriacal disorder, including the differentiation between hypochondriasis and somatization. The existence of distinguishable affective and cognitive components was confirmed.
Villarroel, Mario; Castro, Ruth; Junod, Julio
2003-06-01
The goal of the present study was to develop an optimized low-calorie damask marmalade formula, applying the Taguchi methodology to improve the quality of this product. The choice of this methodology rests on the fact that, under real-life conditions, the result of an experiment frequently depends on the influence of several variables; one expedient way to address this is to use factorial designs. The influence of acid, thickener, sweetener, and aroma additives, as well as cooking time, and possible interactions among some of them, were studied to find the combination of these factors that optimizes the sensory quality of an experimental formulation of dietetic damask marmalade. An orthogonal array L8 (2(7)) was applied, and level average analysis was carried out according to the Taguchi methodology to determine suitable working levels of the previously chosen design factors to achieve the desired product quality. A trained sensory panel analyzed the marmalade samples using a composite scoring test with a descriptive quantitative scale ranging from 1 = bad to 5 = good. The design factors sugar/aspartame, pectin, and damask aroma had a significant effect (p < 0.05) on the sensory quality of the marmalade, with an 82% contribution to the response. The optimal combination turned out to be: citric acid 0.2%; pectin 1%; 30 g sugar/16 mg aspartame/100 g; damask aroma 0.5 ml/100 g; cooking time 5 minutes. Regarding chemical composition, the most important results were a decrease in carbohydrate content compared with traditional marmalade, with a 56% reduction in caloric value, and a dietary fiber content greater than that of similar commercial products. Storage stability assays were carried out on marmalade samples held at different temperatures in plastic bags of different densities. No perceptible sensory, microbiological, or chemical changes were detected after 90 days of storage under controlled conditions.
López-de-Uralde-Villanueva, I; Gil-Martínez, A; Candelas-Fernández, P; de Andrés-Ares, J; Beltrán-Alacreu, H; La Touche, R
2016-12-08
The self-administered Leeds Assessment of Neuropathic Symptoms and Signs (S-LANSS) scale is a tool designed to identify patients with pain with neuropathic features. To assess the validity and reliability of the Spanish-language version of the S-LANSS scale. Our study included a total of 182 patients with chronic pain to assess the convergent and discriminant validity of the S-LANSS; the sample was increased to 321 patients to evaluate construct validity and reliability. The validated Spanish-language version of the ID-Pain questionnaire was used as the criterion variable. All participants completed the ID-Pain, the S-LANSS, and the Numerical Rating Scale for pain. Discriminant validity was evaluated by analysing sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC). Construct validity was assessed with factor analysis and by comparing the odds ratio of each S-LANSS item to the total score. Convergent validity and reliability were evaluated with Pearson's r and Cronbach's alpha, respectively. The optimal cut-off point for S-LANSS was ≥12 points (AUC=.89; sensitivity=88.7; specificity=76.6). Factor analysis yielded one factor; furthermore, all items contributed significantly to the positive total score on the S-LANSS (P<.05). The S-LANSS showed a significant correlation with ID-Pain (r=.734, α=.71). The Spanish-language version of the S-LANSS is valid and reliable for identifying patients with chronic pain with neuropathic features. Copyright © 2016 Sociedad Española de Neurología. Publicado por Elsevier España, S.L.U. All rights reserved.
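The cut-off analysis described above (choosing an optimal threshold from sensitivity, specificity, and the ROC curve) can be sketched with one common criterion, Youden's J = sensitivity + specificity − 1. This is a minimal illustration on synthetic scores, not the S-LANSS data; the function name and the values are hypothetical.

```python
# Sketch: pick the cut-off that maximizes Youden's J on synthetic scale scores.
# "scores" are questionnaire totals; "labels" mark criterion-positive patients.

def youden_optimal_cutoff(scores, labels):
    """Return (threshold, J) where 'score >= threshold' is classed positive."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        j = tp / pos + tn / neg - 1.0   # sensitivity + specificity - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

scores = [3, 5, 7, 9, 11, 12, 14, 15]   # synthetic totals
labels = [0, 0, 0, 1, 0, 1, 1, 1]       # synthetic criterion status
t, j = youden_optimal_cutoff(scores, labels)   # t = 9, J = 0.75 here
```

With real data one would also report the AUC; the loop above only selects the operating point.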
Jiang, Chun-Sheng; Yang, Mengjin; Zhou, Yuanyuan; To, Bobby; Nanayakkara, Sanjini U.; Luther, Joseph M.; Zhou, Weilie; Berry, Joseph J.; van de Lagemaat, Jao; Padture, Nitin P.; Zhu, Kai; Al-Jassim, Mowafak M.
2015-01-01
Organometal–halide perovskite solar cells have greatly improved in just a few years to a power conversion efficiency exceeding 20%. This technology shows unprecedented promise for terawatt-scale deployment of solar energy because of its low-cost, solution-based processing and earth-abundant materials. We have studied charge separation and transport in perovskite solar cells—which are the fundamental mechanisms of device operation and critical factors for power output—by determining the junction structure across the device using the nanoelectrical characterization technique of Kelvin probe force microscopy. The distribution of electrical potential across both planar and porous devices demonstrates p–n junction structure at the TiO2/perovskite interfaces and minority-carrier diffusion/drift operation of the devices, rather than the operation mechanism of either an excitonic cell or a p-i-n structure. Combining the potential profiling results with solar cell performance parameters measured on optimized and thickened devices, we find that carrier mobility is a main factor that needs to be improved for further gains in efficiency of the perovskite solar cells. PMID:26411597
Jiang, Chun-Sheng; Yang, Mengjin; Zhou, Yuanyuan; ...
2015-09-28
Organometal–halide perovskite solar cells have greatly improved in just a few years to a power conversion efficiency exceeding 20%. This technology shows unprecedented promise for terawatt-scale deployment of solar energy because of its low-cost, solution-based processing and earth-abundant materials. We have studied charge separation and transport in perovskite solar cells—which are the fundamental mechanisms of device operation and critical factors for power output—by determining the junction structure across the device using the nanoelectrical characterization technique of Kelvin probe force microscopy. Moreover, the distribution of electrical potential across both planar and porous devices demonstrates p–n junction structure at the TiO2/perovskite interfaces and minority-carrier diffusion/drift operation of the devices, rather than the operation mechanism of either an excitonic cell or a p-i-n structure. When we combine the potential profiling results with solar cell performance parameters measured on optimized and thickened devices, we find that carrier mobility is a main factor that needs to be improved for further gains in efficiency of the perovskite solar cells.
Multi-sensor fusion of Landsat 8 thermal infrared (TIR) and panchromatic (PAN) images.
Jung, Hyung-Sup; Park, Sung-Whan
2014-12-18
Data fusion is defined as the combination of data from multiple sensors such that the resulting information is better than would be possible when the sensors are used individually. The multi-sensor fusion of panchromatic (PAN) and thermal infrared (TIR) images is a good example of this data fusion. While a PAN image has higher spatial resolution, a TIR one has lower spatial resolution. In this study, we have proposed an efficient method to fuse Landsat 8 PAN and TIR images using an optimal scaling factor in order to control the trade-off between the spatial details and the thermal information. We have compared the fused images created from different scaling factors and then tested the performance of the proposed method at urban and rural test areas. The test results show that the proposed method merges the spatial resolution of PAN image and the temperature information of TIR image efficiently. The proposed method may be applied to detect lava flows of volcanic activity, radioactive exposure of nuclear power plants, and surface temperature change with respect to land-use change.
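The scaling-factor trade-off described above (spatial detail from PAN versus thermal fidelity from TIR) can be illustrated with a 1-D toy: inject the high-frequency part of a "PAN" signal into a "TIR" signal, weighted by a factor k. This is a sketch under assumed definitions (a 3-point moving average stands in for the low-pass/upsampling step), not the paper's actual fusion algorithm.

```python
# Toy detail-injection fusion: fused = TIR + k * (PAN - lowpass(PAN)).
# k = 0 keeps pure thermal data; larger k injects more PAN spatial detail.

def smooth(x):
    """3-point moving average with edge replication (assumed low-pass filter)."""
    padded = [x[0]] + list(x) + [x[-1]]
    return [(padded[i - 1] + padded[i] + padded[i + 1]) / 3.0
            for i in range(1, len(x) + 1)]

def fuse(pan, tir, k):
    """Add the scaled high-frequency PAN component to the TIR signal."""
    low = smooth(pan)
    return [t + k * (p - l) for t, p, l in zip(tir, pan, low)]

pan = [10.0, 10.0, 30.0, 10.0, 10.0]   # sharp PAN edge (synthetic)
tir = [20.0, 20.0, 20.0, 20.0, 20.0]   # flat thermal field (synthetic)
assert fuse(pan, tir, 0.0) == tir      # zero scaling factor preserves TIR
fused = fuse(pan, tir, 1.0)            # full detail injection sharpens the edge
```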
Synchronicity in predictive modelling: a new view of data assimilation
NASA Astrophysics Data System (ADS)
Duane, G. S.; Tribbia, J. J.; Weiss, J. B.
2006-11-01
The problem of data assimilation can be viewed as one of synchronizing two dynamical systems, one representing "truth" and the other representing "model", with a unidirectional flow of information between the two. Synchronization of truth and model defines a general view of data assimilation, as machine perception, that is reminiscent of the Jung-Pauli notion of synchronicity between matter and mind. The dynamical systems paradigm of the synchronization of a pair of loosely coupled chaotic systems is expected to be useful because quasi-2D geophysical fluid models have been shown to synchronize when only medium-scale modes are coupled. The synchronization approach is equivalent to standard approaches based on least-squares optimization, including Kalman filtering, except in highly non-linear regions of state space where observational noise links regimes with qualitatively different dynamics. The synchronization approach is used to calculate covariance inflation factors from parameters describing the bimodality of a one-dimensional system. The factors agree in overall magnitude with those used in operational practice on an ad hoc basis. The calculation is robust against the introduction of stochastic model error arising from unresolved scales.
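Multiplicative covariance inflation, the quantity the abstract's calculated factors feed into, can be shown in a few lines: ensemble perturbations about the mean are scaled by a factor r before the analysis step. The ensemble values and r below are hypothetical, and the paper's derivation of r from bimodality parameters is not reproduced.

```python
# Sketch: multiplicative covariance inflation for an ensemble of states.
# Scaling perturbations by r preserves the mean and multiplies the spread
# (and hence the sample covariance, by r**2).

def inflate(ensemble, r):
    """Scale perturbations about the ensemble mean by inflation factor r."""
    mean = sum(ensemble) / len(ensemble)
    return [mean + r * (x - mean) for x in ensemble]

ens = [1.0, 2.0, 3.0]          # hypothetical 3-member ensemble
inflated = inflate(ens, 1.1)   # mean stays 2.0; spread grows by 10%
```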
Li, Jun; Shen, Jinni; Ma, Zuju; Wu, Kechen
2017-08-21
The thermoelectric conversion efficiency of a material relies on a dimensionless parameter (ZT = S²σT/κ). Enhancing the ZT value is a great challenge, largely because the relevant transport factors of most bulk materials are interdependent, making it very difficult to optimize these parameters simultaneously. In this report, a negative correlation between the power factor and the thermal conductivity of nano-scaled SnS2 multilayers is predicted by high-level first-principles computations combined with Boltzmann transport theory. By diminishing the thickness of the SnS2 nanosheet to about 3 L, S and σ along the a direction simultaneously increase whereas κ decreases, achieving a high ZT value of 1.87 at 800 K. The microscopic mechanisms for this unusual negative correlation in a nano-scaled two-dimensional (2D) material are elucidated and attributed to the quantum confinement effect. The results may open a way to high-ZT thermoelectric nano-devices for practical thermoelectric applications.
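The figure of merit ZT = S²σT/κ quoted above is straightforward to evaluate; the sketch below does so for hypothetical transport values (the material parameters are illustrative, not those of SnS2).

```python
# Sketch: evaluating the thermoelectric figure of merit ZT = S^2 * sigma * T / kappa.
# All numeric inputs below are hypothetical, chosen only to show the units.

def figure_of_merit(seebeck_v_per_k, sigma_s_per_m, kappa_w_per_mk, temp_k):
    """Dimensionless thermoelectric figure of merit ZT."""
    power_factor = seebeck_v_per_k ** 2 * sigma_s_per_m   # S^2 * sigma, in W/(m*K^2)
    return power_factor * temp_k / kappa_w_per_mk

zt = figure_of_merit(seebeck_v_per_k=250e-6,   # 250 uV/K (hypothetical)
                     sigma_s_per_m=1.0e5,      # 1000 S/cm (hypothetical)
                     kappa_w_per_mk=2.0,       # 2 W/(m*K) (hypothetical)
                     temp_k=800.0)
print(round(zt, 2))   # → 2.5
```

The negative correlation discussed in the abstract corresponds to raising the numerator (S²σ) while lowering the denominator (κ) at fixed T.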
Vazquez-Anderson, Jorge; Mihailovic, Mia K.; Baldridge, Kevin C.; Reyes, Kristofer G.; Haning, Katie; Cho, Seung Hee; Amador, Paul; Powell, Warren B.
2017-01-01
Current approaches to design efficient antisense RNAs (asRNAs) rely primarily on a thermodynamic understanding of RNA–RNA interactions. However, these approaches depend on structure predictions and have limited accuracy, arguably due to overlooking important cellular environment factors. In this work, we develop a biophysical model to describe asRNA–RNA hybridization that incorporates in vivo factors using large-scale experimental hybridization data for three model RNAs: a group I intron, CsrB and a tRNA. A unique element of our model is the estimation of the availability of the target region to interact with a given asRNA using a differential entropic consideration of suboptimal structures. We showcase the utility of this model by evaluating its prediction capabilities in four additional RNAs: a group II intron, Spinach II, the 2-MS2 binding domain and the glgC 5′ UTR. Additionally, we demonstrate the applicability of this approach to other bacterial species by predicting sRNA–mRNA binding regions in two newly discovered, though uncharacterized, regulatory RNAs. PMID:28334800
Yamamura, S; Momose, Y
2001-01-16
A pattern-fitting procedure for quantitative analysis of crystalline pharmaceuticals in solid dosage forms using X-ray powder diffraction data is described. This method is based on a procedure for pattern-fitting in crystal structure refinement: observed X-ray scattering intensities are fitted to analytical expressions that include several fitting parameters, i.e. the scale factor, peak positions, peak widths and degree of preferred orientation of the crystallites. All fitting parameters were optimized by a non-linear least-squares procedure, and the weight fraction of each component was then determined from the optimized scale factors. In the present study, well-crystallized binary systems, zinc oxide-zinc sulfide (ZnO-ZnS) and salicylic acid-benzoic acid (SA-BA), were used as the samples. In the analysis of the ZnO-ZnS system, the weight fraction of ZnO or ZnS could be determined quantitatively in the range of 5-95% for both powders and tablets. In the analysis of the SA-BA system, the weight fraction of SA or BA could be determined quantitatively in the range of 20-80% for both powders and tablets. Quantitative analysis applying this pattern-fitting procedure showed better reproducibility than other X-ray methods based on the linear or integral intensities of particular diffraction peaks. The pattern-fitting procedure has the further advantage that the preferred orientation of the crystallites in solid dosage forms can also be determined in the course of the quantitative analysis.
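The core idea of recovering weight fractions from fitted scale factors can be sketched in a deliberately simplified, linear form. The paper's actual procedure also refines peak positions, widths and preferred orientation by nonlinear least squares; here the reference patterns are synthetic Gaussian peaks, not real diffraction data, and equal calibration constants for the two phases are assumed:

```python
import numpy as np

def weight_fractions(observed, pattern_a, pattern_b):
    """Fit scale factors for two pure-phase reference patterns and
    normalize them into weight fractions (equal-calibration assumption)."""
    basis = np.column_stack([pattern_a, pattern_b])
    scales, *_ = np.linalg.lstsq(basis, observed, rcond=None)
    return scales / scales.sum()

# Synthetic two-phase powder pattern: 30% phase A, 70% phase B.
two_theta = np.linspace(10, 60, 500)
pure_a = np.exp(-0.5 * ((two_theta - 25) / 0.3) ** 2)  # one peak at 25 deg
pure_b = np.exp(-0.5 * ((two_theta - 40) / 0.3) ** 2)  # one peak at 40 deg
mix = 0.3 * pure_a + 0.7 * pure_b

print(np.round(weight_fractions(mix, pure_a, pure_b), 2))  # [0.3 0.7]
```

Because the whole pattern is fitted rather than a single peak intensity, this kind of estimate averages over many data points, which is consistent with the better reproducibility the abstract reports.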
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hilary Wheeler; Crystal Densmore
2007-07-31
The diamine reagent 1,2-bis(2-aminophenylthio)ethane is no longer commercially available but is still required for the synthesis of the bismaleimide resin, APO-BMI, used in syntactic foams. In this work, we examined the hydrolysis of benzothiazole followed by the reaction with dichloroethane or dibromoethane. We also studied the deprotonation of 2-aminothiophenol followed by the reaction with dibromoethane. We optimized the latter route for scale-up by scrutinizing all aspects of the reaction conditions, work-up and recrystallization. On the bench scale, our optimized procedure consistently produced a 75-80% overall yield of finely divided, high-purity product (>95%).
NASA Astrophysics Data System (ADS)
Jouvel, S.; Kneib, J.-P.; Bernstein, G.; Ilbert, O.; Jelinsky, P.; Milliard, B.; Ealet, A.; Schimd, C.; Dahlen, T.; Arnouts, S.
2011-08-01
Context. With the discovery of the accelerated expansion of the universe, different observational probes have been proposed to investigate the presence of dark energy, including possible modifications to the laws of gravitation, by accurately measuring the expansion of the Universe and the growth of structures. We need to optimize the return from future dark energy surveys to obtain the best results from these probes. Aims: A high-precision weak-lensing analysis requires not only an accurate measurement of galaxy shapes but also a precise and unbiased measurement of galaxy redshifts. The survey strategy has to be defined following both the photometric redshift and the shape measurement accuracy. Methods: We define the key properties of the weak-lensing instrument and compute the effective PSF and the overall throughput and sensitivities. We then investigate the impact of the pixel scale on the sampling of the effective PSF and place upper limits on the pixel scale. We then define the survey strategy, computing the survey area while accounting in particular for both Galactic absorption and the variation of the Zodiacal light across the sky. Using the Le Phare photometric redshift code and a realistic galaxy mock catalog, we investigate the properties of different filter sets and the importance of the u-band photometry quality to optimize the photometric redshifts and the dark energy figure of merit (FoM). Results: Using the predicted photometric redshift quality, simple shape measurement requirements, and a proper sky model, we explore what an optimal weak-lensing dark energy mission could be, based on FoM calculation. We find that we can derive the most accurate photometric redshifts for the bulk of the faint galaxy population when filters have a resolution ℛ ~ 3.2. We show that an optimal mission would survey the sky through eight filters using two cameras (visible and near infrared). 
Assuming a five-year mission duration, a mirror size of 1.5 m and a 0.5 deg2 FOV with a visible pixel scale of 0.15'', we found that a homogeneous survey reaching a survey population of IAB = 25.6 (10σ) with a sky coverage of ~11 000 deg2 maximizes the weak lensing FoM. The effective number density of galaxies used for WL is then ~45 gal/arcmin2, which is at least a factor of two higher than ground-based surveys. Conclusions: This study demonstrates that a full account of the observational strategy is required to properly optimize the instrument parameters and maximize the FoM of the future weak-lensing space dark energy mission.
NASA Astrophysics Data System (ADS)
Nanus, L.; Geyer, G.; Gurdak, J. J.; Orencio, P. M.; Endo, A.; Taniguchi, M.
2014-12-01
The California Coastal Basin (CCB) aquifers are representative of many coastal aquifers that are vulnerable to nonpoint-source (NPS) contamination from intense agriculture and increased urbanization combined with historical groundwater use and overdraft conditions. Overdraft has led to seawater intrusion along parts of the central California coast, which negatively affects food production because of high salinity concentrations in groundwater used for irrigation. Recent drought conditions in California have led to an increased need to further understand freshwater sustainability and resilience within the water-energy-food (WEF) nexus. Assessing the vulnerability of groundwater to NPS contamination provides valuable information for optimal resource management and policy. Vulnerability models of nitrate contamination in the CCB were developed as one of many indicators to evaluate risk in terms of susceptibility of the physical environment at local and regional scales. Multivariate logistic regression models were developed to predict the probability of NPS nitrate contamination in recently recharged groundwater and to identify significant explanatory variables as controlling factors in the CCB. Different factors were found to be significant in the sub-regions of the CCB, and issues of scale are important. For example, land use is scale dependent because of the difference in land management practices between the CCB sub-regions. However, dissolved oxygen concentrations in groundwater, farm fertilizer, and soil thickness are scale invariant because they are significant both regionally and sub-regionally. Thus, the vulnerability models for the CCB show that some explanatory variables are scale dependent while others are scale invariant. 
This finding has important implications for accurately quantifying linkages between vulnerability and consequences within the WEF nexus, including inherent tradeoffs in water and food production in California and associated impacts on the local and regional economy, governance, environment, and society at multiple scales.
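The kind of multivariate logistic regression described above can be sketched as follows. The two predictors, their coefficients, and the data are invented stand-ins (loosely named after fertilizer loading and dissolved oxygen), not the study's variables or results:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=5000):
    """Fit a logistic regression by plain gradient ascent on the
    log-likelihood (a minimal sketch, not a production solver)."""
    Xb = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    beta = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ beta))    # predicted probabilities
        beta += lr * Xb.T @ (y - p) / len(y)    # average gradient step
    return beta

def predict_proba(beta, X):
    """Probability of contamination exceeding the threshold."""
    Xb = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ beta))

# Synthetic data: fertilizer raises contamination odds, oxygen lowers them.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                   # [fertilizer, oxygen], standardized
true_beta = np.array([-0.5, 2.0, -1.0])         # intercept, fertilizer, oxygen
p_true = 1 / (1 + np.exp(-(true_beta[0] + X @ true_beta[1:])))
y = (rng.uniform(size=500) < p_true).astype(float)

beta_hat = fit_logistic(X, y)
print(np.sign(beta_hat[1:]))  # recovered signs: [ 1. -1.]
```

The fitted coefficient signs identify which explanatory variables raise or lower the contamination probability, which is the sense in which such models flag "controlling factors."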
On unified modeling, theory, and method for solving multi-scale global optimization problems
NASA Astrophysics Data System (ADS)
Gao, David Yang
2016-10-01
A unified model is proposed for general optimization problems in multi-scale complex systems. Based on this model and necessary assumptions in physics, the canonical duality theory is presented in a precise way to include traditional duality theories and popular methods as special applications. Two conjectures on NP-hardness are proposed, which should play important roles for correctly understanding and efficiently solving challenging real-world problems. Applications are illustrated for both nonconvex continuous optimization and mixed integer nonlinear programming.
Compressive strain induced enhancement in thermoelectric-power-factor in monolayer MoS2 nanosheet
NASA Astrophysics Data System (ADS)
Dimple; Jena, Nityasagar; De Sarkar, Abir
2017-06-01
Strain and temperature induced tunability in the thermoelectric properties in monolayer MoS2 (ML-MoS2) has been demonstrated using density functional theory coupled to semi-classical Boltzmann transport theory. Compressive strain, in general and uniaxial compressive strain (along the zig-zag direction), in particular, is found to be most effective in enhancing the thermoelectric power factor, owing to the higher electronic mobility and its sensitivity to lattice compression along this direction. Variation in the Seebeck coefficient and electronic band gap with strain is found to follow the Goldsmid-Sharp relation. n-type doping is found to raise the relaxation time-scaled thermoelectric power factor higher than p-type doping and this divide widens with increasing temperature. The relaxation time-scaled thermoelectric power factor in optimally n-doped ML-MoS2 is found to undergo maximal enhancement under the application of 3% uniaxial compressive strain along the zig-zag direction, when both the (direct) electronic band gap and the Seebeck coefficient reach their maximum, while the electron mobility drops down drastically from 73.08 to 44.15 cm2 V-1 s-1. Such strain sensitive thermoelectric responses in ML-MoS2 could open doorways for a variety of applications in emerging areas in 2D-thermoelectrics, such as on-chip thermoelectric power generation and waste thermal energy harvesting.
Flight-Test Validation and Flying Qualities Evaluation of a Rotorcraft UAV Flight Control System
NASA Technical Reports Server (NTRS)
Mettler, Bernard; Tuschler, Mark B.; Kanade, Takeo
2000-01-01
This paper presents a process of design, flight-test validation and flying qualities evaluation of a flight control system for a rotorcraft-based unmanned aerial vehicle (RUAV). The keystone of this process is an accurate flight-dynamic model of the aircraft, derived by using system identification modeling. The model captures the most relevant dynamic features of our unmanned rotorcraft, and explicitly accounts for the presence of a stabilizer bar. Using the identified model, we were able to determine the performance margins of our original control system and identify limiting factors. The performance limitations were addressed and the attitude control system was optimized for three different performance levels: slow, medium, and fast. The optimized control laws will be implemented in our RUAV. We will first determine the validity of our control design approach by validating our optimized controllers in flight tests. Subsequently, we will fly a series of maneuvers with the three optimized controllers to determine the level of flying qualities that can be attained. The outcome enables us to draw important conclusions on the flying qualities requirements for small-scale RUAVs.
Technology-design-manufacturing co-optimization for advanced mobile SoCs
NASA Astrophysics Data System (ADS)
Yang, Da; Gan, Chock; Chidambaram, P. R.; Nallapadi, Giri; Zhu, John; Song, S. C.; Xu, Jeff; Yeap, Geoffrey
2014-03-01
How to maintain Moore's Law scaling beyond the 193 nm immersion lithography resolution limit is the key question the semiconductor industry needs to answer in the near future. Process complexity will undoubtedly increase for the 14 nm node and beyond, which brings both challenges and opportunities for technology development. A vertically integrated design-technology-manufacturing co-optimization flow is desired to better address the complicated issues that new process changes bring. In recent years, smart mobile wireless devices have been the fastest growing consumer electronics market. Advanced mobile devices such as smartphones are complex systems with the overriding objective of providing the best user-experience value by harnessing all the technology innovations. The most critical system drivers are better system performance and power efficiency, cost effectiveness, and smaller form factors, which, in turn, drive the need for system designs and solutions with More-than-Moore innovations. Mobile systems-on-chip (SoCs) have become the leading driver for semiconductor technology definition and manufacturing. Here we highlight how the co-optimization strategy influenced architecture, device/circuit, process technology and package, in the face of growing process cost/complexity and variability as well as design rule restrictions.
Development and preliminary validation of the physician support of skin self-examination scale.
Coroiu, Adina; Moran, Chelsea; Garland, Rosalind; Körner, Annett
2018-05-01
Skin self-examination (SSE) is a crucial preventive health behaviour in melanoma survivors, as it facilitates early detection. Physician endorsement of SSE is important for the initiation and maintenance of this behaviour. This study focussed on the preliminary validation of a new nine-item measure assessing physician support of SSE in melanoma patients. English and French versions of this measure were administered to 188 patients diagnosed with melanoma in the context of a longitudinal study investigating predictors and facilitators of SSE. Structural validity was investigated using exploratory factor analysis conducted in Mplus, and convergent and divergent validity was assessed using bivariate correlations conducted in SPSS. Results suggest that the scale is a unidimensional and reliable measure of physician support for SSE. Given the uncertainty regarding the optimal frequency of SSE for at-risk individuals, we recommend that future psychometric evaluations of this scale consider tailoring items according to the most up-to-date research on SSE effectiveness.
Kim, Bongkyu; An, Junyeong; Fapyane, Deby; Chang, In Seop
2015-11-01
The current trend of bio-electrochemical systems is to improve strategies related to their applicability and potential for scaling-up. To date, the literature has suggested strategies, but the proposal of correlations between each research field remains insufficient. This review paper provides such a correlation based on platform techniques, referred to as bio-electronics platforms (BEPs). These BEPs consist of three platforms divided by scope scale: nano-, micro-, and macro-BEPs. In the nano-BEP, several types of electron transfer mechanisms used by electrochemically active bacteria are discussed. In the micro-BEP, factors affecting the formation of conductive biofilms and the transport of electrons in the conductive biofilm are investigated. In the macro-BEP, electrodes and separators in the bio-anode are discussed in terms of real applications, and a scale-up strategy is presented. Overall, the challenges of each BEP are highlighted, and potential solutions are suggested. In addition, future research directions are provided and research ideas proposed to develop research interest. Copyright © 2015 Elsevier Ltd. All rights reserved.
Rasch-modeling the Portuguese SOCRATES in a clinical sample.
Lopes, Paulo; Prieto, Gerardo; Delgado, Ana R; Gamito, Pedro; Trigo, Hélder
2010-06-01
The Stages of Change Readiness and Treatment Eagerness Scale (SOCRATES) assesses motivation for treatment in the drug-dependent population. The development of adequate measures of motivation is needed in order to properly understand the role of this construct in rehabilitation. This study probed the psychometric properties of the SOCRATES in the Portuguese population by means of the Rasch Rating Scale Model, which allows the conjoint measurement of items and persons. The participants were 166 substance abusers under treatment for their addiction. Results show that the functioning of the five response categories is not optimal; our re-analysis indicates that a three-category system is the most appropriate one. By using this response category system, both model fit and estimation accuracy are improved. The discussion takes into account other factors such as item format and content in order to make suggestions for the development of better motivation-for-treatment scales. (PsycINFO Database Record (c) 2010 APA, all rights reserved).
Workflow management in large distributed systems
NASA Astrophysics Data System (ADS)
Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.
2011-12-01
The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near real time. All the monitoring information gathered for these subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services, including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resources to running jobs, and automated management of remote services among a large set of grid facilities.
Optimal Scaling of Interaction Effects in Generalized Linear Models
ERIC Educational Resources Information Center
van Rosmalen, Joost; Koning, Alex J.; Groenen, Patrick J. F.
2009-01-01
Multiplicative interaction models, such as Goodman's (1981) RC(M) association models, can be a useful tool for analyzing the content of interaction effects. However, most models for interaction effects are suitable only for data sets with two or three predictor variables. Here, we discuss an optimal scaling model for analyzing the content of…
Academic Optimism: An Individual Teacher Belief
ERIC Educational Resources Information Center
Ngidi, David P.
2012-01-01
In this study, academic optimism as an individual teacher belief was investigated. Teachers' self-efficacy beliefs were measured using the short form of the Teacher Sense of Efficacy Scale. One subtest from the Omnibus T-Scale, the faculty trust in clients subtest, was used to measure teachers' trust in students and parents. One subtest from the…
Coronado, Rogelio A; Simon, Corey B; Lentz, Trevor A; Gay, Charles W; Mackie, Lauren N; George, Steven Z
2017-01-01
Study Design Secondary analysis of prospectively collected data. Background An abundance of evidence has highlighted the influence of pain catastrophizing and fear avoidance on clinical outcomes. Less is known about the interaction of positive psychological resources with these pain-associated distress factors. Objective To assess whether optimism moderates the influence of pain catastrophizing and fear avoidance on 3-month clinical outcomes in patients with shoulder pain. Methods Data from 63 individuals with shoulder pain (mean ± SD age, 38.8 ± 14.9 years; 30 female) were examined. Demographic, psychological, and clinical characteristics were obtained at baseline. Validated measures were used to assess optimism (Life Orientation Test-Revised), pain catastrophizing (Pain Catastrophizing Scale), fear avoidance (Fear-Avoidance Beliefs Questionnaire physical activity subscale), shoulder pain intensity (Brief Pain Inventory), and shoulder function (Pennsylvania Shoulder Score function subscale). Shoulder pain and function were reassessed at 3 months. Regression models assessed the influence of (1) pain catastrophizing and optimism and (2) fear avoidance and optimism. The final multivariable models controlled for factors of age, sex, education, and baseline scores, and included 3-month pain intensity and function as separate dependent variables. Results Shoulder pain (mean difference, -1.6; 95% confidence interval [CI]: -2.1, -1.2) and function (mean difference, 2.4; 95% CI: 0.3, 4.4) improved over 3 months. In multivariable analyses, there was an interaction between pain catastrophizing and optimism (β = 0.19; 95% CI: 0.02, 0.35) for predicting 3-month shoulder function (F = 16.8, R² = 0.69, P<.001), but not pain (P = .213). Further examination of the interaction with the Johnson-Neyman technique showed that higher levels of optimism lessened the influence of pain catastrophizing on function. 
There was no evidence of significant moderation of fear-avoidance beliefs for 3-month shoulder pain (P = .090) or function (P = .092). Conclusion Optimism decreased the negative influence of pain catastrophizing on shoulder function, but not pain intensity. Optimism did not alter the influence of fear-avoidance beliefs on these outcomes. Level of Evidence Prognosis, level 2b. J Orthop Sports Phys Ther 2017;47(1):21-30. Epub 5 Nov 2016. doi:10.2519/jospt.2017.7068.
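The moderation analysis described above can be sketched with an interaction-term regression on synthetic data. The coefficients, variable construction, and simple-slope probing below illustrate the Johnson-Neyman idea in spirit only; they are not the study's actual estimates or data:

```python
import numpy as np

# Synthetic data built so that higher optimism weakens the negative
# effect of catastrophizing on the outcome (illustrative values only).
rng = np.random.default_rng(1)
n = 200
catastrophizing = rng.normal(size=n)
optimism = rng.normal(size=n)
outcome = (-0.6 * catastrophizing + 0.3 * optimism
           + 0.2 * catastrophizing * optimism
           + rng.normal(scale=0.1, size=n))

# Regress the outcome on both predictors and their product term.
X = np.column_stack([np.ones(n), catastrophizing, optimism,
                     catastrophizing * optimism])
b, *_ = np.linalg.lstsq(X, outcome, rcond=None)

def simple_slope(optimism_level):
    """Effect of catastrophizing on the outcome at a given optimism level."""
    return b[1] + b[3] * optimism_level

# The negative catastrophizing effect is weaker at high optimism.
print(abs(simple_slope(+1.0)) < abs(simple_slope(-1.0)))  # True
```

A positive interaction coefficient (b[3]) shrinks the magnitude of the catastrophizing slope as optimism rises, which is the pattern the study reports for shoulder function.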
Maximum entropy production allows a simple representation of heterogeneity in semiarid ecosystems.
Schymanski, Stanislaus J; Kleidon, Axel; Stieglitz, Marc; Narula, Jatin
2010-05-12
Feedbacks between water use, biomass and infiltration capacity in semiarid ecosystems have been shown to lead to the spontaneous formation of vegetation patterns in a simple model. The formation of patterns permits the maintenance of larger overall biomass at low rainfall rates compared with homogeneous vegetation. This results in a bias in models run at larger scales that neglect subgrid-scale variability. In the present study, we investigate whether subgrid-scale heterogeneity can be parameterized as the outcome of optimal partitioning between bare soil and vegetated area. We find that a two-box model reproduces the time-averaged biomass of the patterns emerging in a 100 × 100 grid model if the vegetated fraction is optimized for maximum entropy production (MEP). This suggests that the proposed optimality-based representation of subgrid-scale heterogeneity may be generally applicable to different systems and at different scales. The implications for our understanding of self-organized behaviour and its modelling are discussed.
Self-esteem and optimism in rural youth: gender differences.
Puskar, Kathryn R; Bernardo, Lisa Marie; Ren, Dianxu; Haley, Tammy M; Tark, Kirsti Hetager; Switala, Joann; Siemon, Linda
2010-01-01
To identify and describe gender-related differences in the self-esteem and optimism levels of rural adolescents. Self-esteem and optimism have been broadly examined and are associated with health practices, social interaction, attachment, resiliency, and personal identity. Information describing the relationship of self-esteem and optimism as it relates to gender is limited. Using a cross-sectional survey design, students (N = 193) from three high schools in rural Pennsylvania, USA completed the Rosenberg Self-Esteem Scale and the Optimism Scale-Life Orientation Test-Revised as part of a National Institutes of Health, National Institute of Nursing Research funded study. Both instruments' mean scores were in the average range for this population, with females scoring lower than males in both self-esteem (p < 0.0001) and optimism (p < 0.0001). The results of this study have nursing implications for evidence-based interventions that target self-esteem and optimism. Attention to self-esteem and optimism in female youth is recommended.
Effect of perceived social support and dispositional optimism on the depression of burn patients.
He, Fei; Zhou, Qin; Zhao, Zhijing; Zhang, Yuan; Guan, Hao
2016-06-01
Burn wounds have a significant impact on the mental health of patients. This study aimed to investigate the impact of perceived social support and dispositional optimism on depression in burn patients. A total of 246 burn patients completed the Multidimensional Scale of Perceived Social Support, the Revised Life Orientation Test, and a depression scale. The results revealed that both perceived social support and optimism were significantly correlated with depression. Structural equation modeling indicated that optimism partially mediated the relationship between perceived social support and depression. Implications for the prevention of depression in burn patients are discussed. © The Author(s) 2014.
Happell, Brenda; Koehn, Stefan
2011-06-01
This paper is a report of a study of nurses' attitudes to the use of seclusion. More specifically, the aim was to address the relationship between burnout, job satisfaction and therapeutic optimism and justification of the use of seclusion. Research findings demonstrate that nurses continue to view seclusion as a necessary intervention, but the factors that might be associated with these attitudes have not been examined. Questionnaires were distributed to nurses employed in inpatient units across eight mental health services in Queensland in 2008. The Heyman Attitudes to Seclusion Survey, Elsom Therapeutic Optimism Scale, Maslach Burnout Inventory and Minnesota Satisfaction Questionnaire were completed (N = 123). Data analysis involved descriptive statistics and Pearson product-moment correlation coefficients. Most participants considered certain behaviours, particularly those involving harm to self, others, or property, as appropriate reasons for the use of seclusion, and these views were consistent with their perceptions of likely practice on their unit. An association was found between therapeutic optimism, emotional exhaustion (burnout) and justifications for the use of seclusion. Participants with higher optimism scores and lower scores for emotional exhaustion were significantly less likely to support the use of seclusion in specific situations. The relationship between therapeutic optimism and emotional exhaustion gives new information that might influence strategies and approaches taken with the aim of reducing seclusion use. Further research is warranted to explore these relationships and their implications. © 2011 Blackwell Publishing Ltd.
Predictive modelling of flow in a two-dimensional intermediate-scale, heterogeneous porous media
Barth, Gilbert R.; Hill, M.C.; Illangasekare, T.H.; Rajaram, H.
2000-01-01
To better understand the role of sedimentary structures in flow through porous media, and to determine how small-scale laboratory-measured values of hydraulic conductivity relate to in situ values, this work deterministically examines flow through simple, artificial structures constructed for a series of intermediate-scale (10 m long), two-dimensional, heterogeneous laboratory experiments. Nonlinear regression was used to determine optimal values of in situ hydraulic conductivity, which were compared to laboratory-measured values. Despite explicit numerical representation of the heterogeneity, the optimized values were generally greater than the laboratory-measured values. Discrepancies between measured and optimal values varied depending on the sand sieve size, but their contribution to error in the predicted flow was fairly consistent for all sands. Results indicate that, even under these controlled circumstances, laboratory-measured values of hydraulic conductivity need to be applied to models cautiously.
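The calibration idea, estimating an effective in situ hydraulic conductivity by regression against flow observations, can be sketched with a toy one-parameter Darcy-law fit. The data are synthetic and the single-parameter model is far simpler than the study's 2D heterogeneous flow model:

```python
import numpy as np

# Darcy's law: specific discharge q = K * i, with hydraulic gradient i.
# Estimate K by least squares from noisy synthetic flow observations.
rng = np.random.default_rng(2)
gradient = np.linspace(0.01, 0.1, 20)   # hydraulic gradient i (dimensionless)
k_true = 1.2e-4                         # assumed "in situ" conductivity, m/s
discharge = k_true * gradient + rng.normal(scale=1e-7, size=gradient.size)

# One-parameter least-squares estimate: K = sum(q*i) / sum(i*i).
k_fit = float(gradient @ discharge / (gradient @ gradient))
print(abs(k_fit - k_true) / k_true < 0.05)  # True
```

In the study, the analogous regression estimate (with an explicit heterogeneous model) came out systematically higher than the laboratory-measured conductivities, which is the central cautionary finding.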
NASA Astrophysics Data System (ADS)
Manzoni, S.; Capek, P.; Mooshammer, M.; Lindahl, B.; Richter, A.; Santruckova, H.
2016-12-01
Litter and soil organic matter decomposers feed on substrates with much wider C:N and C:P ratios than their own cellular composition, raising the question of how they can adapt their metabolism to such a chronic stoichiometric imbalance. Here we propose an optimality framework to address this question, based on the hypothesis that carbon-use efficiency (CUE) is optimally adjusted to maximize the decomposer growth rate. When nutrients are abundant, increasing CUE improves the decomposer growth rate, at the expense of a higher nutrient demand. However, when nutrients are scarce, the increased nutrient demand driven by a high CUE can trigger nutrient limitation and inhibit growth. An intermediate, 'optimal' CUE ensures balanced growth at the verge of nutrient limitation. We derive a simple analytical equation that links this optimal CUE to organic substrate and decomposer biomass C:N and C:P ratios, and to the rate of inorganic nutrient supply (e.g., fertilization). This equation allows formulating two specific hypotheses: i) decomposer CUE should decrease with widening organic substrate C:N and C:P ratios, with a scaling exponent between 0 (abundant inorganic nutrients) and -1 (scarce inorganic nutrients), and ii) CUE should increase with increasing inorganic nutrient supply for a given organic substrate stoichiometry. These hypotheses are tested using a new database encompassing nearly 2000 estimates of CUE from about 160 studies, spanning aquatic and terrestrial decomposers of litter and more stabilized organic matter. The theoretical predictions are largely confirmed by our data analysis, except for the lack of fertilization effects on terrestrial decomposer CUE. While stoichiometric drivers constrain the general trends in CUE, the relatively large variability in CUE estimates suggests that other factors could be at play as well. 
For example, temperature is often cited as a potential driver of CUE, but we only found limited evidence of temperature effects, although in some subsets of data, temperature and substrate stoichiometry appeared to interact. Based on our results, the optimality principle can provide a solid (but still incomplete) framework to develop CUE models for large-scale applications.
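The optimality argument can be made concrete with a minimal Liebig-minimum sketch. This is a schematic under stated assumptions, not the paper's exact analytical equation: growth is taken as the minimum of a C-limited rate, CUE·Uc, and an N-limited rate, (Uc/CNs + Nin)·CNb, where Uc is C uptake, CNs and CNb are substrate and biomass C:N ratios, Nin is inorganic N supply, and the physiological cap cue_max is an invented parameter:

```python
# Growth-maximizing CUE sits where the C-limited and N-limited rates balance:
# CUE* = CNb/CNs + CNb*Nin/Uc, capped at a physiological maximum.
def optimal_cue(cn_substrate, cn_biomass, n_supply, c_uptake, cue_max=0.6):
    cue_balance = cn_biomass / cn_substrate + cn_biomass * n_supply / c_uptake
    return min(cue_balance, cue_max)

# Hypothesis i): with scarce inorganic N, optimal CUE falls as substrate
# C:N widens (here exactly as CNs**-1, i.e. scaling exponent -1)...
low_n = [optimal_cue(cn, 8.0, 0.0, 1.0) for cn in (20.0, 40.0, 80.0)]
# ...while with abundant inorganic N it is insensitive to substrate C:N
# (scaling exponent ~0) and pinned at the cap; this also illustrates
# hypothesis ii), since CUE rises with the inorganic supply term.
high_n = [optimal_cue(cn, 8.0, 1.0, 1.0) for cn in (20.0, 40.0, 80.0)]

print(low_n)   # [0.4, 0.2, 0.1]
print(high_n)  # [0.6, 0.6, 0.6]
```

The two limiting exponents of this toy model (-1 when inorganic nutrients are scarce, 0 when they are abundant) bracket the range stated in the abstract's first hypothesis.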
Probabilistic Analysis and Design of a Raked Wing Tip for a Commercial Transport
NASA Technical Reports Server (NTRS)
Mason, Brian H.; Chen, Tzi-Kang; Padula, Sharon L.; Ransom, Jonathan B.; Stroud, W. Jefferson
2008-01-01
An approach for conducting reliability-based design optimization (RBDO) of a Boeing 767 raked wing tip (RWT) is presented. The goal is to evaluate the benefits of RBDO for design of an aircraft substructure. A finite-element (FE) model that includes eight critical static load cases is used to evaluate the response of the wing tip. Thirteen design variables that describe the thickness of the composite skins and stiffeners are selected to minimize the weight of the wing tip. A strain-based margin of safety is used to evaluate the performance of the structure. The randomness in the load scale factor and in the strain limits is considered. Of the 13 variables, the wing-tip design was controlled primarily by the thickness of the thickest plies in the upper skins. The report includes an analysis of the optimization results and recommendations for future reliability-based studies.
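The reliability calculation at the core of RBDO can be illustrated with a Monte Carlo sketch. Everything below (the distributions, the numbers, the inverse thickness-strain relation) is a hypothetical stand-in for the study's finite-element margin-of-safety evaluation:

```python
import numpy as np

rng = np.random.default_rng(0)

def failure_probability(thickness, n_samples=100_000):
    """Monte Carlo estimate of P(margin < 0) for one toy load case.

    Assumed model: strain scales inversely with skin thickness; the load
    scale factor and the strain limit are random, as in the abstract.
    Values are illustrative, not from the Boeing 767 study.
    """
    load_factor = rng.normal(1.0, 0.05, n_samples)        # random load scaling
    strain_limit = rng.normal(5000e-6, 200e-6, n_samples)  # random allowable
    nominal_strain = 4000e-6 / thickness                   # thicker skin -> less strain
    margin = strain_limit - load_factor * nominal_strain
    return np.mean(margin < 0.0)

# Weight grows with thickness; pick the lightest design meeting P_f <= 1e-3.
best_t = None
for t in np.arange(0.8, 1.6, 0.1):
    if failure_probability(t) <= 1e-3:
        best_t = t
        break
```

In a full RBDO loop, an optimizer would search all 13 thickness variables jointly instead of this one-dimensional sweep.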
NASA Astrophysics Data System (ADS)
Goldston, Robert; Brooks, Jeffrey; Hubbard, Amanda; Leonard, Anthony; Lipschultz, Bruce; Maingi, Rajesh; Ulrickson, Michael; Whyte, Dennis
2009-11-01
The plasma facing components in a Demo reactor will face much more extreme boundary plasma conditions and operating requirements than any present or planned experiment. These include 1) Power density a factor of four or more greater than in ITER, 2) Continuous operation resulting in annual energy and particle throughput 100-200 times larger than ITER, 3) Elevated surface operating temperature for efficient electricity production, 4) Tritium fuel cycle control for safety and breeding requirements, and 5) Steady state plasma confinement and control. Consistent with ReNeW Thrust 12, design options are being explored for a new moderate-scale facility to assess core-edge interaction issues and solutions. Key desired features include high power density, sufficient pulse length and duty cycle, elevated wall temperature, steady-state control of an optimized core plasma, and flexibility in changing boundary components as well as access for comprehensive measurements.
A model for optimizing file access patterns using spatio-temporal parallelism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boonthanome, Nouanesengsy; Patchett, John; Geveci, Berk
2013-01-01
For many years now, I/O read time has been recognized as the primary bottleneck for parallel visualization and analysis of large-scale data. In this paper, we introduce a model that can estimate the read time for a file stored in a parallel filesystem when given the file access pattern. Read times ultimately depend on how the file is stored and the access pattern used to read the file. The file access pattern will be dictated by the type of parallel decomposition used. We employ spatio-temporal parallelism, which combines both spatial and temporal parallelism, to provide greater flexibility to possible file access patterns. Using our model, we were able to configure the spatio-temporal parallelism to design optimized read access patterns that resulted in a speedup factor of approximately 400 over traditional file access patterns.
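A read-time model of this general shape can be sketched as follows; the cost structure and the parameter values are assumptions for illustration, not the calibrated model of the paper:

```python
def read_time(n_requests, total_bytes, n_readers, n_servers,
              latency=0.01, bandwidth=500e6):
    """Toy parallel-filesystem read-time model (illustrative assumptions).

    Each reader pays a per-request latency for its share of the requests;
    aggregate transfer bandwidth is limited by the smaller of the reader
    count and the number of storage servers.
    """
    effective_bw = bandwidth * min(n_readers, n_servers)
    seek_time = latency * n_requests / n_readers
    transfer_time = total_bytes / effective_bw
    return seek_time + transfer_time
```

Even this crude model captures the trade-off a decomposition must balance: fewer, larger contiguous requests cut latency cost, while more readers (up to the server count) raise aggregate bandwidth.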
A Fast Gradient Method for Nonnegative Sparse Regression With Self-Dictionary
NASA Astrophysics Data System (ADS)
Gillis, Nicolas; Luce, Robert
2018-01-01
A nonnegative matrix factorization (NMF) can be computed efficiently under the separability assumption, which asserts that all the columns of the given input data matrix belong to the cone generated by a (small) subset of them. The provably most robust methods to identify these conic basis columns are based on nonnegative sparse regression and self dictionaries, and require the solution of large-scale convex optimization problems. In this paper we study a particular nonnegative sparse regression model with self dictionary. As opposed to previously proposed models, this model yields a smooth optimization problem where the sparsity is enforced through linear constraints. We show that the Euclidean projection on the polyhedron defined by these constraints can be computed efficiently, and propose a fast gradient method to solve our model. We compare our algorithm with several state-of-the-art methods on synthetic data sets and real-world hyperspectral images.
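Under separability, the conic basis columns can also be recovered greedily; the successive projection algorithm (SPA) below is a standard baseline for this identification step, not the nonnegative sparse-regression fast-gradient method proposed in the paper:

```python
import numpy as np

def spa(X, r):
    """Successive projection algorithm: greedily pick r columns of X whose
    cone (approximately) contains all columns, assuming separability."""
    R = X.astype(float).copy()
    cols = []
    for _ in range(r):
        j = int(np.argmax((R * R).sum(axis=0)))  # column with largest residual norm
        cols.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(u, u @ R)                  # project out its direction
    return cols

# Columns 0 and 1 generate the cone; the rest are convex combinations.
W = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
H = np.array([[1.0, 0.0, 0.5, 0.3],
              [0.0, 1.0, 0.5, 0.7]])
X = W @ H
```

On this tiny example SPA recovers the two generating columns; self-dictionary models like the paper's trade SPA's speed for robustness to noise and outliers.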
Soil organic carbon across scales.
O'Rourke, Sharon M; Angers, Denis A; Holden, Nicholas M; McBratney, Alex B
2015-10-01
Mechanistic understanding of scale effects is important for interpreting the processes that control the global carbon cycle. Greater attention should be given to scale in soil organic carbon (SOC) science so that we can devise better policy to protect/enhance existing SOC stocks and ensure sustainable use of soils. Global issues such as climate change require consideration of SOC stock changes at the global and biosphere scale, but human interaction occurs at the landscape scale, with consequences at the pedon, aggregate and particle scales. This review evaluates our understanding of SOC across all these scales in the context of the processes involved in SOC cycling at each scale and with emphasis on stabilizing SOC. Current synergy between science and policy is explored at each scale to determine how well each is represented in the management of SOC. An outline of how SOC might be integrated into a framework of soil security is examined. We conclude that SOC processes at the biosphere to biome scales are not well understood. Instead, SOC has come to be viewed as a large-scale pool subject to carbon flux. Better understanding exists for SOC processes operating at the scales of the pedon, aggregate and particle. At the landscape scale, the influence of large- and small-scale processes has the greatest interaction and is exposed to the greatest modification through agricultural management. Policy implemented at regional or national scale tends to focus at the landscape scale without due consideration of the larger scale factors controlling SOC or the impacts of policy for SOC at the smaller SOC scales. What is required is a framework that can be integrated across a continuum of scales to optimize SOC management. © 2015 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Neophytou, Neophytos
2015-04-01
Silicon-based low-dimensional materials have received significant attention as new-generation thermoelectric materials after demonstrating record low thermal conductivities. Very few works to date, however, report significant advances with regard to the power factor. In this review we examine possibilities of power factor enhancement in: (i) low-dimensional Si channels and (ii) nanocrystalline Si materials. For low-dimensional channels we use atomistic simulations and consider ultra-narrow Si nanowires and ultra-thin Si layers of feature sizes below 15 nm. Room temperature is exclusively considered. We show that, in general, low dimensionality does not offer possibilities for power factor improvement, because although the Seebeck coefficient could slightly increase, the conductivity inevitably degrades to a much greater extent. The power factor in these channels, however, can be optimized by proper choice of geometrical parameters such as the transport orientation, confinement orientation, and confinement length scale. Our simulations show that in the case where room-temperature thermal conductivities as low as κ_l = 2 W/mK are achieved, the ZT figure of merit of an optimized Si low-dimensional channel could reach values around unity. For the second case of materials, we show that by making effective use of energy filtering, and by taking advantage of the inhomogeneity within the nanocrystalline geometry, the underlying potential profile, and the dopant distribution, large improvements in the thermoelectric power factor can be achieved. The paper is intended to be a review of the main findings with regard to the thermoelectric performance of nanoscale Si through our simulation work as well as through recent experimental observations.
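The quantities discussed above are related by the standard definitions: power factor PF = S²σ and figure of merit ZT = S²σT/(κ_l + κ_e). A minimal sketch (the sample numbers are illustrative assumptions, not values from the review):

```python
def figure_of_merit(seebeck, conductivity, kappa_lattice, kappa_electronic, T=300.0):
    """ZT = S^2 * sigma * T / (kappa_l + kappa_e).
    Units: seebeck in V/K, conductivity in S/m, kappa in W/mK, T in K."""
    power_factor = seebeck**2 * conductivity  # W/mK^2
    return power_factor * T / (kappa_lattice + kappa_electronic)

# Illustrative room-temperature numbers: S = 250 uV/K, sigma = 5e4 S/m,
# kappa_l = 2 W/mK as in the low-kappa case quoted above, kappa_e = 0.5 W/mK.
zt = figure_of_merit(250e-6, 5e4, 2.0, 0.5)
```

The formula makes the review's point explicit: once κ_l is pinned near its floor, further ZT gains must come from the S²σ numerator.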
Hoofwijk, Daisy M N; Fiddelers, Audrey A A; Peters, Madelon L; Stessel, Björn; Kessels, Alfons G H; Joosten, Elbert A; Gramke, Hans-Fritz; Marcus, Marco A E
2015-12-01
To prospectively describe the prevalence and predictive factors of chronic postsurgical pain (CPSP) and poor global recovery in a large outpatient population at a university hospital, 1 year after outpatient surgery. A prospective longitudinal cohort study was performed. During 18 months, patients presenting for preoperative assessment were invited to participate. Outcome parameters were measured by using questionnaires at 3 timepoints: 1 week preoperatively, 4 days postoperatively, and 1 year postoperatively. A value of >3 on an 11-point numeric rating scale was considered to indicate moderate to severe pain. A score of ≤80% on the Global Surgical Recovery Index was defined as poor global recovery. A total of 908 patients were included. The prevalence of moderate to severe preoperative pain was 37.7%, acute postsurgical pain 26.7%, and CPSP 15.3%. Risk factors for the development of CPSP were surgical specialty, preoperative pain, preoperative analgesic use, acute postoperative pain, surgical fear, lack of optimism, and poor preoperative quality of life. The prevalence of poor global recovery was 22.3%. Risk factors for poor global recovery were recurrent surgery because of the same pathology, preoperative pain, preoperative analgesic use, surgical fear, lack of optimism, poor preoperative and acute postoperative quality of life, and follow-up surgery during the first postoperative year. Moderate to severe CPSP after outpatient surgery is common, and should not be underestimated. Patients at risk for developing CPSP can be identified during the preoperative phase.
Gradient design for liquid chromatography using multi-scale optimization.
López-Ureña, S; Torres-Lapasió, J R; Donat, R; García-Alvarez-Coque, M C
2018-01-26
In reversed phase-liquid chromatography, the usual solution to the "general elution problem" is the application of gradient elution with programmed changes of organic solvent (or other properties). A correct quantification of chromatographic peaks in liquid chromatography requires well resolved signals in a proper analysis time. When the complexity of the sample is high, the gradient program should be accommodated to the local resolution needs of each analyte. This makes the optimization of such situations rather troublesome, since enhancing the resolution for a given analyte may imply a collateral worsening of the resolution of other analytes. The aim of this work is to design multi-linear gradients that maximize the resolution, while fulfilling some restrictions: all peaks should be eluted before a given maximal time, the gradient should be flat or increasing, and sudden changes close to eluting peaks are penalized. Consequently, an equilibrated baseline resolution for all compounds is sought. This goal is achieved by splitting the optimization problem in a multi-scale framework. At each scale κ, an optimization problem is solved with N_κ ≈ 2^κ variables that are used to build the gradients. The N_κ variables define cubic splines written in terms of a B-spline basis. This allows expressing gradients as polygonals of M points approximating the splines. The cubic splines are built using subdivision schemes, a technique for the fast generation of smooth curves, compatible with the multi-scale framework. Owing to the nature of the problem and the presence of multiple local maxima, the algorithm used in the optimization problem of each scale κ should be "global", such as the pattern-search algorithm. The multi-scale optimization approach is successfully applied to find the best multi-linear gradient for resolving a mixture of amino acid derivatives. Copyright © 2017 Elsevier B.V. All rights reserved.
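The coarse-to-fine construction can be sketched with piecewise-linear profiles and midpoint subdivision; this is a simplification of the cubic B-spline/subdivision machinery in the paper, and the function names are illustrative:

```python
import numpy as np

def gradient_profile(control_points, t_final, n=101):
    """Evaluate a multi-linear solvent gradient from control-point fractions.

    control_points: organic-solvent fractions at equally spaced times; the
    profile is their piecewise-linear interpolation (a stand-in for the
    polygonal approximation of the spline described in the abstract).
    """
    times = np.linspace(0.0, t_final, len(control_points))
    t = np.linspace(0.0, t_final, n)
    return t, np.interp(t, times, control_points)

def refine(control_points):
    """Move to the next scale: midpoint subdivision doubles the number of
    variables (N_k ~ 2^k) while preserving the coarse-scale shape."""
    cp = np.asarray(control_points, float)
    out = np.empty(2 * len(cp) - 1)
    out[0::2] = cp
    out[1::2] = 0.5 * (cp[:-1] + cp[1:])  # insert midpoints
    return out
```

A global optimizer (e.g. pattern search) would tune the coarse control points first, then `refine` and re-optimize, so expensive fine-scale searches start from a good coarse solution.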
Optimizing weather radar observations using an adaptive multiquadric surface fitting algorithm
NASA Astrophysics Data System (ADS)
Martens, Brecht; Cabus, Pieter; De Jongh, Inge; Verhoest, Niko
2013-04-01
Real time forecasting of river flow is an essential tool in operational water management. Such real time modelling systems require well calibrated models which can make use of spatially distributed rainfall observations. Weather radars provide spatial data; however, since radar measurements are sensitive to a large range of error sources, a discrepancy can often be observed between radar observations and ground-based measurements, which are mostly considered as ground truth. Through merging ground observations with the radar product, often referred to as data merging, one may force the radar observations to better correspond to the ground-based measurements, without losing the spatial information. In this paper, radar images and ground-based measurements of rainfall are merged based on interpolated gauge-adjustment factors (Moore et al., 1998; Cole and Moore, 2008), or scaling factors. Scaling factors C(x_α) are calculated at each position x_α where a gauge measurement I_g(x_α) is available:

C(x_α) = (I_g(x_α) + ε) / (I_r(x_α) + ε)    (1)

where I_r(x_α) is the radar-based observation in the pixel overlapping the rain gauge and ε is a constant ensuring that the scaling factor can be calculated when I_r(x_α) is zero. These scaling factors are interpolated on the radar grid, resulting in a unique scaling factor for each pixel. Multiquadric surface fitting is used as the interpolation algorithm (Hardy, 1971):

C*(x_0) = a^T v + a_0    (2)

where C*(x_0) is the prediction at location x_0, the vector a (N×1, with N the number of ground-based measurements used) and the constant a_0 are the parameters describing the surface, and v is an N×1 vector containing the (Euclidean) distance between each point x_α used in the interpolation and the point x_0. The parameters describing the surface are derived by forcing the surface to be an exact interpolator and imposing that the sum of the parameters in a be zero. Often, however, the surface is allowed to pass near the observations (i.e. the observed scaling factors C(x_α)) at a distance a_α K by introducing an offset parameter K, which results in slightly different equations for a and a_0. The described technique is currently being used by the Flemish Environmental Agency in an online forecasting system of river discharges within Flanders (Belgium). However, rescaling the radar data using the described algorithm does not always yield an improved weather radar product. One likely reason is that the parameters K and ε are implemented as constants; it can be expected that, among other things, different parameter values should be used depending on the characteristics of the rainfall. Adaptation of the parameter values is achieved by an online calibration of K and ε at each time step (every 15 minutes), using validated rain gauge measurements as ground truth. Results demonstrate that rescaling radar images using optimized values for K and ε at each time step leads to a significant improvement of the rainfall estimation, which in turn will result in higher quality discharge predictions. Moreover, it is shown that calibrated values for K and ε can be obtained in near-real time. References: Cole, S. J., and Moore, R. J. (2008). Hydrological modelling using raingauge- and radar-based estimators of areal rainfall. Journal of Hydrology, 358(3-4), 159-181. Hardy, R. L. (1971). Multiquadric equations of topography and other irregular surfaces. Journal of Geophysical Research, 76(8), 1905-1915. Moore, R. J., Watson, B. C., Jones, D. A. and Black, K. B. (1989). London weather radar local calibration study. Technical report, Institute of Hydrology.
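The gauge-adjustment scheme sketched in the abstract above fits in a few lines. The treatment of K below (added to the diagonal so the surface passes near, rather than through, the observations) is one plausible reading of the offset parameter, not the agency's exact implementation:

```python
import numpy as np

def scaling_factor(gauge, radar, eps=0.1):
    """C = (I_g + eps) / (I_r + eps); eps avoids division by zero."""
    return (gauge + eps) / (radar + eps)

def multiquadric_fit(points, values, K=0.0):
    """Fit C*(x) = a^T v(x) + a0 with v_i(x) = |x - x_i| and sum(a) = 0.
    K = 0 forces exact interpolation; K > 0 lets the surface pass near
    the observations (assumed smoothing role of the offset parameter)."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = dist + K * np.eye(n)
    A[:n, n] = 1.0          # constant term a0
    A[n, :n] = 1.0          # constraint sum(a) = 0
    sol = np.linalg.solve(A, np.append(values, 0.0))
    a, a0 = sol[:n], sol[n]

    def predict(x):
        v = np.linalg.norm(np.asarray(x, float) - points, axis=-1)
        return float(a @ v + a0)
    return predict

# Four hypothetical gauges on a 10 km square, with radar estimates of 1.0 each.
gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
C = scaling_factor(np.array([1.2, 0.8, 1.0, 1.5]), np.array([1.0, 1.0, 1.0, 1.0]))
surface = multiquadric_fit(gauges, C)
```

Evaluating `surface` on every radar pixel then yields the per-pixel scaling field described in the abstract.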
Optimal Energy Management for Microgrids
NASA Astrophysics Data System (ADS)
Zhao, Zheng
The microgrid is a recent concept in the development of the smart grid. A microgrid is a low-voltage, small-scale network containing both distributed energy resources (DERs) and load demands. Clean energy is encouraged in a microgrid for economic and sustainability reasons. A microgrid can operate in two modes, the stand-alone mode and the grid-connected mode. In this research, day-ahead optimal energy management for a microgrid under both operational modes is studied. The objective of the optimization model is to minimize fuel cost, improve energy utilization efficiency, and reduce gas emissions by scheduling the generation of DERs for each hour of the next day. Considering the dynamic performance of the battery as the Energy Storage System (ESS), the model is a multi-objective, multi-parametric program with dynamic programming constraints, which is solved using the Advanced Dynamic Programming (ADP) method. Factors influencing battery life are then studied and included in the model in order to obtain an optimal usage pattern for the battery and reduce the associated cost. Moreover, since wind and solar generation are stochastic processes affected by the weather, the proposed optimization model is performed hourly to track weather changes. Simulation results are compared with the day-ahead energy management model. Finally, conclusions are presented and future research in microgrid energy management is discussed.
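Day-ahead scheduling with a battery can be illustrated with a bare-bones dynamic program over a discretized state of charge. The cost structure, capacities, and prices below are illustrative assumptions, far simpler than the multi-objective ADP model of the thesis:

```python
def schedule(load, solar, grid_price, cap=4, max_rate=2):
    """Toy day-ahead DP: choose hourly battery charge/discharge (integer kWh)
    to minimize the cost of grid imports. All names and numbers are
    hypothetical; the real model adds fuel cost, emissions, battery wear.
    """
    best = {0: (0.0, [])}  # state of charge -> (cost so far, actions)
    for h in range(len(load)):
        nxt = {}
        for soc, (cost, acts) in best.items():
            for dq in range(-max_rate, max_rate + 1):  # battery energy change
                soc2 = soc + dq
                if not 0 <= soc2 <= cap:
                    continue
                grid = load[h] - solar[h] + dq          # grid import this hour
                c = cost + grid_price[h] * max(grid, 0.0)
                if soc2 not in nxt or c < nxt[soc2][0]:
                    nxt[soc2] = (c, acts + [dq])
        best = nxt
    return min(best.values())

# Charge in the cheap hour, discharge in the expensive one.
cost, actions = schedule(load=[2, 2], solar=[0, 0], grid_price=[1.0, 5.0])
```

Extending the stage cost with fuel, emission, and battery-degradation terms turns this skeleton into the multi-objective problem the thesis addresses.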
Quality of Life in Chinese Persons Living With an Ostomy: A Multisite Cross-sectional Study.
Geng, Zhaohui; Howell, Doris; Xu, Honglian; Yuan, Changrong
The aim of the study was to describe health-related quality of life (HRQOL) in persons with ostomies and to explore influencing factors. Secondary analysis of data from a cross-sectional survey. Eight hundred twenty-seven persons living with an ostomy were enrolled from 5 provinces and cities in China from October 2010 to November 2012; the final sample comprised 729 individuals who completed data collection. Their mean ± SD age was 62.59 ± 12.40 years (range 26-93 years). Health-related quality of life was assessed using the City of Hope-Quality of Life-Ostomy Questionnaire-Chinese Version. Sociodemographic data, clinical characteristics, self-efficacy, adjustment to an ostomy, social support, and psychological state were measured by a general information questionnaire. We also administered the Stoma Self-Efficacy Scale, the Ostomy Adjustment Inventory-Chinese Version, the Social Support Revalued Scale, and the Hospital Anxiety and Depression Scale. Overall HRQOL among the 729 ostomy patients was in the moderate range (mean score 5.19 ± 1.29); scores for the physical, psychological, social, and spiritual domains were also in the moderate range (5.00 ± 1.73, 5.97 ± 1.59, 4.86 ± 2.31, and 4.93 ± 2.08, respectively). Multivariate analysis found that multiple factors influenced HRQOL in persons with an ostomy: gender, religious belief, and marital status; the psychological factors depression and anxiety; and specific components related to social support, self-efficacy in ostomy care, and adjustment to an ostomy. Health-related quality of life among Chinese patients with fecal ostomies was less than optimal and influenced by multiple demographic and psychosocial factors. Additional research is needed to design strategies to improve HRQOL in this population.
Bieda, Angela; Hirschfeld, Gerrit; Schönfeld, Pia; Brailovskaia, Julia; Zhang, Xiao Chi; Margraf, Jürgen
2017-04-01
Research into positive aspects of the psyche is growing as psychologists learn more about the protective role of positive processes in the development and course of mental disorders, and about their substantial role in promoting mental health. With increasing globalization, there is strong interest in studies examining positive constructs across cultures. To obtain valid cross-cultural comparisons, measurement invariance for the scales assessing positive constructs has to be established. The current study aims to assess the cross-cultural measurement invariance of questionnaires for 6 positive constructs: Social Support (Fydrich, Sommer, Tydecks, & Brähler, 2009), Happiness (Subjective Happiness Scale; Lyubomirsky & Lepper, 1999), Life Satisfaction (Diener, Emmons, Larsen, & Griffin, 1985), Positive Mental Health Scale (Lukat, Margraf, Lutz, van der Veld, & Becker, 2016), Optimism (revised Life Orientation Test [LOT-R]; Scheier, Carver, & Bridges, 1994) and Resilience (Schumacher, Leppert, Gunzelmann, Strauss, & Brähler, 2004). Participants included German (n = 4,453), Russian (n = 3,806), and Chinese (n = 12,524) university students. Confirmatory factor analyses and measurement invariance testing demonstrated at least partial strong measurement invariance for all scales except the LOT-R and Subjective Happiness Scale. The latent mean comparisons of the constructs indicated differences between national groups. Potential methodological and cultural explanations for the intergroup differences are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Analytical gradients for tensor hyper-contracted MP2 and SOS-MP2 on graphical processing units
Song, Chenchen; Martinez, Todd J.
2017-08-29
Analytic energy gradients for tensor hyper-contraction (THC) are derived and implemented for second-order Møller-Plesset perturbation theory (MP2), with and without the scaled-opposite-spin (SOS)-MP2 approximation. By exploiting the THC factorization, the formal scaling of MP2 and SOS-MP2 gradient calculations with respect to system size is reduced to quartic and cubic, respectively. An efficient implementation has been developed that utilizes both graphics processing units and sparse tensor techniques exploiting spatial sparsity of the atomic orbitals. THC-MP2 has been applied to both geometry optimization and ab initio molecular dynamics (AIMD) simulations. Furthermore, the resulting energy conservation in micro-canonical AIMD demonstrates that the implementation provides accurate nuclear gradients with respect to the THC-MP2 potential energy surfaces.
NASA Astrophysics Data System (ADS)
Cui, Shuya; Wang, Tao; Hu, Xiaoli
2014-12-01
A new chiral ionic liquid was synthesized from (S)-1-phenylethylamine and studied by IR, Raman, polarimetry, NMR and X-ray crystal diffraction. Its vibrational spectral bands are ascribed to the studied structure with the aid of DFT calculations. The optimized geometries and calculated vibrational frequencies are evaluated via comparison with experimental values. The vibrational spectral data obtained from IR and Raman spectra are assigned based on the results of theoretical calculations at the DFT-B3LYP/6-311G(d,p) level. The computed vibrational frequencies were scaled by scale factors to yield good agreement with the observed experimental vibrational frequencies. The vibrational mode assignments were performed using the animation option of the GaussView 5.0 graphical interface for the Gaussian program.
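Fitting such a frequency scale factor is a one-parameter least-squares problem: minimizing Σ(λω_i − ν_i)² over λ gives λ = Σω_iν_i / Σω_i², the standard prescription for scaling computed harmonic frequencies toward observed ones. The frequencies below are made-up illustrative values:

```python
import numpy as np

def optimal_scale_factor(calc, obs):
    """Least-squares scale factor: lambda = sum(w*v) / sum(w*w),
    minimizing sum((lambda*w_i - v_i)^2) over the frequency pairs."""
    calc, obs = np.asarray(calc, float), np.asarray(obs, float)
    return float(calc @ obs / (calc @ calc))

# Hypothetical calculated harmonic vs observed fundamental frequencies, cm^-1.
calc = np.array([3100.0, 1650.0, 1050.0])
obs = np.array([2980.0, 1600.0, 1020.0])
lam = optimal_scale_factor(calc, obs)
```

Because harmonic calculations typically overestimate fundamentals, the fitted λ lands a few percent below unity, and applying it shrinks the RMS error against the observed bands.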
Analytical gradients for tensor hyper-contracted MP2 and SOS-MP2 on graphical processing units
NASA Astrophysics Data System (ADS)
Song, Chenchen; Martínez, Todd J.
2017-10-01
Analytic energy gradients for tensor hyper-contraction (THC) are derived and implemented for second-order Møller-Plesset perturbation theory (MP2), with and without the scaled-opposite-spin (SOS)-MP2 approximation. By exploiting the THC factorization, the formal scaling of MP2 and SOS-MP2 gradient calculations with respect to system size is reduced to quartic and cubic, respectively. An efficient implementation has been developed that utilizes both graphics processing units and sparse tensor techniques exploiting spatial sparsity of the atomic orbitals. THC-MP2 has been applied to both geometry optimization and ab initio molecular dynamics (AIMD) simulations. The resulting energy conservation in micro-canonical AIMD demonstrates that the implementation provides accurate nuclear gradients with respect to the THC-MP2 potential energy surfaces.
A post-processing system for automated rectification and registration of spaceborne SAR imagery
NASA Technical Reports Server (NTRS)
Curlander, John C.; Kwok, Ronald; Pang, Shirley S.
1987-01-01
An automated post-processing system has been developed that interfaces with the raw image output of the operational digital SAR correlator. This system is designed for optimal efficiency by using advanced signal processing hardware and an algorithm that requires no operator interaction, such as the determination of ground control points. The standard output is a geocoded image product (i.e. resampled to a specified map projection). The system is capable of producing multiframe mosaics for large-scale mapping by combining images in both the along-track direction and adjacent cross-track swaths from ascending and descending passes over the same target area. The output products have absolute location uncertainty of less than 50 m and relative distortion (scale factor and skew) of less than 0.1 per cent relative to local variations from the assumed geoid.
Allen, James; Fok, Carlotta Ching Ting; Henry, David; Skewes, Monica
2012-09-01
Concerns in some settings regarding the accuracy and ethics of employing direct questions about alcohol use suggest need for alternative assessment approaches with youth. Umyuangcaryaraq is a Yup'ik Alaska Native word meaning "Reflecting." The Reflective Processes Scale was developed as a youth measure tapping awareness and thinking over potential negative consequences of alcohol misuse as a protective factor that includes cultural elements often shared by many other Alaska Native and American Indian cultures. This study assessed multidimensional structure, item functioning, and validity. Responses from 284 rural Alaska Native youth allowed bifactor analysis to assess structure, estimates of location and discrimination parameters, and convergent and discriminant validity. A bifactor model of the scale items with three content factors provided excellent fit to observed data. Item response theory analysis suggested a binary response format as optimal. Evidence of convergent and discriminant validity was established. The measure provides an assessment of reflective processes about alcohol that Alaska Native youth engage in when thinking about reasons not to drink. The concept of reflective processes has potential to extend understandings of cultural variation in mindfulness, alcohol expectancies research, and culturally mediated protective factors in Alaska Native and American Indian youth.
Kim, Jinhyun; Jung, Yoomi
2009-08-01
This paper analyzed alternative methods of calculating the conversion factor for nurse-midwives' delivery services in the national health insurance and estimated the optimal reimbursement level for the services. A cost accounting model and a Sustainable Growth Rate (SGR) model were developed to estimate the conversion factor of the Resource-Based Relative Value Scale (RBRVS) for nurse-midwives' services, depending on the scope of revenue considered in the financial analysis. Data from government sources and the financial statements of nurse-midwife clinics were used in the analysis. The cost accounting model and the SGR model yielded a 17.6-37.9% increase and a 19.0-23.6% increase, respectively, in the nurse-midwife fee for delivery services in the national health insurance. The SGR model measured an overall trend in medical expenditures rather than the individual financial status of nurse-midwife clinics, whereas the cost analysis properly estimated the level of reimbursement for nurse-midwives' services. Normal vaginal delivery in nurse-midwife clinics is considered cost-effective in terms of insurance financing. Given the declining share of health expenditures going to midwife clinics, designing a reimbursement strategy for midwives' services could be an opportunity as well as a challenge for efficient resource allocation.
Synaptic heterogeneity and stimulus-induced modulation of depression in central synapses.
Hunter, J D; Milton, J G
2001-08-01
Short-term plasticity is a pervasive feature of synapses. Synapses exhibit many forms of plasticity operating over a range of time scales. We develop an optimization method that allows rapid characterization of synapses with multiple time scales of facilitation and depression. Investigation of paired neurons that are postsynaptic to the same identified interneuron in the buccal ganglion of Aplysia reveals that the responses of the two neurons differ in the magnitude of synaptic depression. Also, for single neurons, prolonged stimulation of the presynaptic neuron causes stimulus-induced increases in the early phase of synaptic depression. These observations can be described by a model that incorporates two availability factors, e.g., depletable vesicle pools or desensitizing receptor populations, with different time courses of recovery, and a single facilitation component. This model accurately predicts the responses to novel stimuli. The source of synaptic heterogeneity is identified with variations in the relative sizes of the two availability factors, and the stimulus-induced decrement in the early synaptic response is explained by a slowing of the recovery rate of one of the availability factors. The synaptic heterogeneity and stimulus-induced modifications in synaptic depression observed here emphasize that synaptic efficacy depends on both the individual properties of synapses and their past history.
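The model described above (two depletable availability factors with different recovery time constants plus one facilitation component) can be sketched as a spike-driven simulation; the parameter values and per-spike update rules are illustrative assumptions, not the fitted values from the Aplysia recordings:

```python
import numpy as np

def simulate_epsp(spike_times, tau_f=0.2, f_inc=0.2,
                  a_fast=0.6, tau_fast=0.5, a_slow=0.4, tau_slow=5.0):
    """Toy two-availability-factor depression model with facilitation.
    D_fast, D_slow: depletable factors (e.g. vesicle pools) recovering with
    different time constants; F: facilitation decaying back to 1."""
    D_fast, D_slow, F = 1.0, 1.0, 1.0
    t_prev, amps = None, []
    for t in spike_times:
        if t_prev is not None:
            dt = t - t_prev
            # exponential recovery of both availability factors toward 1
            D_fast = 1.0 - (1.0 - D_fast) * np.exp(-dt / tau_fast)
            D_slow = 1.0 - (1.0 - D_slow) * np.exp(-dt / tau_slow)
            F = 1.0 + (F - 1.0) * np.exp(-dt / tau_f)  # facilitation decays
        amps.append(F * (a_fast * D_fast + a_slow * D_slow))
        D_fast *= 0.5   # each spike depletes the fast factor strongly
        D_slow *= 0.8   # ...and the slow factor weakly
        F += f_inc      # ...and facilitates
        t_prev = t
    return np.array(amps)
```

Synaptic heterogeneity of the kind reported in the abstract corresponds here to varying the relative weights `a_fast` and `a_slow`; stimulus-induced modulation corresponds to slowing one recovery time constant.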
Ozcan, Didem Sezgin; Koseoglu, Belma Fusun; Balci, Kevser Gulcihan; Polat, Cemile Sevgi; Ozcan, Ozgur Ulas; Balci, Mustafa Mucahit; Aydoğdu, Sinan
2018-05-21
In patients diagnosed with coronary artery disease (CAD), we aimed to determine the characteristics and risk factors of co-occurring musculoskeletal pain and examine its effects on functional capacity, psychological status and health-related quality of life. A total of 100 patients with (n= 50) and without (n= 50) musculoskeletal pain were enrolled. All patients were assessed on sociodemographic and clinical properties. The Duke Activity Status Index (DASI), the Hospital Anxiety and Depression Scale (HADS) and the Short Form-36 (SF-36) were applied as clinical assessment scales. Patients with musculoskeletal pain were mostly female, and had a lower education level and annual income. The pain was mostly nociceptive, intermittent, sharp/stabbing in character, and located in the chest and spine. Having musculoskeletal pain resulted in lower levels on the DASI and all subgroups of the SF-36, and higher levels on the HADS. Female gender, lower education level and severity of emotional distress proved to be independent risk factors for the development of musculoskeletal pain. In CAD, the co-occurrence of musculoskeletal pain leads to a further decrease in health-related quality of life and functional status, and increased severity of anxiety and depression. This stresses the importance of the detection and optimal treatment of musculoskeletal pain in patients diagnosed with CAD.
Yurek, Leo A; Havens, Donna S; Hays, Spencer; Hughes, Linda C
2015-10-01
Decisional involvement is widely recognized as an essential component of a professional nursing practice environment. In recent years, researchers have added to the conceptualization of nurses' role in decision-making to differentiate between the content and context of nursing practice. Yet, instruments that clearly distinguish between these two dimensions of practice are lacking. The purpose of this study was to examine the factorial validity of the Decisional Involvement Scale (DIS) as a measure of both the content and context of nursing practice. This secondary analysis was conducted using data from a longitudinal action research project to improve the quality of nursing practice and patient care in six hospitals (N = 1,034) in medically underserved counties of Pennsylvania. A cross-sectional analysis of baseline data from the parent study was used to compare the factor structure of two models (one nested within the other) using confirmatory factor analysis. Although a comparison of the two models indicated that the addition of second-order factors for the content and context of nursing practice improved model fit, neither model provided optimal fit to the data. Additional model-generating research is needed to develop the DIS as a valid measure of decisional involvement for both the content and context of nursing practice. © 2015 Wiley Periodicals, Inc.
Presence and process of fear of birth during pregnancy-Findings from a longitudinal cohort study.
Hildingsson, Ingegerd; Haines, Helen; Karlström, Annika; Nystedt, Astrid
2017-10-01
The prevalence of fear of birth has been estimated at 8-30%, but there is considerable heterogeneity in research design, definitions, measurement tools used and populations. There are some inconclusive findings about the stability of childbirth fear. The aims were to assess the prevalence and characteristics of women presenting with scores ≥60 on the Fear of Birth Scale (FOBS) in mid and late pregnancy, and to study change in fear of birth and associated factors. A prospective longitudinal cohort study of a one-year cohort of 1212 pregnant women from a northern part of Sweden, recruited in mid pregnancy and followed up in late pregnancy. Fear of birth was assessed using the Fear of Birth Scale, with the cut-off at ≥60. The prevalence of fear of birth was 22% in mid pregnancy and 19% in late pregnancy, a statistically significant decrease. Different patterns were found, with some women presenting with increased fear and some with decreased fear. The women who experienced more or less fear later in pregnancy could not be differentiated by background factors. More research is needed to explore factors important for reducing fear of childbirth and the optimal time to measure it. Copyright © 2017 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.
Microalgal production--a close look at the economics.
Norsker, Niels-Henrik; Barbosa, Maria J; Vermuë, Marian H; Wijffels, René H
2011-01-01
Worldwide, microalgal biofuel production is being investigated, and which type of production technology is the most adequate is strongly debated. Microalgal biomass production costs were calculated for three different microalgal production systems operating at commercial scale today: open ponds, horizontal tubular photobioreactors and flat panel photobioreactors. For the three systems, the resulting biomass production costs, including dewatering, were 4.95, 4.15 and 5.96 € per kg, respectively. The important cost factors are irradiation conditions, mixing, photosynthetic efficiency of the systems, and medium and carbon dioxide costs. Optimizing production with respect to these factors yielded a cost of 0.68 € per kg. At this cost level, microalgae become a promising feedstock for biodiesel and bulk chemicals, and photobioreactors may become attractive for microalgal biofuel production. Copyright © 2010 Elsevier Inc. All rights reserved.
Maximum plant height and the biophysical factors that limit it.
Niklas, Karl J
2007-03-01
Basic engineering theory and empirically determined allometric relationships for the biomass partitioning patterns of extant tree-sized plants show that the mechanical requirements for vertical growth do not impose intrinsic limits on the maximum heights that can be reached by species with woody, self-supporting stems. This implies that maximum tree height is constrained by other factors, among which hydraulic constraints are plausible. A review of the available information on scaling relationships observed for large tree-sized plants, nevertheless, indicates that mechanical and hydraulic requirements impose dual restraints on plant height and thus, may play equally (but differentially) important roles during the growth of arborescent, large-sized species. It may be the case that adaptations to mechanical and hydraulic phenomena have optimized growth, survival and reproductive success rather than longevity and mature size.
Sethi, Gaurav; Saini, B S
2015-12-01
This paper presents an abdomen disease diagnostic system based on the flexi-scale curvelet transform, which uses different optimal scales for extracting features from computed tomography (CT) images. To optimize the scale of the flexi-scale curvelet transform, we propose an improved genetic algorithm. The conventional genetic algorithm assumes that fit parents will likely produce the healthiest offspring; as a result, the least fit parents accumulate at the bottom of the population, reducing the fitness of subsequent populations and delaying the search for the optimal solution. In our improved genetic algorithm, combining the chromosomes of a low-fitness and a high-fitness individual increases the probability of producing high-fitness offspring. Accordingly, each of the least fit parents' chromosomes is combined with those of a highly fit parent to produce offspring for the next population. In this way, the leftover weak chromosomes cannot damage the fitness of subsequent populations. To further facilitate the search for the optimal solution, our improved genetic algorithm adopts modified elitism. The proposed method was applied to 120 CT abdominal images: 30 images each of normal subjects, cysts, tumors and stones. The features extracted by the flexi-scale curvelet transform were more discriminative than those of conventional methods, demonstrating the potential of our method as a diagnostic tool for abdomen diseases.
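The pairing scheme described in the abstract above (crossing each low-fitness parent with a high-fitness one, plus elitism) can be sketched roughly as follows. The binary encoding, the toy one-max fitness function, and all rates are assumptions for illustration, not the paper's CT-feature setup:

```python
import random

def improved_ga_step(pop, fitness, cx_rate=0.9, mut_rate=0.05):
    """One generation of the pairing scheme: each low-fitness parent is
    crossed with a high-fitness parent, so weak chromosomes are never mated
    together. Binary encoding and all rates are illustrative."""
    ranked = sorted(pop, key=fitness, reverse=True)
    elite = ranked[0]                       # modified elitism: keep the best
    half = len(ranked) // 2
    strong, weak = ranked[:half], ranked[half:]
    children = [elite]
    for s, w in zip(strong, weak):
        if random.random() < cx_rate:       # one-point crossover
            cut = random.randrange(1, len(s))
            c1, c2 = s[:cut] + w[cut:], w[:cut] + s[cut:]
        else:
            c1, c2 = s[:], w[:]
        for c in (c1, c2):
            # bit-flip mutation with probability mut_rate per gene
            child = [b ^ 1 if random.random() < mut_rate else b for b in c]
            children.append(child)
            if len(children) == len(pop):
                return children
    return children

# toy fitness: maximize the number of 1-bits (stands in for classifier accuracy)
random.seed(0)
pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]
for _ in range(30):
    pop = improved_ga_step(pop, fitness=sum)
best = max(pop, key=sum)
```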
Ren, Nanqi; Guo, Wanqian; Liu, Bingfeng; Cao, Guangli; Ding, Jie
2011-06-01
Among different technologies of hydrogen production, bio-hydrogen production exhibits perhaps the greatest potential to replace fossil fuels. Based on recent research on dark fermentative hydrogen production, this article reviews the following aspects of scaling up this technology: bioreactor development and parameter optimization, process modeling and simulation, exploitation of cheaper raw materials, and combining dark fermentation with photo-fermentation. Bioreactors are necessary for dark-fermentation hydrogen production, so the design of the reactor type and the optimization of parameters are essential. Process modeling and simulation can help engineers design and optimize large-scale systems and operations. Use of cheaper raw materials will surely accelerate the pace of scaled-up production of biological hydrogen. Finally, combining dark fermentation with photo-fermentation holds considerable promise and has achieved the maximum overall hydrogen yield from a single substrate. Future development of bio-hydrogen production is also discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
da Silva Ferreira, Veronica; Sant'Anna, Celso
2017-01-01
Chlorophyll is a commercially important natural green pigment responsible for the absorption of light energy and its conversion into chemical energy via photosynthesis in plants and algae. This bioactive compound is widely used in the food, cosmetic, and pharmaceutical industries. Chlorophyll has been consumed for health benefits as a nutraceutical agent with antioxidant, anti-inflammatory, antimutagenic, and antimicrobial properties. Microalgae are photosynthesizing microorganisms which can be extracted for several high-value bioproducts in the biotechnology industry. These microorganisms are highly efficient at adapting to physicochemical variations in the local environment. This allows optimization of culture conditions for inducing microalgal growth and biomass production as well as for changing their biochemical composition. The modulation of microalgal culture under controlled conditions has been proposed to maximize chlorophyll accumulation. Strategies reported in the literature to promote the chlorophyll content in microalgae include variation in light intensity, culture agitation, and changes in temperature and nutrient availability. These factors affect chlorophyll concentration in a species-specific manner; therefore, optimization of culture conditions has become an essential requirement. This paper provides an overview of the current knowledge on the effects of key environmental factors on microalgal chlorophyll accumulation, focusing on small-scale laboratory experiments.
Mache, Stefanie; Vitzthum, Karin; Wanke, Eileen; Klapp, Burghard F; Danzer, Gerhard
2014-01-01
The German health care system has undergone radical changes in recent decades. These days, health care professionals face economic demands, high performance pressure and high expectations from patients. To ensure high-quality medicine and care, highly intrinsically motivated and work-engaged health care professionals are strongly needed. The aim of this study was to examine relations between personal and organizational resources as essential predictors of work engagement among German health care professionals. The investigation had a cross-sectional questionnaire design, with a sample of hospital doctors. Personal strengths, working conditions and work engagement were measured using the SWOPE-K9, the Brief COPE questionnaire, the Perceived Stress Questionnaire, the COPSOQ and the Utrecht Work Engagement Scale. Significant relations between physicians' personal strengths (e.g. resilience, optimism) and work engagement were found. Work-related factors were shown to have a significant influence on work engagement. Differences in work engagement were also found with regard to socio-demographic variables. The results demonstrate important relationships between personal and organizational resources and work engagement. Health care management should use this information to maintain or develop work-engaging job conditions in hospitals as one key factor in ensuring quality health care service.
Han, Zhenyu; Sun, Shouzheng; Fu, Hongya; Fu, Yunzhong
2017-01-01
Automated fiber placement (AFP) process includes a variety of energy forms and multi-scale effects. This contribution proposes a novel multi-scale low-entropy method aiming at optimizing processing parameters in an AFP process, where multi-scale effect, energy consumption, energy utilization efficiency and mechanical properties of micro-system could be taken into account synthetically. Taking a carbon fiber/epoxy prepreg as an example, mechanical properties of macro–meso–scale are obtained by Finite Element Method (FEM). A multi-scale energy transfer model is then established to input the macroscopic results into the microscopic system as its boundary condition, which can communicate with different scales. Furthermore, microscopic characteristics, mainly micro-scale adsorption energy, diffusion coefficient entropy–enthalpy values, are calculated under different processing parameters based on molecular dynamics method. Low-entropy region is then obtained in terms of the interrelation among entropy–enthalpy values, microscopic mechanical properties (interface adsorbability and matrix fluidity) and processing parameters to guarantee better fluidity, stronger adsorption, lower energy consumption and higher energy quality collaboratively. Finally, nine groups of experiments are carried out to verify the validity of the simulation results. The results show that the low-entropy optimization method can reduce void content effectively, and further improve the mechanical properties of laminates. PMID:28869520
Hemingway, Steve; Rogers, Melanie; Elsom, Stephen
2014-03-01
To evaluate the influence of a mental health training module on the therapeutic optimism of advanced nurse practitioner (ANP) students in primary care (family practice). Three cohorts of ANPs who undertook a Mental Health Problems in Primary Care module as part of their MSc ANP (primary care) run by the University of Huddersfield completed the Elsom Therapeutic Optimism Scale (ETOS) in a pre- and post-format. The ETOS is a 10-item, self-administered scale that has previously been used to evaluate therapeutic optimism in mental health professionals. All three cohorts showed an improvement in their therapeutic optimism scores. Given that stigma has such a detrimental effect on people diagnosed with a mental health problem, education and training that make ANPs more mental health literate in turn give them the skills and confidence to engage with, and inspire hope in, people diagnosed with mental health problems. ©2013 The Author(s) ©2013 American Association of Nurse Practitioners.
Chen, Ruifeng; Zhu, Lijun; Lv, Lihuo; Yao, Su; Li, Bin; Qian, Junqing
2017-06-01
Optimization of the extraction and purification of the compatible solute ectoine from Halomonas elongata cell fermentation was investigated in laboratory tests for a large-scale commercial production project. After culturing H. elongata cells in the developed medium at 28 °C for 23-30 h, we obtained an average ectoine yield of 15.9 g/L and an average biomass of 92.9 (OD 600 ). Cell lysis was performed with acid treatment at moderately high temperature (60-70 °C). The downstream processing operations were designed as follows: filtration, desalination, cation exchange, extraction of the crude product and three rounds of refining. The cation exchange and crude-product extraction steps achieved high average recovery rates of 95 and 96%, respectively, whereas considerable loss rates of 19 and 15% were observed during filtration and desalination, respectively. Combined with the recovery of ectoine from the mother liquor of the three refining rounds, the average overall yield (relative to the amount of ectoine synthesized in cells) was 43% and the purity of the final product was over 98%. However, the key factor limiting production efficiency was not yield but the time required for extraction of the crude product, in particular the crystallization step from water, which took 24-72 h depending on the production scale. Although the method described here cannot compete with other investigations in terms of productivity and simplicity at laboratory scale, we obtained ectoine of higher purity and provide downstream processes capable of operating at industrial scale.
Strategy for long-term 3D cloud-resolving simulations over the ARM SGP site and preliminary results
NASA Astrophysics Data System (ADS)
Lin, W.; Liu, Y.; Song, H.; Endo, S.
2011-12-01
Parametric representations of cloud/precipitation processes must still be adopted in climate simulations with increasingly high spatial resolution or with emerging adaptive-mesh frameworks, and it is increasingly critical that such parameterizations be scale-aware. Continuous cloud measurements at DOE's ARM sites have provided a strong observational basis for novel cloud parameterization research at various scales. Despite significant progress in our observational ability, there are important cloud-scale physical and dynamical quantities that are either not currently observable or insufficiently sampled. To complement the long-term ARM measurements, we have explored an optimal strategy to carry out long-term 3-D cloud-resolving simulations over the ARM SGP site using the Weather Research and Forecasting (WRF) model with multi-domain nesting. The factors considered to have important influences on the simulated cloud fields include domain size, spatial resolution, model top, forcing data set, model physics and the growth of model errors. Hydrometeor advection, which may play a significant role in hydrological processes within the observational domain but is often lacking, and the limitations imposed by the constraint of domain-wide uniform forcing in conventional cloud-system-resolving model simulations, are at least partly accounted for in our approach. Conventional and probabilistic verification approaches are first employed for selected cases to optimize the model's capability of faithfully reproducing the observed means and statistical distributions of cloud-scale quantities. This then forms the basis of our setup for long-term cloud-resolving simulations over the ARM SGP site. The model results will facilitate parameterization research, as well as understanding and dissecting parameterization deficiencies in climate models.
Swamp rabbits as indicators of optimal scale for bottomland forest management
Joanne C. Crawford; Clayton K. Nielsen; Eric M. Schauber; John W. Groninger
2014-01-01
Specialist wildlife that evolved within forest ecosystems can be sensitive to disturbance regime changes and thereby serve as indicators of optimal scale for forest management. Bottomland hardwood (BLH) forests were once extensive within the Lower Mississippi Alluvial Valley, but land cover conversion has reduced BLH by about 80 percent over the last century. Since...
A Systematic Comparison between Classical Optimal Scaling and the Two-Parameter IRT Model
ERIC Educational Resources Information Center
Warrens, Matthijs J.; de Gruijter, Dato N. M.; Heiser, Willem J.
2007-01-01
In this article, the relationship between two alternative methods for the analysis of multivariate categorical data is systematically explored. It is shown that the person score of the first dimension of classical optimal scaling correlates strongly with the latent variable for the two-parameter item response theory (IRT) model. Next, under the…
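The reported correlation between the first optimal-scaling dimension and the 2PL latent trait can be illustrated with a toy simulation. This sketch assumes a standard 2PL response model and approximates the first optimal-scaling dimension by the first principal component of the standardized binary response matrix (a common surrogate); none of the specifics come from the article:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 2000, 12                                   # persons, items (toy sizes)
theta = rng.standard_normal(n)                    # latent trait
a = rng.uniform(0.8, 2.0, k)                      # 2PL discriminations
b = rng.uniform(-1.5, 1.5, k)                     # 2PL difficulties
p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
Y = (rng.random((n, k)) < p).astype(float)        # simulated item responses

# first optimal-scaling dimension, approximated here by the first principal
# component of the centered/standardized response matrix
Z = (Y - Y.mean(0)) / Y.std(0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
score = U[:, 0] * s[0]
r = abs(np.corrcoef(score, theta)[0, 1])          # correlation with the trait
```

With reasonably discriminating items, `r` comes out strongly positive, mirroring the relationship the article establishes.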
ERIC Educational Resources Information Center
Brusco, Michael J.; Stahl, Stephanie
2005-01-01
There are two well-known methods for obtaining a guaranteed globally optimal solution to the problem of least-squares unidimensional scaling of a symmetric dissimilarity matrix: (a) dynamic programming, and (b) branch-and-bound. Dynamic programming is generally more efficient than branch-and-bound, but the former is limited to matrices with…
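For very small matrices, the globally optimal least-squares unidimensional scaling solution can also be found by exhaustive search over object orders, using the closed-form coordinates for a fixed order (Defays, 1978); dynamic programming and branch-and-bound are what make larger matrices tractable. A minimal numpy sketch on a toy dissimilarity matrix:

```python
import itertools
import numpy as np

def uds_brute_force(D):
    """Guaranteed-optimal least-squares unidimensional scaling by exhaustive
    search over the n! object orders; for each fixed order the optimal
    coordinates have a closed form. Feasible only for small n."""
    n = D.shape[0]
    best = (np.inf, None)
    for perm in itertools.permutations(range(n)):
        pos = np.empty(n, dtype=int)
        pos[list(perm)] = np.arange(n)                # rank of each object
        sign = np.sign(pos[:, None] - pos[None, :])   # +1 if j precedes i
        x = (sign * D).sum(axis=1) / n                # closed-form coordinates
        loss = ((np.abs(x[:, None] - x[None, :]) - D) ** 2).sum() / 2.0
        if loss < best[0]:
            best = (loss, x)
    return best

# toy dissimilarities with a perfect 1-D representation: points at 0, 1, 3
coords = np.array([0.0, 1.0, 3.0])
D = np.abs(coords[:, None] - coords[None, :])
loss, x = uds_brute_force(D)
```

Because the data are perfectly unidimensional, the optimal loss is zero and the recovered spacings match the generating points.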
Corrections to scaling for watersheds, optimal path cracks, and bridge lines
NASA Astrophysics Data System (ADS)
Fehr, E.; Schrenk, K. J.; Araújo, N. A. M.; Kadau, D.; Grassberger, P.; Andrade, J. S., Jr.; Herrmann, H. J.
2012-07-01
We study the corrections to scaling for the mass of the watershed, the bridge line, and the optimal path crack in two and three dimensions (2D and 3D). We disclose that these models have numerically equivalent fractal dimensions and leading correction-to-scaling exponents. We conjecture all three models to possess the same fractal dimension, namely, df=1.2168±0.0005 in 2D and df=2.487±0.003 in 3D, and the same exponent of the leading correction, Ω=0.9±0.1 and Ω=1.0±0.1, respectively. The close relations between watersheds, optimal path cracks in the strong disorder limit, and bridge lines are further supported by either heuristic or exact arguments.
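A fit of the leading correction-to-scaling form M(L) = A·L^df·(1 + B·L^(-Ω)) can be sketched on synthetic data built from the 2D exponents quoted above. The grid-search-plus-linear-least-squares approach below is an illustration, not the authors' estimation procedure:

```python
import numpy as np

def mass(L, A, df, B, omega):
    """Leading correction-to-scaling form: M(L) = A * L**df * (1 + B * L**-omega)."""
    return A * L**df * (1.0 + B * L**(-omega))

L = np.array([32, 64, 128, 256, 512, 1024, 2048], dtype=float)
M = mass(L, 1.0, 1.2168, 0.5, 0.9)   # synthetic data with the 2D exponents above

# For fixed (df, omega) the model is linear in (A, A*B), so grid-search the
# two exponents and solve the linear part by least squares.
best = (np.inf, None)
for df in np.arange(1.20, 1.24, 0.0004):
    for omega in np.arange(0.7, 1.1, 0.004):
        X = np.column_stack([L**df, L**(df - omega)])
        coef = np.linalg.lstsq(X, M, rcond=None)[0]
        r = ((X @ coef - M) ** 2).sum()
        if r < best[0]:
            best = (r, (coef[0], df, coef[1] / coef[0], omega))
A, df_hat, B, omega_hat = best[1]
```

On the noiseless synthetic data the grid recovers the generating exponents to within the grid spacing.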
NASA Astrophysics Data System (ADS)
Sanan, P.; Tackley, P. J.; Gerya, T.; Kaus, B. J. P.; May, D.
2017-12-01
StagBL is an open-source parallel solver and discretization library for geodynamic simulation,encapsulating and optimizing operations essential to staggered-grid finite volume Stokes flow solvers.It provides a parallel staggered-grid abstraction with a high-level interface in C and Fortran.On top of this abstraction, tools are available to define boundary conditions and interact with particle systems.Tools and examples to efficiently solve Stokes systems defined on the grid are provided in small (direct solver), medium (simple preconditioners), and large (block factorization and multigrid) model regimes.By working directly with leading application codes (StagYY, I3ELVIS, and LaMEM) and providing an API and examples to integrate with others, StagBL aims to become a community tool supplying scalable, portable, reproducible performance toward novel science in regional- and planet-scale geodynamics and planetary science.By implementing kernels used by many research groups beneath a uniform abstraction layer, the library will enable optimization for modern hardware, thus reducing community barriers to large- or extreme-scale parallel simulation on modern architectures. In particular, the library will include CPU-, Manycore-, and GPU-optimized variants of matrix-free operators and multigrid components.The common layer provides a framework upon which to introduce innovative new tools.StagBL will leverage p4est to provide distributed adaptive meshes, and incorporate a multigrid convergence analysis tool.These options, in addition to a wealth of solver options provided by an interface to PETSc, will make the most modern solution techniques available from a common interface. StagBL in turn provides a PETSc interface, DMStag, to its central staggered grid abstraction.We present public version 0.5 of StagBL, including preliminary integration with application codes and demonstrations with its own demonstration application, StagBLDemo. 
Central to StagBL is the notion of an uninterrupted pipeline from toy/teaching codes to high-performance, extreme-scale solves. StagBLDemo replicates the functionality of an advanced MATLAB-style regional geodynamics code, thus providing users with a concrete procedure to exceed the performance and scalability limitations of smaller-scale tools.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parrish, Robert M.; Sherrill, C. David, E-mail: sherrill@gatech.edu; Hohenstein, Edward G.
2014-05-14
We apply orbital-weighted least-squares tensor hypercontraction decomposition of the electron repulsion integrals to accelerate the coupled cluster singles and doubles (CCSD) method. Using accurate and flexible low-rank factorizations of the electron repulsion integral tensor, we are able to reduce the scaling of the most vexing particle-particle ladder term in CCSD from O(N^6) to O(N^5), with remarkably low error. Combined with a T1-transformed Hamiltonian, this leads to substantial practical accelerations against an optimized density-fitted CCSD implementation.
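The factorization idea can be illustrated on random toy tensors: once the integrals are expressed in tensor hypercontraction form, a ladder-type contraction can be evaluated factor by factor without ever assembling the full four-index tensor. The dimensions and tensors below are arbitrary; the actual method fits the factors by orbital-weighted least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
v, o, P = 6, 4, 10   # virtuals, occupieds, THC grid points (toy sizes)

# THC form of the vvvv integrals: (ab|cd) ~= sum_{PQ} X_aP X_bP Z_PQ X_cQ X_dQ
X = rng.standard_normal((v, P))
Z = rng.standard_normal((P, P))
t = rng.standard_normal((v, v, o, o))           # doubles amplitudes t_{cd}^{ij}

# high-scaling route: assemble the full four-index tensor, then contract
V = np.einsum('aP,bP,PQ,cQ,dQ->abcd', X, X, Z, X, X)
ladder_naive = np.einsum('abcd,cdij->abij', V, t)

# low-scaling route: never form V; contract one factor at a time
W = np.einsum('cQ,dQ,cdij->Qij', X, X, t)       # scales as v^2 o^2 P
W = np.einsum('PQ,Qij->Pij', Z, W)              # scales as o^2 P^2
ladder_thc = np.einsum('aP,bP,Pij->abij', X, X, W)

assert np.allclose(ladder_naive, ladder_thc)    # identical result, lower cost
```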
Computational investigations of trans-platinum(II) oxime complexes used as anticancer drug
NASA Astrophysics Data System (ADS)
Sayin, Koray; Karakaş, Duran
2018-01-01
Some platinum oxime complexes are optimized at the HF/CEP-31G level, which has been reported as the best level for these types of complexes, in the gas phase. The IR spectrum is calculated and a new scale factor is derived. The NMR spectrum is calculated at the same level of theory and examined in detail. The most commonly used quantum chemical parameters are investigated and their formulas are given in detail. Additionally, selected quantum chemical parameters of the studied complexes are calculated. New theoretical IC50% formulas are derived and the biological activity rankings of the mentioned complexes are investigated.
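Deriving a vibrational scale factor of this kind typically reduces to a one-parameter least-squares fit of calculated to reference frequencies. The sketch below uses that standard definition with made-up toy frequencies, not the complexes' actual data:

```python
import numpy as np

def optimal_scale_factor(calc, ref):
    """Least-squares scale factor: the lambda minimizing
    sum_i (lambda * calc_i - ref_i)**2, i.e. sum(calc*ref) / sum(calc**2)."""
    calc, ref = np.asarray(calc, float), np.asarray(ref, float)
    return float((calc * ref).sum() / (calc ** 2).sum())

# toy frequencies in cm^-1: calculated harmonic vs reference values
calc = np.array([3100.0, 1650.0, 1200.0])
ref = np.array([2980.0, 1590.0, 1160.0])
lam = optimal_scale_factor(calc, ref)

rmse_raw = np.sqrt(np.mean((calc - ref) ** 2))          # before scaling
rmse_scaled = np.sqrt(np.mean((lam * calc - ref) ** 2)) # after scaling
```

Because lambda is the least-squares optimum, the scaled RMSE is never larger than the unscaled one, which is the error reduction the scale-factor literature reports.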
Nanometer-Scale Electrical Potential Profiling Across Perovskite Solar Cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Chuanxiao; Jiang, Chun-Sheng; Ke, Weijun
2016-11-21
We used Kelvin probe force microscopy to study the potential distribution on cross-section of perovskite solar cells with different types of electron-transporting layers (ETLs). Our results explain the low open-circuit voltage and fill factor in ETL-free cells, and support the fact that intrinsic SnO2 as an alternative ETL material can make high-performance devices. Furthermore, the potential-profiling results indicate a reduction in junction-interface recombination by the optimized SnO2 layer and adding a fullerene layer, which is consistent with the improved device performance and current-voltage hysteresis.
Kumar, Mukesh; Singh, Amrinder; Beniwal, Vikas; Salar, Raj Kumar
2016-12-01
Tannase (tannin acyl hydrolase, EC 3.1.1.20) is an inducible, largely extracellular enzyme that catalyzes the hydrolysis of the ester and depside bonds present in various substrates. Large-scale industrial application of this enzyme is very limited owing to its high production costs. In the present study, cost-effective production of tannase by Klebsiella pneumoniae KP715242 was studied under submerged fermentation using different tannin-rich agro-residues: Indian gooseberry leaves (Phyllanthus emblica), black plum leaves (Syzygium cumini), eucalyptus leaves (Eucalyptus globulus) and babul leaves (Acacia nilotica). Among all agro-residues, Indian gooseberry leaves were found to be the best substrate for tannase production under submerged fermentation. A sequential optimization approach using Taguchi orthogonal array screening and response surface methodology was adopted to optimize the fermentation variables in order to enhance enzyme production. Eleven medium components were first screened by Taguchi orthogonal array design to identify the factors contributing most to enzyme production. The four most significant variables affecting tannase production were found to be pH (23.62%), tannin extract (20.70%), temperature (20.33%) and incubation time (14.99%). These factors were further optimized with a central composite design using response surface methodology. Maximum tannase production was observed at pH 5.52, 39.72 °C, 91.82 h of incubation and 2.17% tannin content. The enzyme activity was enhanced 1.26-fold under these optimized conditions. The present study emphasizes the use of agro-residues as potential substrates with the aim of lowering the input costs of tannase production so that the enzyme can be used proficiently for commercial purposes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Andrew F.; Wetzstein, M.; Naab, T.
2009-10-01
We continue our presentation of VINE. In this paper, we begin with a description of relevant architectural properties of the serial and shared memory parallel computers on which VINE is intended to run, and describe their influences on the design of the code itself. We continue with a detailed description of a number of optimizations made to the layout of the particle data in memory and to our implementation of a binary tree used to access that data for use in gravitational force calculations and searches for smoothed particle hydrodynamics (SPH) neighbor particles. We describe the modifications to the code necessary to obtain forces efficiently from special purpose 'GRAPE' hardware, the interfaces required to allow transparent substitution of those forces in the code instead of those obtained from the tree, and the modifications necessary to use both tree and GRAPE together as a fused GRAPE/tree combination. We conclude with an extensive series of performance tests, which demonstrate that the code can be run efficiently and without modification in serial on small workstations or in parallel using the OpenMP compiler directives on large-scale, shared memory parallel machines. We analyze the effects of the code optimizations and estimate that they improve its overall performance by more than an order of magnitude over that obtained by many other tree codes. Scaled parallel performance of the gravity and SPH calculations, together the most costly components of most simulations, is nearly linear up to at least 120 processors on moderate sized test problems using the Origin 3000 architecture, and to the maximum machine sizes available to us on several other architectures. At similar accuracy, performance of VINE, used in GRAPE-tree mode, is approximately a factor 2 slower than that of VINE, used in host-only mode. Further optimizations of the GRAPE/host communications could improve the speed by as much as a factor of 3, but have not yet been implemented in VINE.
Finally, we find that although parallel performance on small problems may reach a plateau beyond which more processors bring no additional speedup, performance never decreases, a factor important for running large simulations on many processors with individual time steps, where only a small fraction of the total particles require updates at any given moment.
Ahmed, A. Bakrudeen Ali; Rao, A. S.; Rao, M. V.; Taha, Rosna Mat
2012-01-01
Gymnema sylvestre (R.Br.) is an important diabetic medicinal plant which yields pharmaceutically active compounds called gymnemic acids (GA). The present study describes callus induction and subsequent batch culture optimization, with GA quantification validated for linearity, precision, accuracy, and recovery. The best callus induction was observed in MS medium supplemented with 2,4-D (1.5 mg/L) and KN (0.5 mg/L). Evaluation and isolation of GA from calluses derived from different plant parts, namely leaf, stem and petiole, were carried out here for the first time. Factors such as light, temperature, sucrose, and photoperiod were studied to observe their effect on GA production. Temperature conditions completely inhibited GA production. Of the different sucrose concentrations tested, the highest yield (35.4 mg/g d.w.) was found at 5% sucrose, followed by a 12 h photoperiod (26.86 mg/g d.w.). Maximum GA production (58.28 mg/g d.w.) was observed in blue light. The results showed that physical and chemical factors greatly influence the production of GA in callus cultures of G. sylvestre. The factors optimized for in vitro production of GA in the present study can successfully be employed for large-scale production in bioreactors. PMID:22629221
2014-01-01
Background: In Pichia pastoris bioprocess engineering, classic approaches for clone selection and bioprocess optimization at small/micro scale using the promoter of the alcohol oxidase 1 gene (PAOX1), induced by methanol, present low reproducibility, leading to high time and resource consumption. Results: An automated microfermentation platform (RoboLector) was successfully tested to overcome the chronic problems of clone selection and optimization of fed-batch strategies. Different clones from Mut+ P. pastoris phenotype strains expressing heterologous Rhizopus oryzae lipase (ROL), including a subset also overexpressing the transcription factor HAC1, were tested to select the most promising clones. The RoboLector showed high performance for the selection and optimization of cultivation media with minimal cost and time. Syn6 medium was better than conventional YNB medium in terms of production of heterologous protein. The RoboLector microbioreactor was also tested for different fed-batch strategies with three clones producing different lipase levels. Two mixed-substrate fed-batch strategies were evaluated. The first strategy was the enzymatic release of glucose from a soluble glucose polymer by a glucosidase, with methanol added every 24 hours. The second strategy used glycerol as co-substrate jointly with methanol at two different feeding rates. The implementation of these simple fed-batch strategies increased the levels of lipolytic activity 80-fold compared to the classical batch strategies used in clone selection. Thus, these strategies minimize the risk of errors in clone selection and increase the detection level of the desired product. Finally, the performance of two fed-batch strategies for lipase production was compared between the RoboLector microbioreactor and a 5-liter stirred tank bioreactor for three selected clones. At both scales, the same clone ranking was achieved. Conclusion: The RoboLector showed excellent performance in clone selection of the P. pastoris Mut+ phenotype. The use of fed-batch strategies using mixed substrate feeds resulted in increased biomass and lipolytic activity. The automated processing of fed-batch strategies by the RoboLector considerably facilitates the operation of fermentation processes, while reducing error-prone clone selection by increasing product titers. The scale-up from microbioreactor to lab-scale stirred tank bioreactor showed an excellent correlation, validating the use of the microbioreactor as a powerful tool for evaluating fed-batch operational strategies. PMID:24606982
Monzani, Dario; Steca, Patrizia; Greco, Andrea
2014-02-01
Dispositional optimism is an individual difference promoting psychosocial adjustment and well-being during adolescence. Dispositional optimism was originally defined as a one-dimensional construct; however, empirical evidence suggests two correlated factors in the Life Orientation Test - Revised (LOT-R). The main aim of the study was to evaluate the dimensionality of the LOT-R. This study is the first attempt to identify the best factor structure, comparing congeneric, two correlated-factor, and two orthogonal-factor models in a sample of adolescents. Concurrent validity was also assessed. The results demonstrated the superior fit of the two orthogonal-factor model thus reconciling the one-dimensional definition of dispositional optimism with the bi-dimensionality of the LOT-R. Moreover, the results of correlational analyses proved the concurrent validity of this self-report measure: optimism is moderately related to indices of psychosocial adjustment and well-being. Thus, the LOT-R is a useful, valid, and reliable self-report measure to properly assess optimism in adolescence. Copyright © 2013 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
Psychometric properties of the medical outcomes study sleep scale in Spanish postmenopausal women.
Zagalaz-Anula, Noelia; Hita-Contreras, Fidel; Martínez-Amat, Antonio; Cruz-Díaz, David; Lomas-Vega, Rafael
2017-07-01
This study aimed to analyze the reliability and validity of the Spanish version of the Medical Outcomes Study Sleep Scale (MOS-SS), and its ability to discriminate between poor and good sleepers among a Spanish population with vestibular disorders. In all, 121 women (50-76 years old) completed the Spanish version of the MOS-SS. Internal consistency, test-retest reliability, and construct validity (exploratory factor analysis) were analyzed. Concurrent validity was evaluated using the Pittsburgh Sleep Quality Index and the 36-item Short Form Health Survey. To analyze the ability of the MOS-SS scores to discriminate between poor and good sleepers, a receiver-operating characteristic curve analysis was performed. The Spanish version of the MOS-SS showed excellent and substantial reliability in Sleep Problems Index I (two sleep disturbance items, one somnolence item, two sleep adequacy items, and awakening short of breath or with headache) and Sleep Problems Index II (four sleep disturbance items, two somnolence items, two sleep adequacy items, and awakening short of breath or with headache), respectively, and good internal consistency with optimal Cronbach's alpha values in all domains and indexes (0.70-0.90). Factor analysis suggested a coherent four-factor structure (explained variance 70%). In the concurrent validity analysis, the MOS-SS indexes showed significant and strong correlations with the Pittsburgh Sleep Quality Index total score, and moderate correlations with the 36-item Short Form Health Survey component summaries. Several domains and the two indexes discriminated significantly between poor and good sleepers (P < 0.05). Optimal cut-off points were above 20 for the "sleep disturbance" domain, and above 22.22 and 33.33 for Sleep Problems Index I and II, respectively. The Spanish version of the MOS-SS is a valid and reliable instrument, suitable for assessing sleep quality in Spanish postmenopausal women, with satisfactory general psychometric properties. It discriminates well between good and poor sleepers.
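Several of the validation studies in this collection report internal consistency as Cronbach's alpha (e.g., the 0.70-0.90 range quoted above). As a minimal, self-contained sketch, not tied to any dataset described here, alpha for an (respondents × items) score matrix can be computed directly from its defining formula:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Illustrative (made-up) data: 5 respondents answering 4 correlated items
scores = np.array([
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [1, 2, 1, 2],
    [4, 4, 5, 5],
    [2, 2, 2, 1],
])
alpha = cronbach_alpha(scores)
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, which is the threshold logic behind the "optimal Cronbach's alpha values" statement in the abstract.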
Large Scale Bacterial Colony Screening of Diversified FRET Biosensors
Litzlbauer, Julia; Schifferer, Martina; Ng, David; Fabritius, Arne; Thestrup, Thomas; Griesbeck, Oliver
2015-01-01
Biosensors based on Förster Resonance Energy Transfer (FRET) between fluorescent protein mutants have started to revolutionize physiology and biochemistry. However, many types of FRET biosensors show relatively small FRET changes, making measurements with these probes challenging when used under sub-optimal experimental conditions. Thus, a major effort in the field currently lies in designing new optimization strategies for these types of sensors. Here we describe procedures for optimizing FRET changes by large scale screening of mutant biosensor libraries in bacterial colonies. We describe optimization of biosensor expression, permeabilization of bacteria, software tools for analysis, and screening conditions. The procedures reported here may help in improving FRET changes in multiple suitable classes of biosensors. PMID:26061878
Optimizing Variational Quantum Algorithms Using Pontryagin’s Minimum Principle
Yang, Zhi -Cheng; Rahmani, Armin; Shabani, Alireza; ...
2017-05-18
We use Pontryagin's minimum principle to optimize variational quantum algorithms. We show that for a fixed computation time, the optimal evolution has a bang-bang (square pulse) form, both for closed and open quantum systems with Markovian decoherence. Our findings support the choice of evolution ansatz in the recently proposed quantum approximate optimization algorithm. Focusing on the Sherrington-Kirkpatrick spin glass as an example, we find a system-size independent distribution of the duration of pulses, with a characteristic time scale set by the inverse of the coupling constants in the Hamiltonian. The optimality of the bang-bang protocols and the characteristic time scale of the pulses provide an efficient parametrization of the protocol and inform the search for effective hybrid (classical and quantum) schemes for tackling combinatorial optimization problems. Moreover, we find that the success rates of our optimal bang-bang protocols remain high even in the presence of weak external noise and coupling to a thermal bath.
A new hybrid meta-heuristic algorithm for optimal design of large-scale dome structures
NASA Astrophysics Data System (ADS)
Kaveh, A.; Ilchi Ghazaan, M.
2018-02-01
In this article, a hybrid algorithm based on a vibrating particles system (VPS) algorithm, multi-design-variable configuration (Multi-DVC) cascade optimization, and an upper bound strategy (UBS) is presented for the global optimization of large-scale dome truss structures. The new algorithm is called MDVC-UVPS, in which the VPS algorithm acts as the main engine. The VPS algorithm is one of the most recent multi-agent meta-heuristic algorithms, mimicking the mechanisms of damped free vibration of single-degree-of-freedom systems. In order to handle a large number of variables, cascade sizing optimization utilizing a series of DVCs is used. Moreover, the UBS is utilized to reduce the computational time. Various dome truss examples are studied to demonstrate the effectiveness and robustness of the proposed method, as compared to some existing structural optimization techniques. The results indicate that the MDVC-UVPS technique is a powerful search and optimization method for structural engineering problems.
Seasonal-Scale Optimization of Conventional Hydropower Operations in the Upper Colorado System
NASA Astrophysics Data System (ADS)
Bier, A.; Villa, D.; Sun, A.; Lowry, T. S.; Barco, J.
2011-12-01
Sandia National Laboratories is developing the Hydropower Seasonal Concurrent Optimization for Power and the Environment (Hydro-SCOPE) tool to examine basin-wide conventional hydropower operations at seasonal time scales. This tool is part of an integrated, multi-laboratory project designed to explore different aspects of optimizing conventional hydropower operations. The Hydro-SCOPE tool couples a one-dimensional reservoir model with a river routing model to simulate hydrology and water quality. An optimization engine wraps around this model framework to solve for long-term operational strategies that best meet the specific objectives of the hydrologic system while honoring operational and environmental constraints. The optimization routines are provided by Sandia's open source DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) software. Hydro-SCOPE allows for multi-objective optimization, which can be used to gain insight into the trade-offs that must be made between objectives. The Hydro-SCOPE tool is being applied to the Upper Colorado Basin hydrologic system. This system contains six reservoirs, each with its own set of objectives (such as maximizing revenue, optimizing environmental indicators, meeting water use needs, or other objectives) and constraints. This leads to a large optimization problem with strong connectedness between objectives. The systems-level approach used by the Hydro-SCOPE tool allows simultaneous analysis of these objectives, as well as understanding of potential trade-offs related to different objectives and operating strategies. The seasonal-scale tool will be tightly integrated with the other components of this project, which examine day-ahead and real-time planning, environmental performance, hydrologic forecasting, and plant efficiency.
Personality and psychosocial function after brain injury.
Malia, K; Powell, G; Torode, S
1995-10-01
A total of 74 brain-injured patients and 46 non-neurological matched controls, consecutively admitted to a specialist medical rehabilitation unit, were administered the 'Headley Court psychosocial rating scale' and four questionnaires examining personality traits of 'locus of control', 'use of humour', 'optimism' and 'easy-going disposition'. Both pre- and post-injury personality ratings were obtained. The relatives of all participants were sent the same scales. Personality changes are reported in each of the four areas; however, time post-injury appears to be a significant factor in the type of change reported. In this cross-sectional study, changes were noted in all variables except locus of control at 6 and 12 months post-injury; at 18 months post-injury only 'easy-going disposition' showed significant change; at 24 months post-injury changes were noted in all variables except optimism; and at 30 months post-injury no changes were noted. In the present study, examining a period of 2.5 years post-injury, the personality changes remained static once they had occurred. Despite widespread reports in the literature on the importance of pre- and post-trauma personality to good psychosocial functioning, the present study found that only an 'easy-going disposition' post-trauma was consistently related to good psychosocial functioning. Reasons for this are discussed.
NASA Astrophysics Data System (ADS)
Premkumar, S.; Jawahar, A.; Mathavan, T.; Kumara Dhas, M.; Sathe, V. G.; Milton Franklin Benial, A.
2014-08-01
The molecular structure of 2-(tert-butoxycarbonyl(Boc)-amino)-5-bromopyridine (BABP) was optimized by the DFT/B3LYP method with the 6-311G(d,p), 6-311++G(d,p) and cc-pVTZ basis sets using the Gaussian 09 program. The most stable optimized structure of the molecule was predicted by the DFT/B3LYP method with the cc-pVTZ basis set. The vibrational frequencies, Mulliken atomic charge distribution, frontier molecular orbitals and thermodynamic parameters were calculated. These calculations were performed at the ground-state energy level of BABP without applying any constraint on the potential energy surface. The vibrational spectra were recorded experimentally using Fourier transform infrared (FT-IR) and micro-Raman spectrometers. The computed vibrational frequencies were scaled by scale factors to yield good agreement with the observed experimental vibrational frequencies. The complete set of theoretically calculated and experimentally observed vibrational frequencies was assigned on the basis of Potential Energy Distribution (PED) calculations using the VEDA 4.0 program. The vibrational mode assignments were performed using the animation option of the GaussView graphical interface for the Gaussian program. The molecular reactivity and stability of BABP were also studied by frontier molecular orbital (FMO) analysis.
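The "scaled by scale factors" step mentioned above has a standard least-squares form: the scale factor that minimizes the summed squared deviation between scaled computed frequencies and observed ones is λ = Σ(calc·expt) / Σ(calc²). A minimal sketch with hypothetical frequencies (illustrative values, not data from this study):

```python
import numpy as np

def optimal_scale_factor(calc, expt) -> float:
    """Least-squares frequency scale factor: minimizing
    sum((lam * calc_i - expt_i)^2) over lam gives
    lam = sum(calc * expt) / sum(calc**2)."""
    calc = np.asarray(calc, dtype=float)
    expt = np.asarray(expt, dtype=float)
    return float(np.dot(calc, expt) / np.dot(calc, calc))

calc = [3200.0, 1650.0, 1100.0]   # hypothetical computed harmonics (cm^-1)
expt = [3050.0, 1600.0, 1050.0]   # hypothetical observed fundamentals (cm^-1)
lam = optimal_scale_factor(calc, expt)
```

Because computed harmonic frequencies typically overestimate observed fundamentals, λ usually lands slightly below 1, consistent with the scale factors discussed at the head of this collection.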
Prevalence scaling: applications to an intelligent workstation for the diagnosis of breast cancer.
Horsch, Karla; Giger, Maryellen L; Metz, Charles E
2008-11-01
Our goal was to investigate the effects that changes in the prevalence of cancer in a population have on the probability of malignancy (PM) output and on the optimal combination of true-positive fraction (TPF) and false-positive fraction (FPF) of a mammographic and sonographic automatic classifier for the diagnosis of breast cancer. We investigate how a prevalence-scaling transformation, used to change the prevalence inherent in the computer estimates of the PM, affects the numerical and histographic output of a previously developed multimodality intelligent workstation. Using Bayes' rule and the binormal model, we study how changes in the prevalence of cancer in the diagnostic breast population affect our computer classifiers' optimal operating points, defined by maximizing the expected utility. Prevalence scaling affects the threshold at which a particular TPF and FPF pair is achieved. Tables giving the thresholds on the scaled PM estimates that result in particular pairs of TPF and FPF are presented. Histograms of PMs scaled to reflect clinically relevant prevalence values differ greatly from histograms of laboratory-designed PMs. The optimal (TPF, FPF) pair of our lower-performing mammographic classifier is more sensitive to changes in clinical prevalence than that of our higher-performing sonographic classifier. Prevalence scaling can be used to change computer PM output to reflect clinically more appropriate prevalence. Relatively small changes in clinical prevalence can have large effects on a computer classifier's optimal operating point.
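The prevalence-scaling transformation described here can be sketched on the odds scale: by Bayes' rule, the posterior odds equal the likelihood ratio times the prior odds, so changing the assumed prevalence multiplies the posterior odds by a fixed ratio. A minimal sketch (the function name and interface are illustrative, not the workstation's actual API):

```python
def rescale_pm(p: float, prev_old: float, prev_new: float) -> float:
    """Rescale a classifier's probability-of-malignancy estimate p,
    calibrated at prevalence prev_old, to reflect prevalence prev_new.
    Posterior odds = likelihood ratio * prior odds, so swapping the
    prior multiplies the odds by the ratio of the new to old prior odds."""
    r = (prev_new / (1.0 - prev_new)) * ((1.0 - prev_old) / prev_old)
    odds = p / (1.0 - p)
    scaled = r * odds
    return scaled / (1.0 + scaled)

# A PM of 0.5 under a 50% laboratory prevalence drops to 0.1 when
# rescaled to a 10% clinical prevalence.
pm_clinical = rescale_pm(0.5, prev_old=0.5, prev_new=0.1)
```

This monotone transformation preserves the ranking of cases (and hence the ROC curve) but shifts every decision threshold, which is why the abstract notes that prevalence scaling changes the threshold at which a given (TPF, FPF) pair is achieved.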
NASA Astrophysics Data System (ADS)
Zhang, Jingwen; Wang, Xu; Liu, Pan; Lei, Xiaohui; Li, Zejun; Gong, Wei; Duan, Qingyun; Wang, Hao
2017-01-01
The optimization of a large-scale reservoir system is time-consuming due to its intrinsic characteristics of non-commensurable objectives and high dimensionality. One way to address the problem is to employ an efficient multi-objective optimization algorithm in the derivation of large-scale reservoir operating rules. In this study, the Weighted Multi-Objective Adaptive Surrogate Model Optimization (WMO-ASMO) algorithm is used. It consists of three steps: (1) simplifying the large-scale reservoir operating rules by the aggregation-decomposition model, (2) identifying the most sensitive parameters through multivariate adaptive regression splines (MARS) for dimensional reduction, and (3) reducing computational cost and speeding up the search process by WMO-ASMO, embedded with the weighted non-dominated sorting genetic algorithm II (WNSGAII). The intercomparison of the non-dominated sorting genetic algorithm II (NSGAII), WNSGAII and WMO-ASMO is conducted in the large-scale reservoir system of the Xijiang river basin in China. Results indicate that: (1) WNSGAII surpasses NSGAII in the median of annual power generation, increased by 1.03% (from 523.29 to 528.67 billion kW h), and in the median of the ecological index, improved by 3.87% (from 1.879 to 1.809) with 500 simulations, because of the weighted crowding distance; and (2) WMO-ASMO outperforms NSGAII and WNSGAII in terms of better solutions (annual power generation of 530.032 billion kW h and ecological index of 1.675) with 1000 simulations, with computational time reduced by 25% (from 10 h to 8 h) with 500 simulations. Therefore, the proposed method is shown to be more efficient and to provide a better Pareto frontier.
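The "weighted crowding distance" credited for WNSGAII's gains builds on NSGA-II's standard crowding distance, which measures how isolated a solution is within its nondominated front. A minimal sketch of the unweighted version (the abstract does not specify WNSGAII's weighting scheme, so only the standard computation is shown):

```python
import numpy as np

def crowding_distance(front: np.ndarray) -> np.ndarray:
    """Standard NSGA-II crowding distance for an (n_points, n_objectives)
    array of objective values belonging to one nondominated front."""
    n, m = front.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(front[:, j])
        # Boundary solutions are always kept (infinite distance).
        dist[order[0]] = dist[order[-1]] = np.inf
        span = front[order[-1], j] - front[order[0], j]
        if span == 0:
            continue
        # Each interior point accumulates the normalized gap between
        # its two neighbours along this objective.
        gaps = (front[order[2:], j] - front[order[:-2], j]) / span
        dist[order[1:-1]] += gaps
    return dist

# Three points on a two-objective front: the boundary points get inf,
# the middle point gets gap contributions from both objectives.
front = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
d = crowding_distance(front)
```

In NSGA-II selection, solutions with larger crowding distance are preferred among equally ranked candidates, preserving spread along the Pareto front; a weighted variant biases this spread toward preferred regions.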
Lopez-Fernandez, Olatz; Kuss, Daria J; Pontes, Halley M; Griffiths, Mark D; Dawes, Christopher; Justice, Lucy V; Männikkö, Niko; Kääriäinen, Maria; Rumpf, Hans-Jürgen; Bischof, Anja; Gässler, Ann-Kathrin; Romo, Lucia; Kern, Laurence; Morvan, Yannick; Rousseau, Amélie; Graziani, Pierluigi; Demetrovics, Zsolt; Király, Orsolya; Schimmenti, Adriano; Passanisi, Alessia; Lelonek-Kuleta, Bernadeta; Chwaszcz, Joanna; Chóliz, Mariano; Zacarés, Juan José; Serra, Emilia; Dufour, Magali; Rochat, Lucien; Zullino, Daniele; Achab, Sophia; Landrø, Nils Inge; Suryani, Eva; Hormes, Julia M; Terashima, Javier Ponce; Billieux, Joël
2018-06-08
The prevalence of mobile phone use across the world has increased greatly over the past two decades. Problematic Mobile Phone Use (PMPU) has been studied in relation to public health and comprises various behaviours, including dangerous, prohibited, and dependent use. These types of problematic mobile phone behaviours are typically assessed with the short version of the Problematic Mobile Phone Use Questionnaire (PMPUQ-SV). However, to date, no study has ever examined the degree to which the PMPU scale assesses the same construct across different languages. The aims of the present study were to (i) determine an optimal factor structure for the PMPUQ-SV among university populations using eight versions of the scale (i.e., French, German, Hungarian, English, Finnish, Italian, Polish, and Spanish); and (ii) simultaneously examine the measurement invariance (MI) of the PMPUQ-SV across all languages. The whole study sample comprised 3038 participants. Descriptive statistics, correlations, and Cronbach's alpha coefficients were extracted from the demographic and PMPUQ-SV items. Individual and multigroup confirmatory factor analyses alongside MI analyses were conducted. Results showed a similar pattern of PMPU across the translated scales. A three-factor model of the PMPUQ-SV fitted the data well and presented with good psychometric properties. Six languages were validated independently, and five were compared via measurement invariance for future cross-cultural comparisons. The present paper contributes to the assessment of problematic mobile phone use because it is the first study to provide a cross-cultural psychometric analysis of the PMPUQ-SV.
Sato, Katsufumi; Shiomi, Kozue; Watanabe, Yuuki; Watanuki, Yutaka; Takahashi, Akinori; Ponganis, Paul J.
2010-01-01
It has been predicted that geometrically similar animals would swim at the same speed, with stroke frequency scaling with mass^(-1/3). In the present study, morphological and behavioural data obtained from free-ranging penguins (seven species) were compared. Morphological measurements support the geometric similarity. However, cruising speeds of 1.8–2.3 m s^(-1) were significantly related to mass^(0.08), and stroke frequencies were proportional to mass^(-0.29). These scaling relationships do not agree with the previous predictions for geometrically similar animals. We propose a theoretical model considering metabolic cost, work against mechanical forces (drag and buoyancy), pitch angle and dive depth. This new model predicts that: (i) the optimal swim speed, which minimizes the energy cost of transport, is proportional to (basal metabolic rate/drag)^(1/3), independent of buoyancy, pitch angle and dive depth; (ii) the optimal speed is related to mass^(0.05); and (iii) stroke frequency is proportional to mass^(-0.28). The observed scaling relationships of penguins support these predictions, which suggest that breath-hold divers swim optimally to minimize the cost of transport, including mechanical and metabolic energy, during dives. PMID:19906666
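Prediction (i) reduces the mass scaling of optimal speed to simple exponent arithmetic: if basal metabolic rate scales as mass^b and drag as mass^d, then U_opt ∝ (BMR/drag)^(1/3) ∝ mass^((b−d)/3). A minimal sketch using textbook allometric exponents (Kleiber's b = 3/4 and a surface-area drag exponent d = 2/3; these are illustrative assumptions, not the paper's fitted values):

```python
def optimal_speed_exponent(bmr_exp: float, drag_exp: float) -> float:
    """Exponent e in U_opt ∝ mass^e, given U_opt ∝ (BMR/drag)^(1/3)
    with BMR ∝ mass^bmr_exp and drag ∝ mass^drag_exp."""
    return (bmr_exp - drag_exp) / 3.0

# Textbook assumptions (hypothetical): Kleiber metabolic scaling b = 3/4,
# drag proportional to surface area, i.e. d = 2/3.
e = optimal_speed_exponent(0.75, 2.0 / 3.0)
print(f"{e:.3f}")  # prints 0.028: a small positive exponent
```

The tiny exponent explains why cruising speed is nearly mass-independent across penguin species, consistent with the weak mass^(0.05)–mass^(0.08) relationships reported above.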
Application of optimized multiscale mathematical morphology for bearing fault diagnosis
NASA Astrophysics Data System (ADS)
Gong, Tingkai; Yuan, Yanbin; Yuan, Xiaohui; Wu, Xiaotao
2017-04-01
In order to suppress noise effectively and extract the impulsive features in the vibration signals of faulty rolling element bearings, an optimized multiscale morphology (OMM) based on conventional multiscale morphology (CMM) and iterative morphology (IM) is presented in this paper. Firstly, the operator used in the IM method must be non-idempotent; therefore, an optimized difference (ODIF) operator has been designed. Furthermore, in the iterative process the current operation is performed on the basis of the previous one, which means that if a larger scale is employed, more fault features are suppressed. A unit scale is therefore proposed as the structuring element (SE) scale in IM. According to the above definitions, the IM method is applied to the results obtained by CMM over different scales. The validity of the proposed method is first evaluated on a simulated signal. Subsequently, for an outer race fault, two vibration signals sampled by different accelerometers are analyzed by OMM and CMM, respectively; the same is done for an inner race fault. The results show that the optimized method is effective in diagnosing the two bearing faults. Compared with the CMM method, the OMM method can extract many more fault features against a strong noise background.
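The building blocks of CMM are grey-scale erosions and dilations with flat structuring elements of increasing size, combined into openings and closings and averaged over scales. A minimal 1-D sketch of this generic multiscale morphological filtering (not the paper's ODIF operator, which is defined differently):

```python
import numpy as np

def erode(x: np.ndarray, size: int) -> np.ndarray:
    """Grey-scale erosion of a 1-D signal with a flat SE of odd size."""
    pad = size // 2
    xp = np.pad(x, pad, mode='edge')
    return np.array([xp[i:i + size].min() for i in range(len(x))])

def dilate(x: np.ndarray, size: int) -> np.ndarray:
    """Grey-scale dilation of a 1-D signal with a flat SE of odd size."""
    pad = size // 2
    xp = np.pad(x, pad, mode='edge')
    return np.array([xp[i:i + size].max() for i in range(len(x))])

def morph_filter(x: np.ndarray, size: int) -> np.ndarray:
    """Average of opening and closing: a simple noise-suppressing filter."""
    opening = dilate(erode(x, size), size)
    closing = erode(dilate(x, size), size)
    return 0.5 * (opening + closing)

def multiscale_morph(x: np.ndarray, scales=(3, 5, 7)) -> np.ndarray:
    """CMM-style result: average the filtered signal over several SE scales."""
    return np.mean([morph_filter(x, s) for s in scales], axis=0)
```

Larger structuring elements suppress progressively longer impulses, which is exactly the trade-off motivating the unit-scale choice in the paper's iterative stage.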
Scaling range sizes to threats for robust predictions of risks to biodiversity.
Keith, David A; Akçakaya, H Resit; Murray, Nicholas J
2018-04-01
Assessments of risk to biodiversity often rely on spatial distributions of species and ecosystems. Range-size metrics used extensively in these assessments, such as area of occupancy (AOO), are sensitive to measurement scale, prompting proposals to measure them at finer scales or at different scales based on the shape of the distribution or ecological characteristics of the biota. Despite its dominant role in red-list assessments for decades, appropriate spatial scales of AOO for predicting risks of species' extinction or ecosystem collapse remain untested and contentious. There are no quantitative evaluations of the scale-sensitivity of AOO as a predictor of risks, the relationship between optimal AOO scale and threat scale, or the effect of grid uncertainty. We used stochastic simulation models to explore risks to ecosystems and species with clustered, dispersed, and linear distribution patterns subject to regimes of threat events with different frequency and spatial extent. Area of occupancy was an accurate predictor of risk (0.81<|r|<0.98) and performed optimally when measured with grid cells 0.1-1.0 times the largest plausible area threatened by an event. Contrary to previous assertions, estimates of AOO at these relatively coarse scales were better predictors of risk than finer-scale estimates of AOO (e.g., when measurement cells are <1% of the area of the largest threat). The optimal scale depended on the spatial scales of threats more than the shape or size of biotic distributions. Although we found appreciable potential for grid-measurement errors, current IUCN guidelines for estimating AOO neutralize geometric uncertainty and incorporate effective scaling procedures for assessing risks posed by landscape-scale threats to species and ecosystems. © 2017 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
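Area of occupancy at a given measurement scale is simply the number of occupied grid cells multiplied by the cell area, which is why the metric is sensitive to cell size. A minimal sketch with hypothetical occurrence points:

```python
import math

def area_of_occupancy(points, cell_size: float) -> float:
    """AOO for (x, y) occurrence points at a given grid cell size:
    number of distinct occupied cells times the cell area."""
    cells = {(math.floor(x / cell_size), math.floor(y / cell_size))
             for x, y in points}
    return len(cells) * cell_size ** 2

# Hypothetical occurrences: two clustered points and two outliers.
pts = [(0.2, 0.1), (0.4, 0.3), (5.1, 5.2), (9.9, 0.2)]
aoo_fine = area_of_occupancy(pts, 1.0)     # 3 occupied 1x1 cells -> 3.0
aoo_coarse = area_of_occupancy(pts, 10.0)  # one 10x10 cell -> 100.0
```

The example makes the scale-sensitivity concrete: the same four points yield an AOO of 3 at a fine grid and 100 at a coarse one, which is why matching the cell size to the scale of plausible threats matters for risk prediction.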
Bae, Seong Hwan
2014-01-01
Analysis of scars in various conditions is essential, but no consensus has been reached on which scar assessment scale to select for a given condition. We reviewed papers to determine the scar assessment scales selected depending on the scar condition and treatment method. We searched PubMed for articles published since 2000 that conducted a scar evaluation using a scar assessment scale, in journals with a Journal Citation Reports impact factor >0.5. Among them, 96 articles that conducted a scar evaluation using a scar assessment scale were reviewed and analyzed. The scar assessment scales were identified and organized by various criteria. Among the types of scar assessment scales, the Patient and Observer Scar Assessment Scale (POSAS) was found to be the most frequently used. For the assessment of newly developed operative scars, the POSAS was most used. Meanwhile, for categories depending on the treatment methods for preexisting scars, the Vancouver Scar Scale (VSS) was used in 6 studies following laser treatment, the POSAS in 7 studies following surgical treatment, and the POSAS in 7 studies following conservative treatment. Within the 12 categories of scar status, the VSS showed the highest frequency in 6 categories and the POSAS showed the highest frequency in the other 6. According to our review, the POSAS and VSS are the most frequently used scar assessment scales. In the future, an optimal, universal scar scoring system is needed in order to better evaluate and treat pathologic scarring. PMID:24665417
ERIC Educational Resources Information Center
Hirsch, Jameson K.; Conner, Kenneth R.
2006-01-01
To test the hypothesis that higher levels of optimism reduce the association between hopelessness and suicidal ideation, 284 college students completed self-report measures of optimism and Beck scales for hopelessness, suicidal ideation, and depression. A statistically significant interaction between hopelessness and one measure of optimism was…
Factor Analysis of the Community Balance and Mobility Scale in Individuals with Knee Osteoarthritis.
Takacs, Judit; Krowchuk, Natasha M; Goldsmith, Charles H; Hunt, Michael A
2017-10-01
The clinical assessment of balance is an important first step in characterizing the risk of falls. The Community Balance and Mobility Scale (CB&M) is a test of balance and mobility that was designed to assess performance on advanced tasks necessary for independence in the community. However, other factors that can affect balancing ability may also be present during performance of the real-world tasks on the CB&M. It is important for clinicians to understand fully what other modifiable factors the CB&M may encompass. The purpose of this study was to evaluate the underlying constructs in the CB&M in individuals with knee osteoarthritis (OA). This was an observational study with a single testing session. Participants with knee OA aged 50 years and older completed the CB&M, a clinical test of balance and mobility. Confirmatory factor analysis was then used to examine whether the tasks on the CB&M measure distinct factors. Three a priori theory-driven models with three (strength, balance, mobility), four (range of motion added) and six (pain and fear added) constructs were evaluated using multiple fit indices. A total of 131 participants (mean [SD] age 66.3 [8.5] years, BMI 27.3 [5.2] kg m^(-2)) participated. A three-factor model in which all tasks loaded on these three factors explained 65% of the variance and yielded the best-fitting model, as determined using scree plots, chi-squared values and explained variance. The first factor accounted for 49% of the variance and was interpreted as lower limb muscle strength. The second and third factors were interpreted as mobility and balance, respectively. The CB&M demonstrated the measurement of three distinct factors, interpreted as lower limb strength, balance and mobility, supporting the use of the CB&M with people with knee OA for the evaluation of these important factors in falls risk and functional mobility. Copyright © 2016 John Wiley & Sons, Ltd.
Optimization and large scale computation of an entropy-based moment closure
NASA Astrophysics Data System (ADS)
Kristopher Garrett, C.; Hauck, Cory; Hill, Judith
2015-12-01
We present computational advances and results in the implementation of an entropy-based moment closure, M_N, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as P_N, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. These results show, in particular, load balancing issues in scaling the M_N algorithm that do not appear for the P_N algorithm. We also observe that in weak scaling tests, the ratio in time to solution of M_N to P_N decreases.
NASA Astrophysics Data System (ADS)
Ushijima, Timothy T.; Yeh, William W.-G.
2013-10-01
An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
Value, Challenges, and Satisfaction of Certification for Multiple Sclerosis Specialists
Halper, June
2014-01-01
Background: Specialist certification among interdisciplinary multiple sclerosis (MS) team members provides formal recognition of a specialized body of knowledge felt to be necessary to provide optimal care to individuals and families living with MS. Multiple sclerosis specialist certification (MS Certified Specialist, or MSCS) first became available in 2004 for MS interdisciplinary team members, but prior to the present study had not been evaluated for its perceived value, challenges, and satisfaction. Methods: A sample consisting of 67 currently certified MS specialists and 20 lapsed-certification MS specialists completed the following instruments: Perceived Value of Certification Tool (PVCT), Perceived Challenges and Barriers to Certification Scale (PCBCS), Overall Satisfaction with Certification Scale, and a demographic data form. Results: Satisfactory reliability was shown for the total scale and four factored subscales of the PVCT and for two of the three factored PCBCS subscales. Currently certified MS specialists perceived significantly greater value and satisfaction than lapsed-certification MS specialists in terms of employer and peer recognition, validation of MS knowledge, and empowering MS patients. Lapsed-certification MS specialists reported increased confidence and caring for MS patients using evidence-based practice. Both currently certified and lapsed-certification groups reported dissatisfaction with MSCS recognition and pay/salary rewards. Conclusions: The results of this study can be used in efforts to encourage initial certification and recertification of interdisciplinary MS team members. PMID:25061432
Konstabel, Kenn; Lönnqvist, Jan-Erik; Leikas, Sointu; García Velázquez, Regina; Qin, Hiaying; Verkasalo, Markku; Walkowitz, Gari
2017-01-01
The aim of this study was to construct a short, 30-item personality questionnaire that would be, in terms of content and meaning of the scores, as comparable as possible with longer, well-established inventories such as NEO PI-R and its clones. To do this, we shortened the formerly constructed 60-item “Short Five” (S5) by half so that each subscale would be represented by a single item. We compared all possibilities of selecting 30 items (preserving balanced keying within each domain of the five-factor model) in terms of correlations with well-established scales, self-peer correlations, and clarity of meaning, and selected an optimal combination for each domain. The resulting shortened questionnaire, XS5, was compared to the original S5 using data from student samples in 6 different countries (Estonia, Finland, UK, Germany, Spain, and China), and a representative Finnish sample. The correlations between XS5 domain scales and their longer counterparts from well-established scales ranged from 0.74 to 0.84; the difference from the equivalent correlations for full version of S5 or from meta-analytic short-term dependability coefficients of NEO PI-R was not large. In terms of prediction of external criteria (emotional experience and self-reported behaviours), there were no important differences between XS5, S5, and the longer well-established scales. Controlling for acquiescence did not improve the prediction of criteria, self-peer correlations, or correlations with longer scales, but it did improve internal reliability and, in some analyses, comparability of the principal component structure. XS5 can be recommended as an economic measure of the five-factor model of personality at the level of domain scales; it has reasonable psychometric properties, fair correlations with longer well-established scales, and it can predict emotional experience and self-reported behaviours no worse than S5. 
When subscales are essential, we would still recommend using the full version of S5. PMID:28800630
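The item-selection idea, comparing all admissible subsets against the full scale and keeping the best, can be sketched on toy data. The respondents and items below are hypothetical; the real study screens 30-item subsets of the 60-item S5 (preserving balanced keying) against longer established scales and self-peer correlations:

```python
from itertools import combinations

# Toy short-form construction: among all 2-item subsets of 4 hypothetical
# items, keep the subset whose sum score correlates best with the
# full-scale score.
data = [  # rows = respondents, columns = items
    [5, 4, 2, 1],
    [4, 5, 1, 2],
    [3, 3, 2, 2],
    [2, 1, 5, 4],
    [1, 2, 4, 5],
    [2, 2, 3, 3],
]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

full = [sum(row) for row in data]  # full-scale score per respondent

def subset_score(items):
    return [sum(row[i] for i in items) for row in data]

best = max(combinations(range(4), 2),
           key=lambda items: pearson(subset_score(items), full))
```

The real selection additionally weighs clarity of meaning and external criteria, but the subset-versus-full-scale correlation shown here is the backbone of the comparison.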
Tran, Thach Duc; Tran, Tuan; Fisher, Jane
2013-01-12
Depression and anxiety are recognised increasingly as serious public health problems among women in low- and lower-middle-income countries. The aim of this study was to validate the 21-item Depression Anxiety and Stress Scale (DASS-21) for use in screening for these common mental disorders among rural women with young children in the north of Vietnam. The DASS-21 was translated from English to Vietnamese, culturally verified, back-translated and administered to women who also completed, separately, a psychiatrist-administered Structured Clinical Interview for DSM-IV Axis 1 diagnoses of depressive and anxiety disorders. The sample was a community-based representative cohort of adult women with young children living in Ha Nam Province in northern Viet Nam. Cronbach's alpha, Exploratory Factor Analyses (EFA) and Receiver Operating Characteristic (ROC) analyses were performed to identify the psychometric properties of the Depression, Anxiety, and Stress subscales and the overall scale. Complete data were available for 221 women. The internal consistency (Cronbach's alpha) of each subscale and the overall scale was high, ranging from 0.70 for the Stress subscale to 0.88 for the overall scale, but EFA indicated that the 21 items all loaded on one factor. Scores on each of the three subscales, and combinations of two or three of them, were able to detect the common mental disorders of depression and anxiety in women with a sensitivity of 79.1% and a specificity of 77.0% at the optimal cutoff of >33. However, they did not distinguish between those experiencing only depression or only anxiety. The total score of the 21 items of the DASS-21 Vietnamese validation appears to be comprehensible and sensitive in detecting common mental disorders in women with young children in primary health care in rural northern Vietnam, and therefore might also be useful to screen for these conditions in other resource-constrained settings.
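The cutoff-selection step of such a ROC analysis can be sketched as follows. The (score, diagnosis) pairs are invented for illustration (the study's optimal DASS-21 cutoff was >33), and the sketch uses Youden's J = sensitivity + specificity − 1, one common criterion for an "optimal" cutoff:

```python
# Toy screening data: (total scale score, clinician diagnosis 0/1).
cases = [(12, 0), (18, 0), (25, 0), (29, 0), (31, 0),
         (27, 1), (35, 1), (40, 1), (47, 1)]

def sens_spec(cutoff):
    # a "positive screen" means total score strictly greater than the cutoff
    tp = sum(1 for s, d in cases if d == 1 and s > cutoff)
    fn = sum(1 for s, d in cases if d == 1 and s <= cutoff)
    tn = sum(1 for s, d in cases if d == 0 and s <= cutoff)
    fp = sum(1 for s, d in cases if d == 0 and s > cutoff)
    return tp / (tp + fn), tn / (tn + fp)

# evaluate every observed score as a candidate cutoff and keep the one
# maximizing Youden's J = sensitivity + specificity - 1
best = max((s for s, _ in cases), key=lambda c: sum(sens_spec(c)) - 1)
```

In practice the candidate thresholds, sensitivities, and specificities come directly from the empirical ROC curve rather than a hand-listed table.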
Marfeo, Elizabeth E; Ni, Pengsheng; Chan, Leighton; Rasch, Elizabeth K; Jette, Alan M
2014-07-01
The goal of this article was to investigate the optimal functioning of frequency vs. agreement rating scales in two subdomains of the newly developed Work Disability Functional Assessment Battery: the Mood & Emotions and Behavioral Control scales. A psychometric study comparing rating-scale performance was embedded in a cross-sectional survey used to develop a new instrument measuring behavioral health functioning among adults applying for disability benefits in the United States. Within the sample of 1,017 respondents, the range of response category endorsement was similar for both frequency and agreement item types for both scales. There were fewer missing values in the frequency items than in the agreement items. Both frequency and agreement items showed acceptable reliability. The frequency items demonstrated optimal effectiveness around the mean ± 1-2 standard deviation score range; the agreement items performed better at the extreme score ranges. Findings suggest an optimal response format requires a mix of both agreement-based and frequency-based items. Frequency items perform better in the normal range of responses, capturing specific behaviors, reactions, or situations that may elicit a specific response. Agreement items do better for those whose scores are more extreme, and capture subjective content related to general attitudes, behaviors, or feelings of work-related behavioral health functioning. Copyright © 2014 Elsevier Inc. All rights reserved.
Li, Zheng; Zhou, Tao; Zhao, Xiang; Huang, Kaicheng; Gao, Shan; Wu, Hao; Luo, Hui
2015-07-08
Drought is expected to increase in frequency and severity due to global warming, and its impacts on vegetation are typically evaluated with climatic drought indices, such as the multi-scalar Standardized Precipitation Evapotranspiration Index (SPEI). We analyzed the covariation between the SPEIs of various time scales and the anomalies of the normalized difference vegetation index (NDVI), from which vegetation type-related optimal time scales were retrieved. The results indicated that the optimal time scales of needle-leaved forest, broadleaf forest and shrubland were between 10 and 12 months, considerably longer than those of grassland, meadow and cultivated vegetation (2 to 4 months). When these vegetation type-related optimal time scales were used, the SPEI better reflected the vegetation's responses to water conditions, with the correlation coefficients between SPEIs and NDVI anomalies increasing by 5.88% to 28.4%. We investigated the spatio-temporal characteristics of drought and quantified the different responses of vegetation growth to drought during the growing season (April-October). The results revealed that the frequency of drought has increased in the 21st century, with a drying trend occurring in most of China. These results are useful for ecological assessments and for adapting management steps to mitigate the impact of drought on vegetation. They are helpful for employing water resources more efficiently and reducing the potential damage to human health caused by water shortages.
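The core of the optimal time-scale retrieval, correlating the SPEI at each candidate scale with NDVI anomalies and keeping the strongest, can be sketched with hypothetical monthly series (real SPEIs would be computed from precipitation and evapotranspiration records):

```python
# Invented stand-ins for monthly NDVI anomalies and SPEI series at three
# candidate time scales (months); real series span decades, not six months.
ndvi_anom = [0.1, -0.2, 0.3, -0.1, 0.2, -0.3]
spei = {
    2:  [0.2, -0.1, 0.1, 0.0, 0.1, -0.2],
    6:  [0.1, -0.3, 0.4, -0.2, 0.3, -0.4],
    12: [-0.1, 0.2, -0.2, 0.1, -0.1, 0.3],
}

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# the vegetation type's optimal time scale is the one whose SPEI series
# covaries most strongly with the NDVI anomalies
optimal_scale = max(spei, key=lambda k: pearson(spei[k], ndvi_anom))
```

Applied pixel-by-pixel with the vegetation map, this argmax is what yields the 2-4 month (grassland) versus 10-12 month (forest) contrast reported above.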
van der Naalt, Joukje; Timmerman, Marieke E; de Koning, Myrthe E; van der Horn, Harm J; Scheenen, Myrthe E; Jacobs, Bram; Hageman, Gerard; Yilmaz, Tansel; Roks, Gerwin; Spikman, Jacoba M
2017-07-01
Mild traumatic brain injury (mTBI) accounts for most cases of TBI, and many patients show incomplete long-term functional recovery. We aimed to create a prognostic model for functional outcome by combining demographics, injury severity, and psychological factors to identify patients at risk of incomplete recovery at 6 months. In particular, we investigated the additional value of indicators of emotional distress and coping style at 2 weeks over early predictors measured at the emergency department. The UPFRONT study was an observational cohort study done at the emergency departments of three level-1 trauma centres in the Netherlands, which included patients with mTBI, defined by a Glasgow Coma Scale (GCS) score of 13-15 and either post-traumatic amnesia lasting less than 24 h or loss of consciousness for less than 30 min. Emergency department predictors were measured either on admission with mTBI, comprising injury severity (GCS score, post-traumatic amnesia, and CT abnormalities), demographics (age, gender, educational level, pre-injury mental health, and previous brain injury), and physical conditions (alcohol use on the day of injury, neck pain, headache, nausea, dizziness), or at 2 weeks, when we obtained data on mood (Hospital Anxiety and Depression Scale), emotional distress (Impact of Event Scale), coping (Utrecht Coping List), and post-traumatic complaints. The functional outcome was recovery, assessed at 6 months after injury with the Glasgow Outcome Scale Extended (GOSE). We dichotomised recovery into complete (GOSE=8) and incomplete (GOSE≤7) recovery. We used logistic regression analyses to assess the predictive value of patient information collected at the time of admission to an emergency department (eg, demographics, injury severity) alone, and combined with predictors of outcome collected at 2 weeks after injury (eg, emotional distress and coping).
Between Jan 25, 2013, and Jan 6, 2015, data from 910 patients with mTBI were collected 2 weeks after injury; the final date for 6-month follow-up was July 6, 2015. Of these patients, 764 (84%) had post-traumatic complaints and 414 (45%) showed emotional distress. At 6 months after injury, outcome data were available for 671 patients; complete recovery (GOSE=8) was observed in 373 (56%) patients and incomplete recovery (GOSE≤7) in 298 (44%) patients. Logistic regression analyses identified several predictors of 6-month outcome, including education and age, with a clear surplus value for the indicators of emotional distress and coping obtained at 2 weeks (area under the curve [AUC]=0.79, optimism 0.02; Nagelkerke R2=0.32, optimism 0.05) compared with the emergency department predictors at the time of admission alone (AUC=0.72, optimism 0.03; Nagelkerke R2=0.19, optimism 0.05). Psychological factors (ie, emotional distress and maladaptive coping experienced early after injury), in combination with pre-injury mental health problems, education, and age, are important predictors of recovery at 6 months following mTBI. These findings provide targets for early interventions to improve outcome in a subgroup of patients at risk of incomplete recovery from mTBI, and warrant validation. Funded by the Dutch Brain Foundation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Impacts of Greening Materials and Seed Pretreatment on Vegetation Development at an Initial Stage
NASA Astrophysics Data System (ADS)
Obriejetan, Michael
2015-04-01
Slope protection using greening measures as an integral part of soil bioengineering faces increasing demand in research and practice. However, successful greening is a complex issue owing to the wide variety of site-specific slope characteristics such as morphology, soil properties and environmental factors. Practical experience in the greening of slopes, together with the results of further small-scale tests, indicates that the use of appropriate planting techniques, the quality of the materials used and the proper application of any auxiliary materials needed at difficult locations are key success criteria for sustainable vegetation development. Within this framework, small-scale testing series were conducted on the influence of specific soil properties, the use of auxiliary greening materials (fertilizer, mycorrhizal fungi, bonded fiber matrix (BFM)…), the application of different seed-pretreatment methods, and the influence of specific environmental factors (inclination, seeding depth) on vegetation development in an early phase. The aim of the series is to quantitatively, and thus economically, optimize the use of different greening components and seed mixtures for practical application, while ensuring optimal development of vegetation. To quantify the influence of the treatment systems, vegetation cover ratio, biomass production (aboveground and belowground) and the germination of plant seeds served as the main criteria for assessing development at an initial stage. Selected findings show, for instance, that the admixture of mycorrhizal fungi can increase the cover ratio by up to 23% compared with untreated plots. In addition, pretreatment of seeds also showed distinct effects, shortening the germination phase and increasing the capability of producing a higher number of healthy sprouts.
From a bioengineering perspective, the results will serve as a potentially decisive advantage for the successful implementation of greening measures.
Marcano, Mariano; Layton, Anita T; Layton, Harold E
2010-02-01
In a mathematical model of the urine concentrating mechanism of the inner medulla of the rat kidney, a nonlinear optimization technique was used to estimate parameter sets that maximize the urine-to-plasma osmolality ratio (U/P) while maintaining the urine flow rate within a plausible physiologic range. The model, which used a central core formulation, represented loops of Henle turning at all levels of the inner medulla and a composite collecting duct (CD). The parameters varied were: water flow and urea concentration in tubular fluid entering the descending thin limbs and the composite CD at the outer-inner medullary boundary; scaling factors for the number of loops of Henle and CDs as a function of medullary depth; location and increase rate of the urea permeability profile along the CD; and a scaling factor for the maximum rate of NaCl transport from the CD. The optimization algorithm sought to maximize a quantity E that equaled U/P minus a penalty function for insufficient urine flow. Maxima of E were sought by changing parameter values in the direction in parameter space in which E increased. The algorithm attained a maximum E that increased urine osmolality and inner medullary concentrating capability by 37.5% and 80.2%, respectively, above base-case values; the corresponding urine flow rate and the concentrations of NaCl and urea were all within or near reported experimental ranges. Our results predict that urine osmolality is particularly sensitive to three parameters: the urea concentration in tubular fluid entering the CD at the outer-inner medullary boundary, the location and increase rate of the urea permeability profile along the CD, and the rate of decrease of the CD population (and thus of CD surface area) along the cortico-medullary axis.
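The ascent strategy described, changing parameter values in the direction in parameter space along which the penalized objective E increases, can be sketched with a toy response surface. The quadratic "model" and flow penalty below are invented stand-ins for the kidney model, not its equations:

```python
# Toy stand-in for the paper's optimization: maximize E = (U/P surrogate)
# minus a penalty for implausibly low urine flow.

def osmolality_ratio(p):
    # hypothetical smooth U/P surrogate peaking at p = (2, 3)
    return 8.0 - (p[0] - 2.0) ** 2 - (p[1] - 3.0) ** 2

def urine_flow(p):
    return 1.0 + 0.2 * p[0]  # toy flow model (arbitrary units)

def E(p):
    # penalty term activates only when flow drops below a plausible floor
    penalty = max(0.0, 0.5 - urine_flow(p)) * 100.0
    return osmolality_ratio(p) - penalty

def coordinate_ascent(p, step=0.1, iters=100):
    # move each parameter in the direction along which E increases,
    # mirroring the paper's ascent strategy in parameter space
    for _ in range(iters):
        for i in range(len(p)):
            for delta in (step, -step):
                q = list(p)
                q[i] += delta
                if E(q) > E(p):
                    p = q
    return p

p_opt = coordinate_ascent([0.0, 0.0])
```

In the actual model each evaluation of E requires solving the tubular transport equations, so the number of ascent steps, not the arithmetic shown here, dominates the cost.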
Hu, Suwen; Deng, Lei; Wang, Huamao; Zhuang, Yingping; Chu, Ju; Zhang, Siliang; Li, Zhonghai; Guo, Meijin
2011-05-01
The mouse-human chimeric anti-epidermal growth factor receptor vIII (EGFRvIII) antibody C12 is a promising candidate for the diagnosis of hepatocellular carcinoma (HCC). In this study, three processes were successfully developed to produce C12 by cultivation of recombinant Chinese hamster ovary (CHO-DG44) cells in serum-free medium. The effect of inoculum density was evaluated in batch cultures in shaker flasks, giving an optimal inoculum density of 5 × 10^5 cells/mL. The basic metabolic characteristics of CHO-C12 cells were then studied in stirred-bioreactor batch cultures. The results showed that the limiting concentrations of glucose and glutamine were 6 and 1 mM, respectively. The culture process consumed significant amounts of aspartate, glutamate, asparagine, serine, isoleucine, leucine, and lysine. Aspartate, glutamate, asparagine, and serine were particularly exhausted in the early growth stage, thus limiting cell growth and antibody synthesis. Based on these findings, fed-batch and perfusion processes in the bioreactor were successfully developed with a balanced amino acid feed strategy. Fed-batch and especially perfusion culture effectively maintained high cell viability to prolong the culture process. Furthermore, perfusion cultures maximized the efficiency of nutrient utilization: the mean yield coefficient of antibody to consumed glucose was 44.72 mg/g and that of antibody to consumed glutamine was 721.40 mg/g. Finally, in small-scale bioreactor culture, the highest total amount of C12 antibody (1,854 mg) was realized in perfusion cultures. Therefore, perfusion culture appears to be the optimal process for small-scale production of C12 antibody by rCHO-C12 cells.
Tanguy, Audrey; Villot, Jonathan; Glaus, Mathias; Laforest, Valérie; Hausler, Robert
2017-06-01
Waste recovery is an integral part of municipal solid waste management systems, but its strategic planning remains challenging. In particular, the service area size of facilities is a sensitive issue, since its calculation depends on various factors related to treatment technologies (output products) and territorial features (waste production and location of the sources). This work presents a systemic approach to estimating a chain's service area size, based on a balance between costs and recovery profits. The model assigns a recovery performance value to each source, which can be positive, neutral or negative. If it is positive, the source should be included in the facility's service area. Applied to the case of food waste recovery by anaerobic digestion in Montreal, the approach showed that at most 23 of the 30 districts should be included in the service area, depending on the indicator, representing around 127,000 t of waste recovered per year. Because of the systemic approach, these districts were not necessarily the closest to the facility. Moreover, for the Montreal case, changing the facility's location did not greatly influence the optimal service area size, showing that distance to the facility was not a decisive factor at this scale. However, replacing anaerobic digestion with a composting plant reduced the break-even transport distances and, thus, the number of sources worth collecting (around 68,500 t/year). In this way, the methodology, applied to different management strategies, gave a sense of the spatial dynamics involved in the design of the recovery chain. The resulting map of optimal supply could be used to further analyse the feasibility of multi-site and/or multi-technology systems for the territory considered. Copyright © 2017 Elsevier Ltd. All rights reserved.
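The source-by-source inclusion rule can be sketched with invented district data. The tonnages, distances, and the profit and transport-cost coefficients below are assumptions, not the Montreal figures; the point is the sign test on each source's net recovery performance:

```python
# Sign test behind the service-area rule: include a source (district) only
# if its recovery profit net of transport cost is positive.
districts = {  # name: (waste tonnes/year, distance to facility in km)
    "A": (20000, 5),
    "B": (15000, 12),
    "C": (8000, 30),
    "D": (5000, 45),
}
PROFIT_PER_T = 30.0    # assumed recovery profit, $/t
COST_PER_T_KM = 0.8    # assumed transport cost, $/(t km)

def net_performance(tonnes, dist_km):
    # positive => worth collecting; with these coefficients the break-even
    # transport distance is PROFIT_PER_T / COST_PER_T_KM = 37.5 km
    return tonnes * (PROFIT_PER_T - COST_PER_T_KM * dist_km)

service_area = [name for name, (t, km) in districts.items()
                if net_performance(t, km) > 0]
recovered = sum(districts[name][0] for name in service_area)
```

Swapping the technology (e.g., composting instead of anaerobic digestion) amounts to lowering `PROFIT_PER_T`, which shrinks the break-even distance and hence the service area, as the abstract reports.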
A review of supervised object-based land-cover image classification
NASA Astrophysics Data System (ADS)
Ma, Lei; Li, Manchun; Ma, Xiaoxue; Cheng, Liang; Du, Peijun; Liu, Yongxue
2017-08-01
Object-based image classification for land-cover mapping purposes using remote-sensing imagery has attracted significant attention in recent years. Numerous studies conducted over the past decade have investigated a broad array of sensors, feature selection, classifiers, and other factors of interest. However, these research results have not yet been synthesized to provide coherent guidance on the effect of different supervised object-based land-cover classification processes. In this study, we first construct a database with 28 fields using qualitative and quantitative information extracted from 254 experimental cases described in 173 scientific papers. Second, the results of the meta-analysis are reported, including general characteristics of the studies (e.g., the geographic range of relevant institutes, preferred journals) and the relationships between factors of interest (e.g., spatial resolution and study area or optimal segmentation scale, accuracy and number of targeted classes), especially with respect to the classification accuracy of different sensors, segmentation scale, training set size, supervised classifiers, and land-cover types. Third, useful data on supervised object-based image classification are determined from the meta-analysis. For example, we find that supervised object-based classification is currently experiencing rapid advances, while development of the fuzzy technique is limited in the object-based framework. Furthermore, spatial resolution correlates with the optimal segmentation scale and study area, and Random Forest (RF) shows the best performance in object-based classification. The area-based accuracy assessment method can obtain stable classification performance, and indicates a strong correlation between accuracy and training set size, while the accuracy of the point-based method is likely to be unstable due to mixed objects. 
In addition, the overall accuracy benefits from higher spatial resolution images (e.g., unmanned aerial vehicle) or agricultural sites where it also correlates with the number of targeted classes. More than 95.6% of studies involve an area less than 300 ha, and the spatial resolution of images is predominantly between 0 and 2 m. Furthermore, we identify some methods that may advance supervised object-based image classification. For example, deep learning and type-2 fuzzy techniques may further improve classification accuracy. Lastly, scientists are strongly encouraged to report results of uncertainty studies to further explore the effects of varied factors on supervised object-based image classification.
Bokhari, Awais; Yusup, Suzana; Chuah, Lai Fatt; Klemeš, Jiří Jaromír; Asif, Saira; Ali, Basit; Akbar, Majid Majeed; Kamil, Ruzaimah Nik M
2017-10-01
Chemical interesterification of rubber seed oil has been investigated with four differently designed orifice devices in a pilot-scale hydrodynamic cavitation (HC) system. Upstream pressures of 1-3.5 bar induced cavities to intensify the process. The optimal orifice plate geometry was a plate with 21 holes of 1 mm diameter at an inlet pressure of 3 bar. The optimum interesterification conditions, determined by response surface methodology, were a methyl acetate to oil molar ratio of 14:1, a catalyst amount of 0.75 wt% and a reaction time of 20 min at 50 °C. HC was compared with mechanical stirring (MS) at the optimised values: the reaction rate constant and frequency factor for HC were, respectively, 3.4-fold and 3.2-fold higher than for MS. The interesterified product was characterised following the EN 14214 and ASTM D 6751 international standards. Copyright © 2017 Elsevier Ltd. All rights reserved.
Cui, Shuya; Wang, Tao; Hu, Xiaoli
2014-12-10
A new chiral ionic liquid was synthesized from (S)-1-phenylethylamine and characterized by IR, Raman, polarimetry, NMR and X-ray crystal diffraction. Its vibrational spectral bands are precisely ascribed to the studied structure with the aid of DFT calculations. The optimized geometries and calculated vibrational frequencies are evaluated via comparison with experimental values. The vibrational spectral data obtained from the IR and Raman spectra are assigned based on theoretical calculations at the DFT-B3LYP/6-311G(d,p) level. The computed vibrational frequencies were scaled by scale factors to yield good agreement with the observed experimental vibrational frequencies. The vibrational mode assignments were performed using the animation option of the GaussView 5.0 graphical interface for the Gaussian program. Copyright © 2014 Elsevier B.V. All rights reserved.
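The frequency-scaling step mentioned here (and studied systematically in the scale-factor literature) is commonly a one-parameter least-squares fit: choose the factor lambda minimizing the sum of squared deviations between scaled computed and observed frequencies, which gives lambda = Σ w_i v_i / Σ w_i². The four frequencies below are illustrative values, not data from this paper:

```python
# One-parameter least-squares scale factor: lambda = sum(w*v) / sum(w*w),
# where w are computed harmonic frequencies and v observed frequencies
# (all in cm^-1). The values below are illustrative only.
calc = [3100.0, 1650.0, 1200.0, 950.0]   # computed (e.g., B3LYP) frequencies
obs = [2980.0, 1600.0, 1170.0, 930.0]    # observed frequencies

scale = sum(w * v for w, v in zip(calc, obs)) / sum(w * w for w in calc)
scaled = [scale * w for w in calc]       # frequencies after scaling
print(round(scale, 4))  # -> 0.9653
```

Because the fit weights each mode by its frequency squared, high-frequency stretches dominate the optimized factor, which is one reason separate factors are often derived for zero-point energies and low-frequency modes.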
Efficient Schmidt number scaling in dissipative particle dynamics
NASA Astrophysics Data System (ADS)
Krafnick, Ryan C.; García, Angel E.
2015-12-01
Dissipative particle dynamics is a widely used mesoscale technique for the simulation of hydrodynamics (as well as immersed particles) utilizing coarse-grained molecular dynamics. While the method is capable of describing any fluid, the typical choice of the friction coefficient γ and dissipative force cutoff rc yields an unacceptably low Schmidt number Sc for the simulation of liquid water at standard temperature and pressure. There are a variety of ways to raise Sc, such as increasing γ and rc, but the relative cost of modifying each parameter (and the concomitant impact on numerical accuracy) has heretofore remained undetermined. We perform a detailed search over the parameter space, identifying the optimal strategy for the efficient and accuracy-preserving scaling of Sc, using both numerical simulations and theoretical predictions. The composite results recommend a parameter choice that leads to a speed improvement of a factor of three versus previously utilized strategies.
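The parameter-space search described can be sketched as a selection over scan results: among (γ, r_c) pairs whose measured Schmidt number reaches the target, keep the cheapest run. The scan table below is invented; real entries would come from DPD simulations and the accompanying theoretical predictions:

```python
# Selecting DPD parameters for a target Schmidt number Sc: among
# (gamma, r_c) pairs whose measured Sc reaches the target, keep the run
# with the lowest cost. The numbers are made up for illustration.
scan = {
    # (gamma, r_c): (measured Sc, relative CPU cost per step)
    (4.5, 1.0): (1.0, 1.0),
    (40.0, 1.0): (8.0, 1.1),
    (200.0, 1.0): (45.0, 1.3),
    (4.5, 1.5): (50.0, 3.4),
    (40.0, 1.5): (400.0, 3.7),
}
TARGET_SC = 40.0  # liquid water has Sc far above the DPD default of O(1)

feasible = {params: cost
            for params, (sc, cost) in scan.items() if sc >= TARGET_SC}
best_params = min(feasible, key=feasible.get)  # cheapest feasible choice
```

In this toy table raising γ is much cheaper than enlarging r_c (which inflates the pairwise neighbor count), illustrating the kind of trade-off the parameter search in the paper quantifies.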
Lavado Contador, J F; Maneta, M; Schnabel, S
2006-10-01
The capability of artificial neural network models to forecast near-surface soil moisture at fine spatial resolution has been tested for a 99.5 ha watershed located in SW Spain, using several easily obtained digital models of topographic and land cover variables as inputs and a series of soil moisture measurements as the training data set. The study was designed to determine the potential of the neural network model as a tool to gain insight into the factors governing soil moisture distribution, and to optimize the data sampling scheme by finding the optimum size of the training data set. Results demonstrate the efficiency of the method in forecasting soil moisture and in assessing the optimum number of field samples, as well as the importance of the selected variables in explaining the final map obtained.
Precision gravity measurement utilizing Accelerex vibrating beam accelerometer technology
NASA Astrophysics Data System (ADS)
Norling, Brian L.
Tests run using Sundstrand vibrating beam accelerometers to sense microgravity are described. Lunar-solar tidal effects were used as a highly predictable signal which varies by approximately 200 billionths of the full-scale gravitation level. Test runs of 48-h duration were used to evaluate stability, resolution, and noise. Test results on the Accelerex accelerometer show accuracies suitable for precision applications such as gravity mapping and gravity density logging. The test results indicate that Accelerex technology, even with an instrument design and signal processing approach not optimized for microgravity measurement, can achieve 48-nano-g (1 sigma) or better accuracy over a 48-h period. This value includes contributions from instrument noise and random walk, combined bias and scale factor drift, and thermal modeling errors as well as external contributions from sampling noise, test equipment inaccuracies, electrical noise, and cultural noise induced acceleration.
Capillary Corner Flows With Partial and Nonwetting Fluids
NASA Technical Reports Server (NTRS)
Bolleddula, D. A.; Weislogel, M. M.
2009-01-01
Capillary flow in containers or conduits with interior corners is commonplace in nature and industry. The majority of investigations addressing such flows solve the problem numerically in terms of a friction factor for flows along corners with contact angles below the Concus-Finn critical wetting condition for the particular conduit geometry of interest. This research effort provides missing numerical data for the flow resistance function F(sub i) for partially and nonwetting systems above the Concus-Finn condition. In such cases the fluid spontaneously de-wets the interior corner and often retracts into corner-bound drops. A banded numerical coefficient is desirable for further analysis and is achieved by careful selection of length scales x(sub s) and y(sub s) to nondimensionalize the problem. The optimal scaling is found to be identical to the wetting scaling, namely x(sub s) = H and y(sub s) = H tan(alpha), where H is the height from the corner to the free surface and alpha is the corner half-angle. Employing this scaling produces a relatively weakly varying flow resistance F(sub i), which for subsequent analyses is treated as a constant. Example solutions to steady and transient flow problems are provided that illustrate applications of this result.
Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stander, Nielen; Basudhar, Anirban; Basu, Ushnish
2015-09-14
Ever-tightening regulations on fuel economy, and the likely future regulation of carbon emissions, demand persistent innovation in vehicle design to reduce vehicle mass. Classical methods for computational mass reduction include sizing, shape and topology optimization. One of the few remaining options for weight reduction can be found in materials engineering and material design optimization. Beyond adding material diversity and composite materials, an appealing option in automotive design is to engineer steel alloys for the purpose of reducing plate thickness while retaining the strength and ductility required for durability and safety. A project to develop computational material models for advanced high strength steel is currently being executed under the auspices of the United States Automotive Materials Partnership (USAMP), funded by the US Department of Energy. Under this program, new Third Generation Advanced High Strength Steels (3GAHSS) are being designed, tested and integrated with the remaining design variables of a benchmark vehicle Finite Element model. The objectives of the project are to integrate atomistic, microstructural, forming and performance models to create an integrated computational materials engineering (ICME) toolkit for 3GAHSS. The mechanical properties of Advanced High Strength Steels (AHSS) are controlled by many factors, including phase composition and distribution in the overall microstructure; volume fraction, size and morphology of phase constituents; and stability of the metastable retained austenite phase. The complex phase transformation and deformation mechanisms in these steels make the well-established traditional techniques obsolete, and a multi-scale microstructure-based modeling approach following the ICME strategy was therefore chosen in this project.
Multi-scale modeling as a major area of research and development is an outgrowth of the Comprehensive Test Ban Treaty of 1996, which banned surface testing of nuclear devices [1]. This had the effect that experimental work was reduced from large-scale tests to multi-scale experiments that provide material models with validation at different length scales. In the subsequent years industry realized that multi-scale modeling and simulation-based design were transferable to the design optimization of any structural system. Horstemeyer [1] lists a number of advantages of multi-scale modeling, among them the reduction of product development time by alleviating costly trial-and-error iterations, as well as the reduction of product costs through innovations in material, product and process designs. Multi-scale modeling can reduce the number of costly large-scale experiments and can increase product quality by providing more accurate predictions. Research tends to be focused on each particular length scale, which enhances accuracy in the long term. This paper serves as an introduction to the LS-OPT and LS-DYNA methodology for multi-scale modeling. It mainly focuses on an approach to integrating material identification using material models of different length scales. As an example, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a homogenized State Variable (SV) model, is discussed, and the parameter identification of the individual material models at different length scales is demonstrated. The paper concludes with thoughts on integrating the multi-scale methodology into the overall vehicle design.
NASA Astrophysics Data System (ADS)
Bacopoulos, Peter
2018-05-01
A localized truncation error analysis with complex derivatives (LTEA+CD) is applied recursively with advanced circulation (ADCIRC) simulations of tides and storm surge for finite element mesh optimization. Mesh optimization is demonstrated with two iterations of LTEA+CD for tidal simulation in the lower 200 km of the St. Johns River, located in northeast Florida, and achieves more than a 50% decrease in the number of mesh nodes, corresponding to a twofold increase in efficiency at no cost to model accuracy. The recursively generated meshes using LTEA+CD lead to successive reductions in the global cumulative truncation error associated with the model mesh. Tides are simulated with root mean square error (RMSE) of 0.09-0.21 m and index of agreement (IA) values generally in the 80s and 90s percentage ranges. Tidal currents are simulated with RMSE of 0.09-0.23 m s-1 and IA values of 97% and greater. Storm tide due to Hurricane Matthew (2016) is simulated with RMSE of 0.09-0.33 m and IA values of 75-96%. Analysis of the LTEA+CD results shows the M2 constituent to dominate the node spacing requirement in the St. Johns River, with the M4 and M6 overtides and the STEADY constituent contributing to a lesser extent. Friction is the predominant physical factor influencing the target element size distribution, especially along the main river stem, while frequency (inertia) and Coriolis (rotation) are supplementary contributing factors. The combination of interior- and boundary-type computational molecules, providing near-full coverage of the model domain, renders LTEA+CD an attractive mesh generation/optimization tool for complex coastal and estuarine domains. The mesh optimization procedure using LTEA+CD is automatic and extensible to other finite element-based numerical models. Discussion is provided on the scope of LTEA+CD, the starting point (mesh) of the procedure, the user-specified scaling of the LTEA+CD results, and the iteration (termination) of LTEA+CD for mesh optimization.
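The two skill metrics reported above (RMSE and IA) are straightforward to compute from paired observed/modeled series. The sketch below assumes IA refers to Willmott's index of agreement, the usual definition in coastal modeling; it is an illustration, not code from the study.

```python
import numpy as np

def rmse(obs, pred):
    """Root mean square error between observed and predicted series."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def index_of_agreement(obs, pred):
    """Willmott's index of agreement: 1 is a perfect match, 0 is no skill."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    num = np.sum((obs - pred) ** 2)
    den = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return float(1.0 - num / den)
```

A modeled series that tracks the observations closely gives IA near 1, matching the 97%-and-greater values reported for tidal currents.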
Experimental Injury Biomechanics of the Pediatric Thorax and Abdomen
NASA Astrophysics Data System (ADS)
Kent, Richard; Ivarsson, Johan; Maltese, Matthew R.
Motor vehicle crashes are the leading cause of death and injury for children in the United States. Pediatric anthropomorphic test devices (ATD) and computational models are important tools for the evaluation and optimization of automotive restraint systems for child occupants. The thorax interacts with the restraints within the vehicle, and any thoracic model must mimic this interaction in a biofidelic manner to ensure that restraint designs protect humans as intended. To define thoracic biofidelity for adults, Kroell et al. (1974) conducted blunt impacts to the thoraces of adult postmortem human subjects (PMHS), which have formed the basis for biofidelity standards for modern adult ATD thoraces (Mertz et al. 1989). The paucity of pediatric PMHS for impact research led to the development of pediatric model biofidelity requirements through scaling. Geometric scale factors and elastic moduli of skull and long bone have been used to scale the adult thoracic biofidelity responses to the 3-, 6-, and 10-year-old child (Irwin and Mertz 1997; Mertz et al. 2001; van Ratingen et al. 1997). There is currently a need for data that apply to the child without scaling, both for validation of scaling methods used in the past and to confirm the validity of the specifications currently used to develop models of the child.
Comparison of Test and Finite Element Analysis for Two Full-Scale Helicopter Crash Tests
NASA Technical Reports Server (NTRS)
Annett, Martin S.; Horta, Lucas G.
2011-01-01
Finite element analyses have been performed for two full-scale crash tests of an MD-500 helicopter. The first crash test was conducted to evaluate the performance of a composite deployable energy absorber under combined flight loads. In the second crash test, the energy absorber was removed to establish the baseline loads. The use of an energy absorbing device reduced the impact acceleration levels by a factor of three. Accelerations and kinematic data collected from the crash tests were compared to analytical results. Details of the full-scale crash tests and the development of the system-integrated finite element model are briefly described, along with direct comparisons of acceleration magnitudes and durations for the first full-scale crash test. Because load levels were significantly different between tests, models developed for the purpose of predicting the overall system response with external energy absorbers were not adequate under the more severe conditions seen in the second crash test. Relative error comparisons were inadequate to guide model calibration. A newly developed model calibration approach that includes uncertainty estimation, parameter sensitivity, impact shape orthogonality, and numerical optimization was used for the second full-scale crash test. The calibrated parameter set reduced the 2-norm prediction error by 51% but did not improve impact shape orthogonality.
Leon, Jaime; Medina-Garrido, Elena; Núñez, Juan L
2017-01-01
Math achievement and engagement decline in secondary education; therefore, educators are faced with the challenge of engaging students to avoid school failure. Within self-determination theory, we address the need to comprehensively assess the student perceptions of teaching quality that predict engagement and achievement. In study one we tested, in a sample of 548 high school students, a preliminary version of a scale to assess nine factors: teaching for relevance, acknowledgement of negative feelings, participation encouragement, controlling language, optimal challenge, focus on the process, class structure, positive feedback, and caring. In the second study, we analyzed the scale's reliability and validity in a sample of 1555 high school students. The scale showed evidence of reliability, and with regard to criterion validity, at the classroom level, teaching quality was a predictor of behavioral engagement, and higher grades were observed in classes where students, as a whole, displayed more behavioral engagement. At the within level, behavioral engagement was associated with achievement. We provide not only a reliable and valid method to assess teaching quality but also a basis for designing interventions: the scale items can guide efforts to encourage students to persist and display more engagement in school duties, which in turn bolsters student achievement.
Mache, Stefanie; Bernburg, Monika; Vitzthum, Karin; Groneberg, David A; Klapp, Burghard F; Danzer, Gerhard
2015-01-01
Objectives This study developed and tested a research model that examined the effects of working conditions and individual resources on work–family conflict (WFC) using data collected from physicians working at German clinics. Material and methods This is a cross-sectional study of 727 physicians working in German hospitals. The work environment, WFC and individual resources were measured by the Copenhagen Psychosocial Questionnaire, the WFC Scale, the Brief Resilient Coping Scale and the Questionnaire for Self-efficacy, Optimism and Pessimism. Descriptive, correlation and linear regression analyses were applied. Results Clinical doctors working in German hospitals perceived high levels of WFC (mean=76). Sociodemographic differences were found for age, marital status and presence of children with regard to WFC. No significant gender differences were found. WFCs were positively related to high workloads and quantitative job demands. Job resources (eg, influence at work, social support) and personal resources (eg, resilient coping behaviour and self-efficacy) were negatively associated with physicians’ WFCs. Interaction terms suggest that job and personal resources buffer the effects of job demands on WFC. Conclusions In this study, WFC was prevalent among German clinicians. Factors of work organisation as well as factors of interpersonal relations at work were identified as significant predictors for WFC. Our results give a strong indication that both individual and organisational factors are related to WFC. Results may play an important role in optimising clinical care. Practical implications for physicians’ career planning and recommendations for future research are discussed. PMID:25941177
Gradient descent for robust kernel-based regression
NASA Astrophysics Data System (ADS)
Guo, Zheng-Chu; Hu, Ting; Shi, Lei
2018-06-01
In this paper, we study the gradient descent algorithm generated by a robust loss function over a reproducing kernel Hilbert space (RKHS). The loss function is defined by a windowing function G and a scale parameter σ, and it includes a wide range of commonly used robust losses for regression. There is still a gap between the theoretical analysis and the optimization process of empirical risk minimization based on this loss: the estimator needs to be globally optimal in the theoretical analysis, while the optimization method cannot ensure the global optimality of its solutions. In this paper, we aim to fill this gap by developing a novel theoretical analysis of the performance of estimators generated by the gradient descent algorithm. We demonstrate that with an appropriately chosen scale parameter σ, the gradient update with early stopping rules can approximate the regression function. Our error analysis leads to convergence in the standard L2 norm and the strong RKHS norm, both of which are optimal in the minimax sense. We show that the scale parameter σ plays an important role in providing robustness as well as fast convergence. Numerical experiments on synthetic examples and a real data set also support our theoretical results.
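A minimal numerical sketch of this setting, assuming a Gaussian kernel and a Welsch-type window G(r) = exp(-r²); the kernel, step size, data, and parameter values are illustrative choices, not those of the paper. The scale σ downweights the gross outlier, and the iteration cap plays the role of the early-stopping regularizer.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=50.0):
    # Gram matrix K(x, x') = exp(-gamma * ||x - x'||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def robust_kernel_gd(K, y, sigma=2.0, eta=0.5, n_iter=500):
    # Functional gradient descent in the RKHS: robust weights from the
    # window G shrink the influence of large residuals, and n_iter acts
    # as the early-stopping rule. sigma should exceed the noise scale
    # but stay well below the outlier scale.
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(n_iter):
        r = y - K @ alpha                 # residuals of current iterate
        w = np.exp(-(r / sigma) ** 2)     # robust weights from window G
        alpha += (eta / n) * (w * r)      # gradient step on coefficients
    return alpha

rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 40).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0]) + 0.05 * rng.standard_normal(40)
y[10] += 5.0                              # one gross outlier
K = gaussian_kernel(X, X)
alpha = robust_kernel_gd(K, y)
pred = K @ alpha
```

Shrinking σ makes the fit more outlier-resistant but slows convergence on clean points, mirroring the robustness/convergence tradeoff the analysis highlights.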
Ascent performance feasibility for next-generation spacecraft
NASA Astrophysics Data System (ADS)
Mancuso, Salvatore Massimo
This thesis deals with the optimization of ascent trajectories for single-stage suborbital (SSSO), single-stage-to-orbit (SSTO), and two-stage-to-orbit (TSTO) rocket-powered spacecraft. The maximum payload weight problem has been solved using the sequential gradient-restoration algorithm. For the TSTO case, some modifications to the original version of the algorithm were necessary in order to deal with discontinuities due to staging and the fact that the functional being minimized depends on interface conditions. The optimization problem is studied for different values of the initial thrust-to-weight ratio in the range 1.3 to 1.6, engine specific impulse in the range 400 to 500 sec, and spacecraft structural factor in the range 0.08 to 0.12. For the TSTO configuration, two subproblems are studied: uniform structural factor between stages and nonuniform structural factor between stages. Due to the regular behavior of the results obtained, engineering approximations have been developed which connect the maximum payload weight to the engine specific impulse and spacecraft structural factor; in turn, this leads to useful design considerations. Also, performance sensitivity to the scale of the aerodynamic drag is studied, and it is shown that its effect on payload weight is relatively small, even for drag changes approaching ±50%. The main conclusions are that the design of an SSSO configuration appears to be feasible; the design of an SSTO configuration might be comfortably feasible, marginally feasible, or infeasible, depending on the parameter values assumed; and the design of a TSTO configuration is not only feasible, but its payload appears to be considerably larger than that of an SSTO configuration. Improvements in engine specific impulse and spacecraft structural factor are desirable and crucial for SSTO feasibility; indeed, it appears that aerodynamic refinements do not yield significant gains in payload weight.
Ma, Yiqiu; Danilishin, Shtefan L; Zhao, Chunnong; Miao, Haixing; Korth, W Zach; Chen, Yanbei; Ward, Robert L; Blair, D G
2014-10-10
We propose using optomechanical interaction to narrow the bandwidth of filter cavities for achieving frequency-dependent squeezing in advanced gravitational-wave detectors, inspired by the idea of optomechanically induced transparency. This can allow us to achieve a cavity bandwidth on the order of 100 Hz using small-scale cavities. Additionally, in contrast to a passive Fabry-Pérot cavity, the resulting cavity bandwidth can be dynamically tuned, which is useful for adaptively optimizing the detector sensitivity when switching amongst different operational modes. The experimental challenge for its implementation is a stringent requirement for very low thermal noise of the mechanical oscillator, which would need a superb mechanical quality factor and a very low temperature. We consider one possible setup to relieve this requirement by using optical dilution to enhance the mechanical quality factor.
Machine Learning, deep learning and optimization in computer vision
NASA Astrophysics Data System (ADS)
Canu, Stéphane
2017-03-01
As noted at the Large Scale Computer Vision Systems NIPS workshop, computer vision is a mature field with a long tradition of research, but recent advances in machine learning, deep learning, representation learning and optimization have provided models with new capabilities to better understand visual content. The presentation will go through these new developments in machine learning, covering basic motivations, ideas, models and optimization in deep learning for computer vision, and identifying challenges and opportunities. It will focus on issues related to large-scale learning, that is: high-dimensional features, a large variety of visual classes, and a large number of examples.
Ibrahim, Khaled Z.; Madduri, Kamesh; Williams, Samuel; ...
2013-07-18
The Gyrokinetic Toroidal Code (GTC) uses the particle-in-cell method to efficiently simulate plasma microturbulence. This paper presents novel analysis and optimization techniques to enhance the performance of GTC on large-scale machines. We introduce cell access analysis to better manage locality vs. synchronization tradeoffs on CPU- and GPU-based architectures. Our optimized hybrid parallel implementation of GTC uses MPI, OpenMP, and NVIDIA CUDA; it achieves up to a 2× speedup over the reference Fortran version on multiple parallel systems and scales efficiently to tens of thousands of cores.
Specifying the non-specific components of acupuncture analgesia
Vase, Lene; Baram, Sara; Takakura, Nobuari; Yajima, Hiroyoshi; Takayama, Miho; Kaptchuk, Ted J.; Schou, Søren; Jensen, Troels Staehelin; Zachariae, Robert; Svensson, Peter
2014-01-01
It is well known that acupuncture has pain-relieving effects, but the contribution of specific and especially non-specific factors to acupuncture analgesia is less clear. One hundred and one patients who developed pain ≥ 3 on a visual analog scale (VAS, 0-10) following third molar surgery were randomized to receive active acupuncture, placebo acupuncture, or no treatment for 30 min, using acupuncture needles with potential for double-blinding. Patients' perception of the treatment (active or placebo) and expected pain levels (VAS) were assessed prior to and halfway through the treatment. Looking at actual treatment allocation, there was no specific effect of active acupuncture (P = 0.240), but a large and significant non-specific effect of placebo acupuncture (P < 0.001), which increased over time. Interestingly, however, looking at perceived treatment allocation, there was a significant effect of acupuncture (P < 0.001), indicating that patients who believed they received active acupuncture had significantly lower pain levels than those who believed they received placebo acupuncture. Expected pain levels accounted for significant and progressively larger amounts of the variance in pain ratings following both active and placebo acupuncture (up to 69.8%). This is the first study to show that under optimized blinding conditions non-specific factors such as patients' perception of and expectations toward treatment are central to the efficacy of acupuncture analgesia, and that these factors may contribute to self-reinforcing effects in acupuncture treatment. To obtain an effect of acupuncture in clinical practice it may, therefore, be important to incorporate and optimize these factors. PMID:23707680
Impact of treatment heterogeneity on drug resistance and supply chain costs
Spiliotopoulou, Eirini; Boni, Maciej F.; Yadav, Prashant
2013-01-01
The efficacy of scarce drugs for many infectious diseases is threatened by the emergence and spread of resistance. Multiple studies show that available drugs should be used in a socially optimal way to contain drug resistance. This paper studies the tradeoff between risk of drug resistance and operational costs when using multiple drugs for a specific disease. Using a model for disease transmission and resistance spread, we show that treatment with multiple drugs, on a population level, results in better resistance-related health outcomes, but more interestingly, the marginal benefit decreases as the number of drugs used increases. We compare this benefit with the corresponding change in procurement and safety stock holding costs that result from higher drug variety in the supply chain. Using a large-scale simulation based on malaria transmission dynamics, we show that disease prevalence seems to be a less important factor when deciding the optimal width of drug assortment, compared to the duration of one episode of the disease and the price of the drug(s) used. Our analysis shows that under a wide variety of scenarios for disease prevalence and drug cost, it is optimal to simultaneously deploy multiple drugs in the population. If the drug price is high, large volume purchasing discounts are available, and disease prevalence is high, it may be optimal to use only one drug. Our model lends insights to policy makers into the socially optimal size of drug assortment for a given context. PMID:25843982
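The qualitative tradeoff described here can be illustrated with a deliberately simple stand-in for the paper's transmission simulation; the 1/n resistance term and the sqrt(n) supply-chain term below are assumptions for illustration, not the paper's model.

```python
import math

def total_cost(n_drugs, resistance_burden=10.0, holding=1.0):
    # Toy tradeoff: splitting treatment evenly across n drugs cuts the
    # resistance-related burden roughly like 1/n (with diminishing
    # marginal benefit), while procurement and safety-stock costs grow
    # with assortment width, here like sqrt(n).
    return resistance_burden / n_drugs + holding * math.sqrt(n_drugs)

costs = [total_cost(n) for n in range(1, 9)]
gains = [costs[i] - costs[i + 1] for i in range(7)]       # marginal benefit of each extra drug
best_width = 1 + min(range(8), key=lambda i: costs[i])    # socially optimal assortment width
```

The marginal benefit of each additional drug shrinks monotonically, and when the drug price and holding cost dominate (e.g., `total_cost(n, 2.0, 5.0)`), a single drug becomes optimal, consistent with the high-price scenario in the abstract.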
Murphy, Maureen; Koohsari, Mohammad Javad; Badland, Hannah; Giles-Corti, Billie
2017-12-01
To investigate dietary intake, BMI and supermarket access at varying geographic scales and transport modes across areas of socio-economic disadvantage, and to evaluate the implementation of an urban planning policy that provides guidance on spatial access to supermarkets. A cross-sectional study used generalised estimating equations to investigate associations between supermarket density and proximity, vegetable and fruit intake and BMI at five geographic scales representing distances people travel to purchase food by varying transport modes. A stratified analysis by area-level disadvantage was conducted to detect optimal distances to supermarkets across socio-economic areas. The spatial distribution of supermarket and transport access was analysed using a geographic information system. Melbourne, Australia. Adults (n = 3128) from twelve local government areas (LGA) across Melbourne. Supermarket access was protective of BMI for participants in highly disadvantaged areas within 800 m (P=0·040) and 1000 m (P=0·032) road network buffers around the household, but not for participants in less disadvantaged areas. In urban growth area LGA, only 26% of dwellings were within 1 km of a supermarket, far less than the 80-90% of dwellings suggested in the local urban planning policy. Low public transport access compounded disadvantage. Rapid urbanisation is a global health challenge linked to increases in dietary risk factors and BMI. Our findings highlight the importance of identifying the most appropriate geographic scale to inform urban planning policy for optimal health outcomes across socio-economic strata. Urban planning policy implementation in disadvantaged areas within cities has potential for reducing health inequities.
NASA Astrophysics Data System (ADS)
Lu, Mengqian; Chen, Shing; Babanova, Sofia; Phadke, Sujal; Salvacion, Michael; Mirhosseini, Auvid; Chan, Shirley; Carpenter, Kayla; Cortese, Rachel; Bretschger, Orianna
2017-07-01
Microbial fuel cells (MFCs) have been shown to be a promising technology for wastewater treatment. Integration of MFCs into current wastewater treatment plants has the potential to reduce operational cost and improve treatment performance, and scaling up MFCs will be essential. However, only a few studies have reported successful scale-up attempts. Fabrication cost, treatment performance and operational lifetime are critical factors to optimize before commercialization of MFCs. To test these factors, we constructed a 20 L MFC system containing two 10 L MFC reactors and operated the system with brewery wastewater for nearly one year. Several operational conditions were tested, including different flow rates, applied external resistors, and poised anodic potentials. The condition resulting in the highest chemical oxygen demand (COD) removal efficiency (94.6 ± 1.0%) was a flow rate of 1 mL min-1 (HRT = 313 h) and an applied resistor of 10 Ω across each MFC circuit. Results from each of the eight stages of operation (325 days total) indicate that MFCs can sustain treatment rates over a long-term period and are robust enough to sustain performance even after system perturbations. Possible ways to improve MFC performance are discussed for future studies.
Synergistic effects of food and predators on annual reproductive success in song sparrows.
Zanette, Liana; Smith, James N M; van Oort, Harry; Clinchy, Michael
2003-04-22
The behaviour literature is full of studies showing that animals in every taxon balance the probability of acquiring food with the risk of being preyed upon. While interactions between food and predators clearly operate at an individual scale, population-scale studies have tended to focus on only one factor at a time. Consequently, interactive (or 'synergistic') effects of food and predators on whole populations have only twice before been experimentally demonstrated in mammals. We conducted a 2 x 2 experiment to examine the joint effects of food supply and predator pressure on the annual reproductive success of song sparrows (Melospiza melodia). Our results show that these two factors do not operate in an additive way, but instead have a synergistic effect on reproduction. Relative to controls, sparrows reared 1.1 more young when food was added and 1.3 more when predator pressure was low. When these treatments were combined 4.0 extra young were produced, almost twice as many as expected from an additive model. These results are a cause for optimism for avian conservation because they demonstrate that remedial actions, aimed at simultaneously augmenting food and reducing predators, can produce dramatic increases in reproductive success.
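The additive benchmark behind the synergy claim above is simple arithmetic on the reported effect sizes:

```python
# Extra young reared relative to controls (values from the abstract)
food_added = 1.1
predators_reduced = 1.3
combined_observed = 4.0

additive_expectation = food_added + predators_reduced   # 2.4 extra young
synergy = combined_observed - additive_expectation      # surplus beyond additive
ratio = combined_observed / additive_expectation        # observed vs expected, ~1.67
```

The combined treatment produces roughly 1.7 times the additive expectation; that surplus of about 1.6 extra young is the interaction the authors describe as synergistic.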
NASA Astrophysics Data System (ADS)
Wang, Han; Silva, Eduardo; West, Damien; Sun, Yiyang; Restrepo, Oscar; Zhang, Shengbai; Kota, Murali
As scaling of semiconductor devices is pursued in order to improve power efficiency, quantum effects due to the reduced device dimensions have become dominant factors in power, performance, and area scaling. In particular, source/drain contact resistance has become a limiting factor in overall device power efficiency and performance. As a consequence, techniques such as heavy doping of the source and drain have been explored to reduce the contact resistance by shrinking the width of the depletion region and lowering the Schottky barrier height. In this work, we study the relation between doping in silicon and the Schottky barrier of a TiSi2/Si interface with first-principles calculations. The Virtual Crystal Approximation (VCA) is used to calculate the average potential of the interface with varying doping concentration, while the I-V curve for the corresponding interface is calculated with a generalized one-dimensional transfer matrix method. The relation between substitutional and interstitial boron and phosphorus dopants near the interface, and their effect on tuning the Schottky barrier, is studied. These studies provide insight into the type of doping and the effect of dopant segregation needed to optimize metal-semiconductor interface resistance.
Vazquez-Anderson, Jorge; Mihailovic, Mia K; Baldridge, Kevin C; Reyes, Kristofer G; Haning, Katie; Cho, Seung Hee; Amador, Paul; Powell, Warren B; Contreras, Lydia M
2017-05-19
Current approaches to design efficient antisense RNAs (asRNAs) rely primarily on a thermodynamic understanding of RNA-RNA interactions. However, these approaches depend on structure predictions and have limited accuracy, arguably due to overlooking important cellular environment factors. In this work, we develop a biophysical model to describe asRNA-RNA hybridization that incorporates in vivo factors using large-scale experimental hybridization data for three model RNAs: a group I intron, CsrB and a tRNA. A unique element of our model is the estimation of the availability of the target region to interact with a given asRNA using a differential entropic consideration of suboptimal structures. We showcase the utility of this model by evaluating its prediction capabilities in four additional RNAs: a group II intron, Spinach II, 2-MS2 binding domain and glgC 5΄ UTR. Additionally, we demonstrate the applicability of this approach to other bacterial species by predicting sRNA-mRNA binding regions in two newly discovered, though uncharacterized, regulatory RNAs. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Mendoza-Carranza, Manuel; Ejarque, Elisabet; Nagelkerke, Leopold A J
2018-01-01
Tropical small-scale fisheries typically yield complex multivariate data, owing to their diversity of fishing techniques and highly diverse species composition. In this paper we used, for the first time, a supervised Self-Organizing Map (xyf-SOM) to recognize and understand the internal heterogeneity of a tropical marine small-scale fishery, using as a model the fishing fleet of San Pedro port, Tabasco, Mexico. We used multivariate data from commercial logbooks covering four factors: fish species (47), gear type (bottom longline, vertical line + shark longline, and vertical line), season (cold, warm), and inter-annual variation (2007-2012). The size of the xyf-SOM, a characteristic fundamental to its predictive quality, was optimized for the minimum distance between objects and the maximum prediction rate. The xyf-SOM successfully classified individual fishing trips in relation to the four factors included in the model. Prediction percentages were high (80-100%) for bottom longline and vertical line + shark longline, but lower prediction values were obtained for the vertical line fishery (51-74%). A confusion matrix indicated that classification errors occurred within the same fishing gear. Prediction rates were validated by generating confidence intervals using the bootstrap. The xyf-SOM showed that not all fishing trips targeted the most abundant species and that catch rates were not symmetrically distributed around the mean. Also, species composition is not homogeneous among fishing trips. Despite the complexity of the data, the xyf-SOM proved to be an excellent tool for identifying trends in complex scenarios, emphasizing the diverse and complex patterns that characterize tropical small-scale fishery fleets.
Graf, Daniel; Beuerle, Matthias; Schurkus, Henry F; Luenser, Arne; Savasci, Gökcen; Ochsenfeld, Christian
2018-05-08
An efficient algorithm for calculating the random phase approximation (RPA) correlation energy is presented that is as accurate as the canonical molecular orbital resolution-of-the-identity RPA (RI-RPA) with the important advantage of an effective linear-scaling behavior (instead of quartic) for large systems due to a formulation in the local atomic orbital space. The high accuracy is achieved by utilizing optimized minimax integration schemes and the local Coulomb metric attenuated by the complementary error function for the RI approximation. The memory bottleneck of former atomic orbital (AO)-RI-RPA implementations (Schurkus, H. F.; Ochsenfeld, C. J. Chem. Phys. 2016, 144, 031101 and Luenser, A.; Schurkus, H. F.; Ochsenfeld, C. J. Chem. Theory Comput. 2017, 13, 1647-1655) is addressed by precontraction of the large 3-center integral matrix with the Cholesky factors of the ground state density reducing the memory requirements of that matrix by a factor of [Formula: see text]. Furthermore, we present a parallel implementation of our method, which not only leads to faster RPA correlation energy calculations but also to a scalable decrease in memory requirements, opening the door for investigations of large molecules even on small- to medium-sized computing clusters. Although it is known that AO methods are highly efficient for extended systems, where sparsity allows for reaching the linear-scaling regime, we show that our work also extends the applicability when considering highly delocalized systems for which no linear scaling can be achieved. As an example, the interlayer distance of two covalent organic framework pore fragments (comprising 384 atoms in total) is analyzed.
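The precontraction step that relieves the memory bottleneck can be sketched with NumPy on random data: contracting one AO index of the 3-center integral tensor with a rank-revealing factor of the ground-state density shrinks that index from N_AO to N_occ. Here the occupied MO coefficients stand in for the pivoted Cholesky factors used in the actual method; all dimensions and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_aux, n_ao, n_occ = 30, 20, 5

C = rng.standard_normal((n_ao, n_occ))        # occupied MO coefficients
D = C @ C.T                                   # ground-state density, D = C C^T
B = rng.standard_normal((n_aux, n_ao, n_ao))  # 3-center RI integrals (P|mu nu)

# Precontract one AO index with the density factor:
# B_half[P, i, nu] = sum_mu C[mu, i] * B[P, mu, nu]
B_half = np.einsum('mi,Pmn->Pin', C, B)

# The stored tensor shrinks by the ratio n_occ / n_ao.
ratio = B_half.size / B.size
```

Contracting the remaining factor back recovers exactly the density-contracted tensor, so nothing is lost by storing only the smaller precontracted object.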
Structural optimization of framed structures using generalized optimality criteria
NASA Technical Reports Server (NTRS)
Kolonay, R. M.; Venkayya, Vipperla B.; Tischler, V. A.; Canfield, R. A.
1989-01-01
The application of generalized optimality criteria to framed structures is presented. The optimality conditions, Lagrangian multipliers, resizing algorithm, and scaling procedures are all expressed as functions of the objective and constraint functions, along with their respective gradients. The optimization of two plane frames under multiple loading conditions, subject to stress, displacement, generalized stiffness, and side constraints, is presented. These results are compared to those found by optimizing the frames using a nonlinear mathematical programming technique.
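The simplest special case of an optimality-criteria resizing rule is the damped stress-ratio update used in fully stressed design; a minimal sketch follows. It assumes statically determinate members (member stress = force/area), which is a simplification of the generalized criteria discussed in the report; names and values are illustrative.

```python
def resize_fully_stressed(forces, areas, sigma_allow, damping=0.5, iters=50):
    """Stress-ratio resizing: scale each member area by (stress/allowable)
    raised to a damping exponent, iterated to convergence. For statically
    determinate members this converges to a_i = |F_i| / sigma_allow."""
    for _ in range(iters):
        areas = [a * (abs(f) / (a * sigma_allow)) ** damping
                 for f, a in zip(forces, areas)]
    return areas
```

With damping 0.5 the log-error halves each iteration, so a few dozen iterations suffice; generalized criteria replace the stress ratio with Lagrangian-weighted constraint gradients but keep this multiplicative update structure.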
Growing optimal scale-free networks via likelihood
NASA Astrophysics Data System (ADS)
Small, Michael; Li, Yingying; Stemler, Thomas; Judd, Kevin
2015-04-01
Preferential attachment, by which new nodes attach to existing nodes with probability proportional to the existing nodes' degree, has become the standard growth model for scale-free networks, where the asymptotic probability of a node having degree k is proportional to k^(-γ). However, the motivation for this model is entirely ad hoc. We use exact likelihood arguments and show that the optimal way to build a scale-free network is to attach most new links to nodes of low degree. Curiously, this leads to a scale-free network with a single dominant hub: a starlike structure we call a superstar network. Asymptotically, the optimal strategy is to attach each new node to one of the nodes of degree k with probability proportional to 1/[N + ζ(γ)(k+1)^γ] (in an N-node network): a stronger bias toward high degree nodes than exhibited by standard preferential attachment. Our algorithm generates optimally scale-free networks (the superstar networks) as well as randomly sampling the space of all scale-free networks with a given degree exponent γ. We generate viable realizations with finite N for 1 ≪ γ < 2 as well as γ > 2. We observe an apparently discontinuous transition at γ ≈ 2 between so-called superstar networks and more treelike realizations. Gradually increasing γ further leads to the reemergence of a superstar hub. To quantify these structural features, we derive a new analytic expression for the expected degree exponent of a pure preferential attachment process and introduce alternative measures of network entropy. Our approach is generic and can also be applied to an arbitrary degree distribution.
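The growth rule quoted above can be sketched as a simple tree-growing simulation. The attachment kernel 1/(N + ζ(γ)(k+1)^γ) is as reconstructed from the garbled abstract text, ζ is approximated by a truncated sum, and all names are illustrative; this is a sketch of the sampling step, not the authors' full likelihood machinery.

```python
import random

def zeta(gamma, terms=100000):
    # Truncated Riemann zeta sum; adequate for gamma comfortably above 1.
    return sum(n ** -gamma for n in range(1, terms + 1))

def grow(n_nodes, gamma, seed=1):
    """Grow a tree, attaching each new node to an existing node of degree k
    with probability proportional to 1/(N + zeta(gamma) * (k+1)**gamma)."""
    rng = random.Random(seed)
    z = zeta(gamma)
    deg = [1, 1]                      # two nodes joined by one edge
    for _ in range(n_nodes - 2):
        N = len(deg)
        w = [1.0 / (N + z * (k + 1) ** gamma) for k in deg]
        i = rng.choices(range(N), weights=w)[0]
        deg[i] += 1
        deg.append(1)
    return deg
```

Each step adds one node and one edge, so the result is always a tree; inspecting the returned degree sequence for various γ is a quick way to see hub formation.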
NASA Astrophysics Data System (ADS)
Lian, Enyang; Ren, Yingyu; Han, Yunfeng; Liu, Weixin; Jin, Ningde; Zhao, Junying
2016-11-01
Multi-scale analysis is an important method for investigating nonlinear systems. In this study, we carry out experiments and measure fluctuation signals from a rotating electric field conductance sensor with eight electrodes. We first use a recurrence plot to recognise flow patterns in vertical upward gas-liquid two-phase pipe flow from the measured signals. Then we apply a multi-scale morphological analysis based on the first-order difference scatter plot to investigate the signals captured from the vertical upward gas-liquid two-phase flow loop test. We find that the invariant scaling exponent extracted from the multi-scale first-order difference scatter plot, with the bisector of the second-fourth quadrant as the reference line, is sensitive to the inhomogeneous distribution characteristics of the flow structure, and that the variation trend of the exponent is helpful for understanding the process of breakup and coalescence of the gas phase. In addition, we explore the dynamic mechanism underlying the inhomogeneous distribution of the gas phase in terms of adaptive optimal kernel time-frequency representation. The research indicates that the system energy is a factor influencing the distribution of the gas phase, and that the multi-scale morphological analysis based on the first-order difference scatter plot is an effective method for indicating the inhomogeneous distribution of the gas phase in gas-liquid two-phase flow.
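A first-order difference scatter plot with the second-fourth-quadrant bisector as reference line can be sketched as follows: plot (d_i, d_{i+1}) for the differenced signal, measure the mean distance of the points from the line y = -x, repeat after coarse-graining at several scales, and fit a log-log slope. This is an illustrative reading of the abstract's method, with all function names and the coarse-graining choice as assumptions.

```python
import math

def coarse_grain(x, s):
    # Non-overlapping window averages (multi-scale coarse-graining).
    return [sum(x[i:i + s]) / s for i in range(0, len(x) - s + 1, s)]

def scatter_width(x):
    """Mean distance of first-order-difference scatter points (d_i, d_{i+1})
    from the second-fourth-quadrant bisector y = -x."""
    d = [b - a for a, b in zip(x, x[1:])]
    pts = list(zip(d, d[1:]))
    return sum(abs(u + v) for u, v in pts) / (math.sqrt(2) * len(pts))

def scaling_exponent(x, scales=(1, 2, 4, 8)):
    """Least-squares slope of log(width) versus log(scale): a simple
    morphological scaling exponent in the spirit of the abstract."""
    logs = [(math.log(s), math.log(scatter_width(coarse_grain(x, s))))
            for s in scales]
    n = len(logs)
    mx = sum(a for a, _ in logs) / n
    my = sum(b for _, b in logs) / n
    return (sum((a - mx) * (b - my) for a, b in logs) /
            sum((a - mx) ** 2 for a, _ in logs))
```

The distance of a point (u, v) from y = -x is |u + v|/√2, so signals whose successive differences anticorrelate hug the reference line while persistent structures spread away from it.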
Directional Convexity and Finite Optimality Conditions.
1984-03-01
Necessary conditions for optimality. Work Unit Number 5 (Optimization and Large Scale Systems). Abstract fragment: …that R(T) is convex would then imply x(u,T) ∈ int R(T). Istituto di Matematica Applicata, Università di Padova, 35100 Italy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martínez Díez, Ana Luisa (a.martinez@itma.es; Fraunhofer Institute for Solar Energy Systems ISE, Heidenhofstr. 2, 79110 Freiburg); Gutmann, Johannes
In this paper, we present a concentrator system based on a stack of fluorescent concentrators (FCs) and a bifacial solar cell. Coupling bifacial solar cells to a stack of FCs increases the performance of the system and preserves its efficiency when scaled. We used an approach to optimize the design of a fluorescent solar concentrator system based on a stack of multiple FCs. Seven individual fluorescent collectors (20 mm×20 mm×2 mm) were realized by in-situ polymerization and optically characterized with regard to their ability to guide light to the edges. Then, an optimization procedure based on the experimental data of the individual FCs was carried out to determine the stack configuration that maximizes the total number of photons leaving the edges. Finally, two fluorescent concentrator systems were realized by attaching bifacial silicon solar cells to the optimized FC stacks: a conventional system, where FCs were attached to one side of the solar cell as a reference, and the proposed bifacial configuration. It was found that, for the same overall FC area, the bifacial configuration increases the short-circuit current by a factor of 2.2, which is in agreement with theoretical considerations.
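The stack-configuration search can be illustrated with a toy model: each collector absorbs a fraction of the light reaching it and guides a fraction of that to its edges, and the best ordering is found by enumerating permutations. The absorption/edge-efficiency model and all names are illustrative assumptions, not the authors' measured optimization procedure.

```python
from itertools import permutations

def stack_output(absorb, edge_eff, incident=1.0):
    """Toy model: light passes down a stack; collector i absorbs fraction
    absorb[i] of what reaches it and guides edge_eff[i] of that to its
    edges. Returns total edge photons (arbitrary units)."""
    total, flux = 0.0, incident
    for a, e in zip(absorb, edge_eff):
        total += flux * a * e
        flux *= 1.0 - a
    return total

def best_stack(absorb, edge_eff):
    """Enumerate stack orders and return the one maximizing edge output."""
    idx = range(len(absorb))
    return max(permutations(idx),
               key=lambda p: stack_output([absorb[i] for i in p],
                                          [edge_eff[i] for i in p]))
```

Even this crude model shows why ordering matters: placing a strongly absorbing but weakly guiding collector on top starves the better emitters beneath it. For seven collectors, 7! = 5040 orders are still trivially enumerable.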