Sample records for "achieve optimal results"

  1. Optimization of spatiotemporally fractionated radiotherapy treatments with bounds on the achievable benefit

    NASA Astrophysics Data System (ADS)

    Gaddy, Melissa R.; Yıldız, Sercan; Unkelbach, Jan; Papp, Dávid

    2018-01-01

…achievable mean liver BED. The results indicate that spatiotemporal treatments can achieve substantial reductions in normal tissue dose and BED, and that local optimization techniques provide high-quality plans that are close to realizing the maximum potential normal tissue dose reduction.

  2. The influence of optimism and pessimism on student achievement in mathematics

    NASA Astrophysics Data System (ADS)

    Yates, Shirley M.

    2002-11-01

    Students' causal attributions are not only fundamental motivational variables but are also critical motivators of their persistence in learning. Optimism, pessimism, and achievement in mathematics were measured in a sample of primary and lower secondary students on two occasions. Although achievement in mathematics was most strongly related to prior achievement and grade level, optimism and pessimism were significant factors. In particular, students with a more generally pessimistic outlook on life had a lower level of achievement in mathematics over time. Gender was not a significant factor in achievement. The implications of these findings are discussed.

  3. Optimism versus Pessimism and Academic Achievement Evaluation

    ERIC Educational Resources Information Center

    Harpaz-Itay, Yifat; Kaniel, Shlomo

    2012-01-01

    This article integrates three central theories of optimism-pessimism (OP). The combination of the shared components of these theories--outcome expectancies, emotions, and behavioral intention--may produce an integrative academic achievement evaluation. Little has been written regarding the differentiation between general and domain-specific OP, a…

  4. The Effects of Academic Optimism on Elementary Reading Achievement

    ERIC Educational Resources Information Center

    Bevel, Raymona K.; Mitchell, Roxanne M.

    2012-01-01

    Purpose: The purpose of this paper is to explore the relationship between academic optimism (AO) and elementary reading achievement (RA). Design/methodology/approach: Using correlation and hierarchical linear regression, the authors examined school-level effects of AO on fifth grade reading achievement in 29 elementary schools in Alabama.…

  5. Collective Responsibility, Academic Optimism, and Student Achievement in Taiwan Elementary Schools

    ERIC Educational Resources Information Center

    Wu, Hsin-Chieh

    2012-01-01

    Previous research indicates that collective efficacy, faculty trust in students and parents, and academic emphasis together formed a single latent school construct, called academic optimism. In the U.S., academic optimism has been proven to be a powerful construct that could effectively predict student achievement even after controlling for…

  6. Translational Geroscience: Emphasizing function to achieve optimal longevity

    PubMed Central

    Seals, Douglas R.; Melov, Simon

    2014-01-01

    Among individuals, biological aging leads to cellular and organismal dysfunction and an increased risk of chronic degenerative diseases and disability. This sequence of events in combination with the projected increases in the number of older adults will result in a worldwide healthcare burden with dire consequences. Superimposed on this setting are the adults now reaching traditional retirement ages--the baby boomers--a group that wishes to remain active, productive and physically and cognitively fit as they grow older. Together, these conditions are producing an unprecedented demand for increased healthspan or what might be termed “optimal longevity”—to live long, but well. To meet this demand, investigators with interests in the biological aspects of aging from model organisms to human epidemiology (population aging) must work together within an interactive process that we describe as translational geroscience. An essential goal of this new investigational platform should be the optimization and preservation of physiological function throughout the lifespan, including integrative physical and cognitive function, which would serve to increase healthspan, compress morbidity and disability into a shorter period of late-life, and help achieve optimal longevity. To most effectively utilize this new approach, we must rethink how investigators and administrators working at different levels of the translational research continuum communicate and collaborate with each other, how best to train the next generation of scientists in this new field, and how contemporary biological-biomedical aging research should be organized and funded. PMID:25324468

  7. Optimized Delivery System Achieves Enhanced Endomyocardial Stem Cell Retention

    PubMed Central

    Behfar, Atta; Latere, Jean-Pierre; Bartunek, Jozef; Homsy, Christian; Daro, Dorothee; Crespo-Diaz, Ruben J.; Stalboerger, Paul G.; Steenwinckel, Valerie; Seron, Aymeric; Redfield, Margaret M.; Terzic, Andre

    2014-01-01

Background: Regenerative cell-based therapies are associated with limited myocardial retention of delivered stem cells. The objective of this study is to develop an endocardial delivery system for enhanced cell retention. Methods and Results: Stem cell retention was simulated in silico using one- and three-dimensional models of tissue distortion and compliance associated with delivery. Needle designs, predicted to be optimal, were accordingly engineered using nitinol, a nickel and titanium alloy displaying shape memory and super-elasticity. Biocompatibility was tested with human mesenchymal stem cells. Experimental validation was performed with species-matched cells directly delivered into Langendorff-perfused porcine hearts or administered percutaneously into the endocardium of infarcted pigs. Cell retention was quantified by flow cytometry and real-time quantitative polymerase chain reaction methodology. Models, computing optimal distribution of distortion calibrated to favor tissue compliance, predicted that a 75°-curved needle featuring small-to-large graded side holes would ensure the highest cell retention profile. In isolated hearts, the nitinol curved needle catheter (C-Cath) design ensured 3-fold superior stem cell retention compared to a standard needle. In the setting of chronic infarction, percutaneous delivery of stem cells with C-Cath yielded 37.7±7.1% retention versus 10.0±2.8% with a traditional needle, without impact on biocompatibility or safety. Conclusions: Modeling-guided development of a nitinol-based curved needle delivery system with incremental side holes achieved enhanced myocardial stem cell retention. PMID:24326777

  8. Achieving optimal growth: lessons from simple metabolic modules

    NASA Astrophysics Data System (ADS)

    Goyal, Sidhartha; Chen, Thomas; Wingreen, Ned

    2009-03-01

    Metabolism is a universal property of living organisms. While the metabolic network itself has been well characterized, the logic of its regulation remains largely mysterious. Recent work has shown that growth rates of microorganisms, including the bacterium Escherichia coli, correlate well with optimal growth rates predicted by flux-balance analysis (FBA), a constraint-based computational method. How difficult is it for cells to achieve optimal growth? Our analysis of representative metabolic modules drawn from real metabolism shows that, in all cases, simple feedback inhibition allows nearly optimal growth. Indeed, product-feedback inhibition is found in every biosynthetic pathway and constitutes about 80% of metabolic regulation. However, we find that product-feedback systems designed to approach optimal growth necessarily produce large pool sizes of metabolites, with potentially detrimental effects on cells via toxicity and osmotic imbalance. Interestingly, the sizes of metabolite pools can be strongly restricted if the feedback inhibition is ultrasensitive (i.e. with high Hill coefficient). The need for ultrasensitive mechanisms to limit pool sizes may therefore explain some of the ubiquitous, puzzling complexity found in metabolic feedback regulation at both the transcriptional and post-transcriptional levels.
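The pool-size argument can be made concrete with a toy simulation (a sketch with invented parameters, not the authors' model): a metabolite pool P is synthesized at a rate subject to product-feedback inhibition with Hill coefficient n and drained by constant demand. At steady state the flux always matches demand, but the pool size needed to throttle synthesis shrinks sharply as the feedback becomes ultrasensitive.

```python
def steady_pool(n, vmax=10.0, demand=1.0, K=1.0, dt=0.01, steps=50_000):
    """Euler-integrate dP/dt = vmax/(1 + (P/K)**n) - demand to steady state.

    n is the Hill coefficient of the product-feedback inhibition; the
    analytic steady state is P* = K * (vmax/demand - 1)**(1/n).
    """
    P = 0.0
    for _ in range(steps):
        P += dt * (vmax / (1.0 + (P / K) ** n) - demand)
    return P

pool_hyperbolic = steady_pool(n=1)      # simple (Michaelis-Menten-like) feedback
pool_ultrasensitive = steady_pool(n=4)  # ultrasensitive feedback
```

With vmax/demand = 10, the hyperbolic feedback (n = 1) settles at a pool of about 9K, while n = 4 reaches the same steady-state flux at about 1.7K, illustrating why ultrasensitivity restricts pool sizes.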

  9. Optimal dietary patterns designed from local foods to achieve maternal nutritional goals.

    PubMed

    Raymond, Jofrey; Kassim, Neema; Rose, Jerman W; Agaba, Morris

    2018-04-04

Achieving nutritional requirements for pregnant and lactating mothers in rural households while maintaining the intake of local and culture-specific foods can be a difficult task. Deploying a linear goal programming approach can effectively generate optimal dietary patterns that incorporate local and culturally acceptable diets. The primary objective of this study was to determine whether a realistic and affordable diet that achieves nutritional goals for rural pregnant and lactating women can be formulated from locally available foods in Tanzania. A cross-sectional study was conducted to assess dietary intakes of 150 pregnant and lactating women using a weighed dietary record (WDR), 24-h dietary recalls and a 7-day food record. A market survey was also carried out to estimate the cost per 100 g of edible portion of foods that are frequently consumed in the study population. Dietary survey and market data were then used to define linear programming (LP) model parameters for diet optimisation. All LP analyses were done using a linear program solver to generate optimal dietary patterns. Our findings showed that optimal dietary patterns designed from locally available foods would improve dietary adequacy for 15 and 19 selected nutrients in pregnant and lactating women, respectively, but inadequacies remained for iron, zinc, folate, pantothenic acid, and vitamin E, indicating that these are problem nutrients (nutrients that did not achieve 100% of their RNIs in optimised diets) in the study population. These findings suggest that optimal use of local foods can improve dietary adequacy for rural pregnant and lactating women aged 19-50 years. However, additional cost-effective interventions are needed to ensure adequate intakes for the identified problem nutrients.

  10. Academic Optimism, Organizational Citizenship Behaviors, and Student Achievement at Charter Schools

    ERIC Educational Resources Information Center

    Guvercin, Mustafa

    2013-01-01

    The purpose of this study was to examine the relationship among academic optimism, Organizational Citizenship Behaviors (OCBs), and student achievement in college preparatory charter schools. A purposeful sample of elementary school teachers from college preparatory charter schools (N = 226) in southeast Texas was solicited to complete the…

  11. Designing optimal food intake patterns to achieve nutritional goals for Japanese adults through the use of linear programming optimization models.

    PubMed

    Okubo, Hitomi; Sasaki, Satoshi; Murakami, Kentaro; Yokoyama, Tetsuji; Hirota, Naoko; Notsu, Akiko; Fukui, Mitsuru; Date, Chigusa

    2015-06-06

Simultaneous dietary achievement of a full set of nutritional recommendations is difficult. A diet optimization model using linear programming is a useful mathematical means of translating nutrient-based recommendations into realistic, nutritionally optimal food combinations incorporating local and culture-specific foods. We used this approach to explore optimal food intake patterns that meet the nutrient recommendations of the Dietary Reference Intakes (DRIs) while incorporating typical Japanese food selections. As observed intake values, we used the food and nutrient intake data of 92 women aged 31-69 years and 82 men aged 32-69 years living in three regions of Japan. Dietary data were collected with a semi-weighed dietary record on four non-consecutive days in each season of the year (16 days total). The linear programming models were constructed to minimize the differences between observed and optimized food intake patterns while also meeting the DRIs for a set of 28 nutrients, setting energy equal to estimated requirements, and not exceeding typical quantities of each food consumed by each age (30-49 or 50-69 years) and gender group. We successfully developed mathematically optimized food intake patterns that met the DRIs for all 28 nutrients studied in each sex and age group. Achieving nutritional goals required minor modifications of existing diets in older groups, particularly women, while major modifications were required to increase intake of fruit and vegetables in younger groups of both sexes. Across all sex and age groups, optimized food intake patterns demanded greatly increased intake of whole grains and reduced-fat dairy products in place of refined grains and full-fat dairy products. Salt intake goals were the most difficult to achieve, requiring marked reduction of salt-containing seasoning (65-80%) in all sex and age groups. Using a linear programming model, we identified optimal food intake patterns providing practical food choices and…
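The model structure described in this record can be sketched in a few lines with `scipy.optimize.linprog` (all numbers below are invented for illustration; the actual study used 28 nutrients and food-specific intake bounds). Writing the optimized intake as the observed intake plus positive and negative deviations turns "minimize the difference from observed patterns subject to nutrient requirements" into a standard LP:

```python
import numpy as np
from scipy.optimize import linprog

# Toy data (illustrative only): 3 foods, 2 nutrients.
obs = np.array([2.0, 1.0, 0.5])      # observed servings/day
N = np.array([[10.0, 2.0, 5.0],      # protein per serving
              [1.0, 3.0, 0.5]])      # iron per serving
req = np.array([30.0, 6.0])          # daily requirements (DRI stand-ins)

n = len(obs)
# Decision variables: [x, d_plus, d_minus]; minimize total |x - obs|.
c = np.concatenate([np.zeros(n), np.ones(n), np.ones(n)])
A_eq = np.hstack([np.eye(n), -np.eye(n), np.eye(n)])  # x - d+ + d- = obs
A_ub = np.hstack([-N, np.zeros((2, 2 * n))])          # N @ x >= req
b_ub = -req
bounds = [(0, 5)] * n + [(0, None)] * (2 * n)         # typical-quantity caps

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=obs, bounds=bounds)
x_opt = res.x[:n]   # optimized servings, as close to observed as feasible
```

Because the observed diet here falls short on both nutrients, the optimizer returns the smallest total change in servings that restores feasibility, which is exactly the "minor versus major modification" quantity the study reports per age and sex group.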

  12. Reconstruction after complex facial trauma: achieving optimal outcome through multiple contemporary surgeries.

    PubMed

    Jaiswal, Rohit; Pu, Lee L Q

    2013-04-01

Major facial trauma injuries often require complex repair. Traditionally, the reconstruction of such injuries has relied primarily on free tissue transfer alone. However, the advent of newer, contemporary procedures may lead to potential reconstructive improvement through the use of complementary procedures after free flap reconstruction. An 18-year-old male patient suffered a major left facial degloving injury resulting in a soft-tissue defect with exposed zygoma and parietal bone. Multiple operations were undertaken in a staged manner for reconstruction. A state-of-the-art free anterolateral thigh (ALT) perforator flap and Medpor implant reconstruction of the midface were initially performed, followed by flap debulking, lateral canthopexy, midface lift with redo canthopexy, scalp tissue expansion for hairline reconstruction, and epidermal skin grafting for optimal skin color matching. Over a follow-up period of 2 years, an impressive reconstructive result was achieved through the use of multiple contemporary reconstructive procedures following an excellent free ALT flap reconstruction. Multiple staged reconstructions were essential in producing an optimal outcome in this complex facial injury, one that would likely not have been achieved through a one-stage traditional free flap reconstruction. Utilizing multiple, sequential contemporary surgeries may substantially improve outcome through the enhancement and refinement of results building on the best possible initial soft-tissue reconstruction.

  13. Achieving Optimal Best: Instructional Efficiency and the Use of Cognitive Load Theory in Mathematical Problem Solving

    ERIC Educational Resources Information Center

    Phan, Huy P.; Ngu, Bing H.; Yeung, Alexander S.

    2017-01-01

    We recently developed the "Framework of Achievement Bests" to explain the importance of effective functioning, personal growth, and enrichment of well-being experiences. This framework postulates a concept known as "optimal achievement best," which stipulates the idea that individuals may, in general, strive to achieve personal…

  14. Faculty Sense of Academic Optimism and Its Relationship to Students' Achievement in Well Performing High Schools

    ERIC Educational Resources Information Center

    Cromartie, Michael Tyrone

    2013-01-01

    The aim of this study was to determine the organizational characteristics and behaviors that contribute to sustaining a culture of academic optimism as a mechanism of student achievement. While there is a developing research base identifying both the individual elements of academic optimism as well as the academic optimism construct itself as…

  15. Green Infrastructure Simulation and Optimization to Achieve Combined Sewer Overflow Reductions in Philadelphia's Mill Creek Sewershed

    NASA Astrophysics Data System (ADS)

    Cohen, J. S.; McGarity, A. E.

    2017-12-01

Mass deployment of green stormwater infrastructure (GSI) to intercept significant amounts of urban runoff has the potential to reduce the frequency of a city's combined sewer overflows (CSOs). This study was performed to aid the Overbrook Environmental Education Center's vision of applying this concept to create a Green Commercial Corridor in Philadelphia's Overbrook Neighborhood, which lies in the Mill Creek Sewershed. In an attempt to further incorporate physical and social reality into previous work using simulation-optimization techniques to produce GSI deployment strategies (McGarity et al., 2016), this study's models incorporated land use types and a specific neighborhood in the sewershed. The low impact development (LID) feature in EPA's Storm Water Management Model (SWMM) was used to simulate various geographic configurations of GSI in Overbrook. The results from these simulations were used to obtain formulas describing the annual CSO reduction in the sewershed based on the deployed GSI practices. These non-linear hydrologic response formulas were then implemented in the Storm Water Investment Strategy Evaluation (StormWISE) model (McGarity, 2012), a constrained optimization model used to develop optimal stormwater management practices on the watershed scale. By saturating the avenue with GSI, not only will CSOs from the sewershed into the Schuylkill River be reduced, but ancillary social and economic benefits of GSI will also be achieved. The effectiveness of these ancillary benefits changes based on the type of GSI practice and the type of land use in which the GSI is implemented. Thus, the simulation and optimization processes were repeated while delimiting GSI deployment by land use (residential, commercial, industrial, and transportation). The results give a GSI deployment strategy that achieves desired annual CSO reductions at minimum cost based on the locations of tree trenches, rain gardens, and rain barrels in specified land…
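The two-stage idea (simulate hydrologic response, then optimize deployment) can be caricatured in a few lines. Below, each GSI practice gets a hypothetical concave response curve rmax·(1 − e^(−k·a)) for annual CSO reduction versus deployed area, and a greedy loop buys small area increments wherever the marginal reduction per dollar is highest, a crude stand-in for the constrained optimization StormWISE performs (all curves, costs, and the target are invented):

```python
import math

# Hypothetical response curves (all numbers invented): annual CSO
# reduction (million gallons) versus deployed area a (acres), plus cost.
practices = {
    "tree_trench": {"rmax": 40.0, "k": 0.30, "cost": 9.0},   # $M per acre
    "rain_garden": {"rmax": 25.0, "k": 0.50, "cost": 5.0},
    "rain_barrel": {"rmax": 8.0,  "k": 0.80, "cost": 1.5},
}

def reduction(p, a):
    """Diminishing-returns CSO reduction for area a of one practice."""
    return p["rmax"] * (1.0 - math.exp(-p["k"] * a))

def marginal_per_dollar(name, areas, step):
    p = practices[name]
    gain = reduction(p, areas[name] + step) - reduction(p, areas[name])
    return gain / (p["cost"] * step)

# Greedy allocation: buy the area increment with the best marginal
# reduction per dollar until the annual CSO-reduction target is met.
target, step = 45.0, 0.1
areas = {name: 0.0 for name in practices}
total = cost = 0.0
while total < target:
    best = max(practices, key=lambda n: marginal_per_dollar(n, areas, step))
    p = practices[best]
    total += reduction(p, areas[best] + step) - reduction(p, areas[best])
    cost += p["cost"] * step
    areas[best] += step
```

Because the response curves saturate, the greedy loop naturally spreads deployment across practice types, mirroring how the land-use-delimited optimization mixes tree trenches, rain gardens, and rain barrels rather than relying on one practice alone.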

  16. Rejuvenation of the Aging Arm: Multimodal Combination Therapy for Optimal Results.

    PubMed

    Wu, Douglas C; Green, Jeremy B

    2016-05-01

The aging arm is characterized by increased dyspigmentation, a proliferation of ectatic blood vessels, excessive adiposity, excessive skin laxity, and actinic keratosis. A variety of laser, energy, and surgical techniques can be used to improve these features. The objective of this article is to describe the treatment modalities that have proven efficacious in rejuvenating the aging arm and the combination therapies that have the potential to optimize patient outcomes while maintaining safety and tolerability. A Medline search was performed on nonsurgical aesthetic combination treatments as they relate to arm rejuvenation, and the results are summarized. Practical applications for these combination treatments are also discussed. Although there is significant evidence supporting the effective use of nonsurgical treatments for arm rejuvenation, little in the literature was found on the safety and efficacy of combining such procedures and devices. However, in the authors' clinical experience, combining arm rejuvenation techniques can be done safely and often results in optimal outcomes. Arm rejuvenation can be safely and effectively achieved with combination nonsurgical aesthetic treatments.

  17. Regulating thrombus growth and stability to achieve an optimal response to injury

    PubMed Central

    Brass, Lawrence F.; Wannemacher, Kenneth M.; Ma, Peisong; Stalker, Timothy J.

    2012-01-01

    An optimal platelet response to injury can be defined as one in which blood loss is restrained and haemostasis is achieved without the penalty of further tissue damage caused by unwarranted vascular occlusion. This brief review considers some of the ways in which thrombus growth and stability can be regulated so that an optimal platelet response can be achieved in vivo. Three related topics are considered. The first focuses on intracellular mechanisms that regulate the early events of platelet activation downstream of G protein coupled receptors for agonists such as thrombin, thromboxane A2 and ADP. The second considers the ways in which signalling events that are dependent on stable contacts between platelets can influence the state of platelet activation and thus affect thrombus growth and stability. The third focuses on the changes that are experienced by platelets as they move from their normal environment in freely-flowing plasma to a very different environment within the growing haemostatic plug, an environment in which the narrowing gaps and junctions between platelets not only facilitate communication, but also increasingly limit both the penetration of plasma and the exodus of platelet-derived bioactive molecules. PMID:21781243

  18. Use of allopurinol with low-dose 6-mercaptopurine in inflammatory bowel disease to achieve optimal active metabolite levels: A review of four cases and the literature

    PubMed Central

    Witte, Todd N; Ginsberg, Allen L

    2008-01-01

BACKGROUND: At least one-third of patients with inflammatory bowel disease do not respond or are intolerant to therapy with 6-mercaptopurine (6-MP). A subgroup fails to attain optimal levels of 6-thioguanine nucleotide (6-TGN) and instead shunts to 6-methylmercaptopurine nucleotide (6-MMPN). PATIENTS AND METHODS: A retrospective chart review was conducted, and four patients are described who had been previously unable to achieve optimal 6-TGN metabolite levels until allopurinol was added to their treatment. RESULTS: All four patients achieved optimal 6-TGN levels and undetectable 6-MMPN with a mean 6-MP dose of 0.49 mg/kg. Three achieved steroid-free clinical remission. Two of those three patients had normalization of liver enzymes; one patient had baseline normal liver enzymes despite an initial 6-MMPN level of 27,369 pmol/8×10^8 red blood cells. Two patients experienced reversible leukopenia. CONCLUSIONS: Combination allopurinol and low-dose 6-MP is an effective means to achieve optimal metabolite levels and steroid-free clinical remission in previously refractory patients. Caution is advised. PMID:18299738

  19. Optimal formulations of local foods to achieve nutritional adequacy for 6–23-month-old rural Tanzanian children

    PubMed Central

    Raymond, Jofrey; Kassim, Neema; Rose, Jerman W.; Agaba, Morris

    2017-01-01

ABSTRACT Background: Achieving the nutritional goals of infants and young children while maintaining the intake of local and culture-specific foods can be a daunting task. Diet optimisation using linear goal programming (LP) can effectively generate optimal formulations incorporating local and culturally acceptable foods. Objective: The primary objective of this study was to determine whether a realistic and affordable diet that achieves dietary recommended intakes (DRIs) for 22 selected nutrients can be formulated for rural 6–23-month-old children in Tanzania. Design: Dietary intakes of 400 children aged 6–23 months were assessed using a weighed dietary record (WDR), 24-hour dietary recalls and a 7-day food record. A market survey was also carried out to estimate the cost per 100 g of edible portion of foods that are commonly consumed in the study area. Dietary and market survey data were then used to define LP model parameters for diet optimisation. All LP analyses were done using linear program solver (LiPS) version 1.9.4 to generate optimal food formulations. Results: Optimal formulations that achieved DRIs for 20 nutrients for children aged 6–11 months and all selected nutrients for children aged 12–23 months were successfully developed, at twice the cost of the observed food purchases across age groups. Optimal formulations contained a mixture of ingredients such as wholegrain cereals, Irish potatoes, pulses and seeds, fish and poultry meat, as well as fruits and vegetables that can be sourced locally. Conclusions: Our findings revealed that, given the available food choices, it is possible to develop optimal formulations that can improve dietary adequacy for rural 6–23-month-old children if the food budget for the child's diet is doubled. These findings suggest the need for alternative interventions that can help households increase access to nutrient-dense foods to fill the identified nutrient gaps. PMID:28814951

  20. Using tailored methodical approaches to achieve optimal science outcomes

    NASA Astrophysics Data System (ADS)

    Wingate, Lory M.

    2016-08-01

The science community is actively engaged in research, development, and construction of instrumentation projects that they anticipate will lead to new science discoveries. There appears to be a very strong link between the quality of the activities used to complete these projects and having a fully functioning science instrument that will facilitate these investigations.[2] The combination of internationally recognized standards within the disciplines of project management (PM) and systems engineering (SE) has been demonstrated to lead to positive net effects and optimal project outcomes. Conversely, unstructured, poorly managed projects lead to unpredictable, suboptimal project outcomes, ultimately affecting the quality of the science that can be done with the new instruments. The proposed application of these two methodical approaches, implemented as a tailorable suite of processes, is presented in this paper. Project management (PM) is accepted worldwide as an effective methodology used to control project cost, schedule, and scope. Systems engineering (SE) is an accepted method used to ensure that the outcomes of a project match the intent of the stakeholders or, if they diverge, that the changes are understood, captured, and controlled. An appropriate application, or tailoring, of these disciplines can be the foundation upon which success in projects that support science can be optimized.

  1. Optimal achieved blood pressure in acute intracerebral hemorrhage: INTERACT2.

    PubMed

    Arima, Hisatomi; Heeley, Emma; Delcourt, Candice; Hirakawa, Yoichiro; Wang, Xia; Woodward, Mark; Robinson, Thompson; Stapf, Christian; Parsons, Mark; Lavados, Pablo M; Huang, Yining; Wang, Jiguang; Chalmers, John; Anderson, Craig S

    2015-02-03

    To investigate the effects of intensive blood pressure (BP) lowering according to baseline BP levels and optimal achieved BP levels in patients with acute intracerebral hemorrhage (ICH). INTERACT2 was an open, blinded endpoint, randomized controlled trial in 2,839 patients with ICH within 6 hours of onset and elevated systolic BP (SBP) (150-220 mm Hg) who were allocated to receive intensive (target SBP <140 mm Hg within 1 hour, with lower limit of 130 mm Hg for treatment cessation) or guideline-recommended (target SBP <180 mm Hg) BP-lowering treatment. Outcome was physical function across all 7 levels of the modified Rankin Scale at 90 days. Analysis of the randomized comparisons showed that intensive BP lowering produced comparable benefits on physical function at 90 days in 5 subgroups defined by baseline SBP of <160, 160-169, 170-179, 180-189, and ≥190 mm Hg (p homogeneity = 0.790). Analyses of achieved BP showed linear increases in the risk of physical dysfunction for achieved SBP above 130 mm Hg for both hyperacute (1-24 hours) and acute (2-7 days) phases while modest increases were also observed for achieved SBP below 130 mm Hg. Intensive BP lowering appears beneficial across a wide range of baseline SBP levels, and target SBP level of 130-139 mm Hg is likely to provide maximum benefit in acute ICH. This study provides Class I evidence that the effect of intensive BP lowering on physical function is not influenced by baseline BP. © 2014 American Academy of Neurology.

  2. Inverse-optimized 3D conformal planning: Minimizing complexity while achieving equivalence with beamlet IMRT in multiple clinical sites

    PubMed Central

    Fraass, Benedick A.; Steers, Jennifer M.; Matuszak, Martha M.; McShan, Daniel L.

    2012-01-01

…user-controllable search strategies which optimize plans without beamlet or pencil beam approximations. IO-3D allows comparisons of beamlet, multisegment, and conformal plans optimized using the same cost functions, dose points, and plan evaluation metrics, so quantitative comparisons are straightforward. Here, comparisons of IO-3D and beamlet IMRT techniques are presented for breast, brain, liver, and lung plans. Results: IO-3D achieves high-quality results comparable to beamlet IMRT for many situations. Though the IO-3D plans have many fewer degrees of freedom for the optimization, this work finds that IO-3D plans with only one to two segments per beam are dosimetrically equivalent (or nearly so) to the beamlet IMRT plans for several sites. IO-3D also reduces plan complexity significantly. Here, monitor units per fraction (MU/Fx) for IO-3D plans were 22%–68% less than for the 1 cm × 1 cm beamlet IMRT plans and 72%–84% less than for the 0.5 cm × 0.5 cm beamlet IMRT plans. Conclusions: The unique IO-3D algorithm illustrates that inverse planning can achieve high-quality 3D conformal plans equivalent (or nearly so) to unconstrained beamlet IMRT plans for many sites. IO-3D thus provides the potential to optimize flat or few-segment 3DCRT plans, creating less complex optimized plans which are efficient and simple to deliver. The less complex IO-3D plans have operational advantages for scenarios including adaptive replanning, cases with interfraction and intrafraction motion, and pediatric patients. PMID:22755717

  3. Periodic Application of Stochastic Cost Optimization Methodology to Achieve Remediation Objectives with Minimized Life Cycle Cost

    NASA Astrophysics Data System (ADS)

    Kim, U.; Parker, J.

    2016-12-01

Many dense non-aqueous phase liquid (DNAPL) contaminated sites in the U.S. are reported as "remediation in progress" (RIP). However, the cost to complete (CTC) remediation at these sites is highly uncertain and, in many cases, the current remediation plan may need to be modified or replaced to achieve remediation objectives. This study evaluates the effectiveness of iterative stochastic cost optimization that incorporates new field data for periodic parameter recalibration to incrementally reduce prediction uncertainty and implement remediation design modifications as needed to minimize the life cycle cost (i.e., CTC). This systematic approach, using the Stochastic Cost Optimization Toolkit (SCOToolkit), enables early identification and correction of problems to stay on track for completion while minimizing the expected (i.e., probability-weighted average) CTC. This study considers a hypothetical site involving multiple DNAPL sources in an unconfined aquifer, using thermal treatment for source reduction and electron donor injection for dissolved plume control. The initial design is based on stochastic optimization using model parameters and their joint uncertainty based on calibration to site characterization data. The model is periodically recalibrated using new monitoring data and performance data for the operating remediation systems. Projected future performance under the current remediation plan is assessed, and reoptimization of operational variables for the current system or alternative designs are considered depending on the assessment results. We compare remediation duration and cost for the stepwise reoptimization approach with single-stage optimization as well as with a non-optimized design based on typical engineering practice.
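The expected-CTC logic is easy to sketch (a toy Monte Carlo, not SCOToolkit; all designs, costs, and distributions are invented). Each candidate design's cost-to-complete depends on an uncertain decay-rate parameter; the design minimizing the probability-weighted average cost is chosen, and recalibration (here, simply a narrower, shifted distribution standing in for a posterior) can change which design is optimal:

```python
import random

random.seed(7)

# Hypothetical remediation designs (invented numbers, $M): capital cost
# plus annual O&M paid until cleanup completes.
designs = {
    "thermal_plus_edi": {"capital": 3.0, "annual": 0.25, "boost": 2.0},
    "edi_only":         {"capital": 1.0, "annual": 0.40, "boost": 1.0},
}

def cost_to_complete(d, k):
    """CTC for decay rate k (1/yr): cleanup needs ~4 'decay units'."""
    years = 4.0 / (k * d["boost"])
    return d["capital"] + d["annual"] * years

def expected_ctc(d, samples):
    """Probability-weighted average CTC over parameter samples."""
    return sum(cost_to_complete(d, k) for k in samples) / len(samples)

# Design stage: wide prior uncertainty in the decay rate k.
prior = [random.lognormvariate(-0.7, 0.5) for _ in range(5000)]
best_prior = min(designs, key=lambda n: expected_ctc(designs[n], prior))

# After recalibration with monitoring data: a narrower, shifted
# "posterior" standing in for the Bayesian parameter update.
posterior = [random.lognormvariate(-0.2, 0.15) for _ in range(5000)]
best_post = min(designs, key=lambda n: expected_ctc(designs[n], posterior))
```

Under the wide prior the aggressive design hedges against slow cleanup, but once recalibration reveals faster-than-feared decay, reoptimization switches to the cheaper design, the kind of mid-course correction the stepwise approach is built to catch.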

  4. Poor Results for High Achievers

    ERIC Educational Resources Information Center

    Bui, Sa; Imberman, Scott; Craig, Steven

    2012-01-01

    Three million students in the United States are classified as gifted, yet little is known about the effectiveness of traditional gifted and talented (G&T) programs. In theory, G&T programs might help high-achieving students because they group them with other high achievers and typically offer specially trained teachers and a more advanced…

  5. Knowledge Visualizations: A Tool to Achieve Optimized Operational Decision Making and Data Integration

    DTIC Science & Technology

    2015-06-01

    Thesis by Paul C. Hudson and Jeffrey A. Rzasa, June 2015. The indexed excerpt consists of disjoint fragments referencing the Hadoop Distributed File System (HDFS), Accumulo-based knowledge stores built on OWL/RDF, cloud deployment on Apache software, and extending DCGS-N naval tactical clouds from in-storage to in-memory for integrated fires (Godin & Akins, 2014).

  6. Sharing Leadership Responsibilities Results in Achievement Gains

    ERIC Educational Resources Information Center

    Armistead, Lew

    2010-01-01

    Collective, not individual, leadership in schools has a greater impact on student achievement; when principals and teachers share leadership responsibilities, student achievement is higher; and schools having high student achievement also display a vision for student achievement and teacher growth. Those are just a few of the insights into school…

  7. Learning optimal embedded cascades.

    PubMed

    Saberian, Mohammad Javad; Vasconcelos, Nuno

    2012-10-01

    The problem of automatic and optimal design of embedded object detector cascades is considered. Two main challenges are identified: optimization of the cascade configuration and optimization of individual cascade stages, so as to achieve the best tradeoff between classification accuracy and speed, under a detection rate constraint. Two novel boosting algorithms are proposed to address these problems. The first, RCBoost, formulates boosting as a constrained optimization problem which is solved with a barrier penalty method. The constraint is the target detection rate, which is met at all iterations of the boosting process. This enables the design of embedded cascades of known configuration without extensive cross validation or heuristics. The second, ECBoost, searches over cascade configurations to achieve the optimal tradeoff between classification risk and speed. The two algorithms are combined into an overall boosting procedure, RCECBoost, which optimizes both the cascade configuration and its stages under a detection rate constraint, in a fully automated manner. Extensive experiments in face, car, pedestrian, and panda detection show that the resulting detectors achieve an accuracy versus speed tradeoff superior to those of previous methods.
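
    The speed/accuracy trade-off that the boosting algorithms above optimize comes from early rejection in the cascade. A minimal sketch of embedded-cascade inference follows; the stage learners and thresholds here are hypothetical placeholders, not RCECBoost itself, which learns both from data:

```python
def cascade_predict(x, stages, thresholds):
    """Embedded-cascade inference: accumulate stage scores and reject as soon
    as the running score drops below that stage's threshold. Early rejection
    of easy negatives is what makes a cascade fast; the thresholds control
    the accuracy/speed trade-off."""
    score = 0.0
    for stage, threshold in zip(stages, thresholds):
        score += stage(x)
        if score < threshold:
            return False  # early exit: most non-object windows stop here
    return True  # survived every stage: declare a detection
```

    Training (RCBoost/ECBoost in the abstract) is what chooses the stages and thresholds; the sketch only shows why the learned cascade configuration affects detection speed.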

  8. Implementing optimal thinning strategies

    Treesearch

    Kurt H. Riitters; J. Douglas Brodie

    1984-01-01

    Optimal thinning regimes for achieving several management objectives were derived from two stand-growth simulators by dynamic programming. Residual mean tree volumes were then plotted against stand density management diagrams. The results supported the use of density management diagrams for comparing, checking, and implementing the results of optimization analyses....

  9. Optimal methotrexate dose is associated with better clinical outcomes than non-optimal dose in daily practice: results from the ESPOIR early arthritis cohort.

    PubMed

    Gaujoux-Viala, Cécile; Rincheval, Nathalie; Dougados, Maxime; Combe, Bernard; Fautrel, Bruno

    2017-12-01

    Although methotrexate (MTX) is the consensual first-line disease-modifying antirheumatic drug (DMARD) for rheumatoid arthritis (RA), substantial heterogeneity remains in its prescription and dosage, which are often not optimal. To evaluate the symptomatic and structural impact of optimal MTX dose in patients with early RA in daily clinical practice over 2 years. Patients included in the early arthritis ESPOIR cohort who fulfilled the ACR-EULAR (American College of Rheumatology/European League Against Rheumatism) criteria for RA and received MTX as a first DMARD were assessed. Optimal MTX dose was defined as ≥10 mg/week during the first 3 months, with escalation to ≥20 mg/week or 0.3 mg/kg/week at 6 months in the absence of Disease Activity Score in 28 joints (DAS28) remission. Symptomatic and structural efficacy with and without optimal MTX dose was assessed by generalised logistic regression with adjustment for appropriate variables. Within the first year of follow-up, 314 patients (53%) with RA received MTX as a first DMARD (mean dose 12.2±3.8 mg/week). Only 26.4% (n=76) received the optimal MTX dose. After adjustment, the optimal MTX dose was more effective than the non-optimal dose in achieving ACR-EULAR remission at 1 year (OR 4.28 (95% CI 1.86 to 9.86)) and normal functioning (Health Assessment Questionnaire ≤0.5; OR at 1 year 4.36 (95% CI 2.03 to 9.39)), with no effect on radiological progression. Results were similar during the second year. The optimal MTX dose is more efficacious than a non-optimal dose for remission and function in early arthritis in daily practice, with no impact on radiological progression over 2 years. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
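
    The study's "optimal dose" definition, as stated in the abstract, is a simple decision rule and can be encoded directly. This is our reading of the criterion; the function and argument names are our own:

```python
def is_optimal_mtx_dose(dose_first_3mo, dose_at_6mo, weight_kg, das28_remission_at_6mo):
    """Encodes the abstract's definition of optimal methotrexate dosing:
    >=10 mg/week during the first 3 months, and, if DAS28 remission has not
    been reached by 6 months, escalation to >=20 mg/week or >=0.3 mg/kg/week.
    Doses are in mg/week."""
    if dose_first_3mo < 10.0:
        return False  # under-dosed in the first 3 months
    if das28_remission_at_6mo:
        return True   # remission reached: no escalation required
    return dose_at_6mo >= 20.0 or dose_at_6mo >= 0.3 * weight_kg
```

    Note that the weight-based threshold matters for lighter patients, for whom 0.3 mg/kg/week can be reached below 20 mg/week.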

  10. Achieving definitive results in long-chain polyunsaturated fatty acid supplementation trials of term infants: factors for consideration.

    PubMed

    Meldrum, Suzanne J; Smith, Michael A; Prescott, Susan L; Hird, Kathryn; Simmer, Karen

    2011-04-01

    Numerous randomized controlled trials (RCTs) have been undertaken to determine whether supplementation with long-chain polyunsaturated fatty acids (LCPUFAs) in infancy would improve the developmental outcomes of term infants. The results of such trials have been thoroughly reviewed with no definitive conclusion as to the efficacy of LCPUFA supplementation. A number of reasons for the lack of conclusive findings in this area have been proposed. This review examines such factors with the aim of determining whether an optimal method of investigation for RCTs of LCPUFA supplementation in term infants can be ascertained from previous research. While more research is required to completely inform a method that is likely to achieve definitive results, the findings of this literature review indicate future trials should investigate the effects of sex, genetic polymorphisms, the specific effects of LCPUFAs, and the optimal tests for neurodevelopmental assessment. The current literature indicates a docosahexaenoic acid dose of 0.32%, supplementation from birth to 12 months, and a total sample size of at least 286 (143 per group) should be included in the methodology of future trials. © 2011 International Life Sciences Institute.

  11. School Counselors: Closing Achievement Gaps and Writing Results Reports

    ERIC Educational Resources Information Center

    Hartline, Julie; Cobia, Debra

    2012-01-01

    Charged with closing the achievement gap for marginalized students, school counselors need to be able to identify gaps, develop interventions, evaluate effectiveness, and share results. This study examined 100 summary results reports submitted by school counselors after having received four days of training on the ASCA National Model. Findings…

  12. Time-optimal control with finite bandwidth

    NASA Astrophysics Data System (ADS)

    Hirose, M.; Cappellaro, P.

    2018-04-01

    Time-optimal control theory provides recipes to achieve quantum operations with high fidelity and speed, as required in quantum technologies such as quantum sensing and computation. While technical advances have achieved the ultrastrong driving regime in many physical systems, these capabilities have yet to be fully exploited for the precise control of quantum systems, as other limitations, such as the generation of higher harmonics or the finite response time of the control apparatus, prevent the implementation of theoretical time-optimal control. Here we present a method to achieve time-optimal control of qubit systems that can take advantage of fast driving beyond the rotating wave approximation. We exploit results from time-optimal control theory to design driving protocols that can be implemented with realistic, finite-bandwidth control fields, and we find a relationship between bandwidth limitations and achievable control fidelity.

  13. Optimized survey design for electrical resistivity tomography: combined optimization of measurement configuration and electrode placement

    NASA Astrophysics Data System (ADS)

    Uhlemann, Sebastian; Wilkinson, Paul B.; Maurer, Hansruedi; Wagner, Florian M.; Johnson, Timothy C.; Chambers, Jonathan E.

    2018-07-01

    Within geoelectrical imaging, the choice of measurement configurations and electrode locations is known to control the image resolution. Previous work has shown that optimized survey designs can provide a model resolution superior to that of standard survey designs. This paper demonstrates a methodology to optimize resolution within a target area while limiting the number of required electrodes, thereby selecting optimal electrode locations. This is achieved by extending previous work on the `Compare-R' algorithm, which optimizes the model resolution in a target area by calculating updates to the resolution matrix. Here, an additional weighting factor is introduced that allows measurement configurations that can be acquired on a given set of electrodes to be preferentially added. The performance of the optimization is tested on two synthetic examples and verified with a laboratory study. The effect of the weighting factor is investigated using an acquisition layout comprising a single line of electrodes. The results show that an increasing weight decreases the area of improved resolution but leads to a smaller number of electrode positions. Imaging results superior to a standard survey design were achieved using 56 per cent fewer electrodes. The performance was also tested on a 3-D acquisition grid, where superior resolution within a target at the base of an embankment was achieved using 22 per cent fewer electrodes than a comparable standard survey. The effect of the underlying resistivity distribution on the performance of the optimization was investigated, and it was shown that even strong resistivity contrasts have only a minor impact. The synthetic results were verified in a laboratory tank experiment, where notable image improvements were achieved. This work shows that optimized surveys can be designed that have a resolution superior to standard survey designs, while requiring significantly fewer electrodes. This methodology thereby provides a means for
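
    The role of the weighting factor can be seen in a toy greedy selection. This is an illustration of the weighting idea only, not the actual `Compare-R' algorithm: each candidate configuration's resolution improvement is discounted by `weight` once per electrode it would add to the layout, so a small weight steers the design toward reusing electrodes already placed:

```python
def greedy_survey_design(candidates, n_select, weight):
    """Toy greedy survey design. Each candidate is a dict with the electrodes
    it needs and a (hypothetical) resolution-improvement score; weight < 1
    penalizes configurations that require new electrode positions."""
    selected, electrodes = [], set()
    pool = list(candidates)
    for _ in range(n_select):
        best = max(pool, key=lambda c: c["improvement"]
                   * weight ** len(set(c["electrodes"]) - electrodes))
        pool.remove(best)
        selected.append(best)
        electrodes |= set(best["electrodes"])  # layout grows as needed
    return selected, electrodes
```

    With `weight=1.0` the selection ranks purely by improvement; lowering the weight trades a little resolution for a layout with fewer electrode positions, mirroring the trade-off reported in the abstract.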

  14. Optimized survey design for Electrical Resistivity Tomography: combined optimization of measurement configuration and electrode placement

    NASA Astrophysics Data System (ADS)

    Uhlemann, Sebastian; Wilkinson, Paul B.; Maurer, Hansruedi; Wagner, Florian M.; Johnson, Timothy C.; Chambers, Jonathan E.

    2018-03-01

    Within geoelectrical imaging, the choice of measurement configurations and electrode locations is known to control the image resolution. Previous work has shown that optimized survey designs can provide a model resolution superior to that of standard survey designs. This paper demonstrates a methodology to optimize resolution within a target area while limiting the number of required electrodes, thereby selecting optimal electrode locations. This is achieved by extending previous work on the `Compare-R' algorithm, which optimizes the model resolution in a target area by calculating updates to the resolution matrix. Here, an additional weighting factor is introduced that allows measurement configurations that can be acquired on a given set of electrodes to be preferentially added. The performance of the optimization is tested on two synthetic examples and verified with a laboratory study. The effect of the weighting factor is investigated using an acquisition layout comprising a single line of electrodes. The results show that an increasing weight decreases the area of improved resolution but leads to a smaller number of electrode positions. Imaging results superior to a standard survey design were achieved using 56 per cent fewer electrodes. The performance was also tested on a 3D acquisition grid, where superior resolution within a target at the base of an embankment was achieved using 22 per cent fewer electrodes than a comparable standard survey. The effect of the underlying resistivity distribution on the performance of the optimization was investigated, and it was shown that even strong resistivity contrasts have only a minor impact. The synthetic results were verified in a laboratory tank experiment, where notable image improvements were achieved. This work shows that optimized surveys can be designed that have a resolution superior to standard survey designs, while requiring significantly fewer electrodes. This methodology thereby provides a means for improving

  15. Selective robust optimization: A new intensity-modulated proton therapy optimization strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yupeng; Niemela, Perttu; Siljamaki, Sami

    2015-08-15

    Purpose: To develop a new robust optimization strategy for intensity-modulated proton therapy as an important step in translating robust proton treatment planning from research to clinical applications. Methods: In selective robust optimization, a worst-case-based robust optimization algorithm is extended, and terms of the objective function are selectively computed from either the worst-case dose or the nominal dose. Two lung cancer cases and one head and neck cancer case were used to demonstrate the practical significance of the proposed robust planning strategy. The lung cancer cases had minimal tumor motion of less than 5 mm and, for the demonstration of the methodology, are assumed to be static. Results: Selective robust optimization achieved robust clinical target volume (CTV) coverage and at the same time increased nominal planning target volume coverage to 95.8%, compared to the 84.6% coverage achieved with CTV-based robust optimization in one of the lung cases. In the other lung case, the maximum dose in selective robust optimization was lowered from a dose of 131.3% in the CTV-based robust optimization to 113.6%. Selective robust optimization provided robust CTV coverage in the head and neck case, and at the same time improved control over the isodose distribution so that clinical requirements may be readily met. Conclusions: Selective robust optimization may provide the flexibility and capability necessary for meeting various clinical requirements in addition to achieving the required plan robustness in practical proton treatment planning settings.

  16. A Study of the Relationships between Distributed Leadership, Teacher Academic Optimism and Student Achievement in Taiwanese Elementary Schools

    ERIC Educational Resources Information Center

    Chang, I-Hua

    2011-01-01

    The purpose of this study was to explore the relationships between distributed leadership, teachers' academic optimism and student achievement in learning. The study targeted public elementary schools in Taiwan and adopted stratified random sampling to investigate 1500 teachers. Teachers' perceptions were collected by a self-report scale. In…

  17. A computer-based measure of resultant achievement motivation.

    PubMed

    Blankenship, V

    1987-08-01

    Three experiments were conducted to develop a computer-based measure of individual differences in resultant achievement motivation (RAM) on the basis of level-of-aspiration, achievement motivation, and dynamics-of-action theories. In Experiment 1, the number of atypical shifts and greater responsiveness to incentives on 21 trials with choices among easy, intermediate, and difficult levels of an achievement-oriented game were positively correlated and were found to differentiate the 62 subjects (31 men, 31 women) on the amount of time they spent at a nonachievement task (watching a color design) 1 week later. In Experiment 2, test-retest reliability was established with the use of 67 subjects (15 men, 52 women). Point and no-point trials were offered in blocks, with point trials first for half the subjects and no-point trials first for the other half. Reliability was higher for the atypical-shift measure than for the incentive-responsiveness measure and was higher when points were offered first. In Experiment 3, computer anxiety was manipulated by creating a simulated computer breakdown in the experimental condition. Fifty-nine subjects (13 men, 46 women) were randomly assigned to the experimental condition or to one of two control conditions (an interruption condition and a no-interruption condition). Subjects with low RAM, as demonstrated by a low number of typical shifts, took longer to choose the achievement-oriented task, as predicted by the dynamics-of-action theory. The difference was evident in all conditions and most striking in the computer-breakdown condition. A change of focus from atypical to typical shifts is discussed.

  18. A system dynamics optimization framework to achieve population desired of average weight target

    NASA Astrophysics Data System (ADS)

    Abidin, Norhaslinda Zainal; Zulkepli, Jafri Haji; Zaibidi, Nerda Zura

    2017-11-01

    Obesity is becoming a serious problem in Malaysia, which has been rated as having the highest rate among Asian countries. The aim of this paper is to propose a system dynamics (SD) optimization framework to achieve a population's desired average weight target, based on changes in physical activity behavior and their association with weight and obesity. The system dynamics approach of stocks and flows diagrams was used to quantitatively model the impact of both behaviors on the population's weight and obesity trends. This work brings these ideas together, highlighting the interdependence of the various aspects of eating and physical activity behavior within the complex human weight regulation system. The model was used as an experimentation vehicle to investigate the impacts of changes in physical activity on weight and the prevalence of obesity. This framework provides evidence on the usefulness of SD optimization as a strategic decision-making approach to assist in decisions related to obesity prevention. SD as applied in this research is relatively new in Malaysia and has high potential for application to any feedback model that addresses the behavioral causes of obesity.

  19. The Relationship of Mental Pressure with Optimism and Academic Achievement Motivation among Second Grade Male High School Students

    ERIC Educational Resources Information Center

    Sarouni, Ali Sedigh; Jenaabadi, Hossein; Pourghaz, Abdulwahab

    2016-01-01

    The present study aimed to examine the relationship of mental pressure with optimism and academic achievement motivation among second grade second period male high school students. This study followed a descriptive-correlational method. The sample included 200 second grade second period male high school students in Sooran. Data collection tools in…

  20. Goal Setting to Achieve Results

    ERIC Educational Resources Information Center

    Newman, Rich

    2012-01-01

    Both districts and individual schools have a very clear set of goals and skills for their students to achieve and master. In fact, except in rare cases, districts and schools develop very detailed goals they wish to pursue. In most cases, unfortunately, only the teachers and staff at a particular school or district-level office are aware of the…

  1. Optimism bias leads to inconclusive results - an empirical study

    PubMed Central

    Djulbegovic, Benjamin; Kumar, Ambuj; Magazin, Anja; Schroen, Anneke T.; Soares, Heloisa; Hozo, Iztok; Clarke, Mike; Sargent, Daniel; Schell, Michael J.

    2010-01-01

    Objective: Optimism bias refers to unwarranted belief in the efficacy of new therapies. We assessed the impact of optimism bias on the proportion of trials that did not answer their research question successfully, and explored whether poor accrual or optimism bias is responsible for inconclusive results. Study Design: Systematic review. Setting: Retrospective analysis of a consecutive series of phase III randomized controlled trials (RCTs) performed under the aegis of National Cancer Institute Cooperative groups. Results: 359 trials (374 comparisons) enrolling 150,232 patients were analyzed. 70% (262/374) of the trials generated conclusive results according to the statistical criteria. Investigators made definitive statements related to the treatment preference in 73% (273/374) of studies. Investigators' judgments and statistical inferences were concordant in 75% (279/374) of trials. Investigators consistently overestimated their expected treatment effects, but to a significantly larger extent for inconclusive trials. The median ratio of expected over observed hazard ratio or odds ratio was 1.34 (range 0.19–15.40) in conclusive trials compared to 1.86 (range 1.09–12.00) in inconclusive studies (p<0.0001). Only 17% of the trials had treatment effects that matched the original researchers' expectations. Conclusion: Formal statistical inference is sufficient to answer the research question in 75% of RCTs. The answers to the other 25% depend mostly on subjective judgments, which at times are in conflict with statistical inference. Optimism bias significantly contributes to inconclusive results. PMID:21163620
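
    The study's headline statistic, the median ratio of expected over observed effect size, is straightforward to reproduce on any list of trials. The numbers below are a hypothetical illustration, not the NCI Cooperative Group data:

```python
from statistics import median

def optimism_ratios(trials):
    """Per-trial ratio of the effect investigators expected to the effect
    observed; ratios above 1 indicate optimism (overestimated benefit)."""
    return [t["expected_effect"] / t["observed_effect"] for t in trials]

# Hypothetical illustration only.
trials = [
    {"expected_effect": 1.5, "observed_effect": 1.0},
    {"expected_effect": 1.2, "observed_effect": 1.0},
    {"expected_effect": 2.0, "observed_effect": 1.0},
]
print(median(optimism_ratios(trials)))  # 1.5
```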

  2. SU-D-206-01: Employing a Novel Consensus Optimization Strategy to Achieve Iterative Cone Beam CT Reconstruction On a Multi-GPU Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, B; Southern Medical University, Guangzhou, Guangdong; Tian, Z

    Purpose: While compressed sensing-based cone-beam CT (CBCT) iterative reconstruction techniques have demonstrated tremendous capability of reconstructing high-quality images from undersampled noisy data, their long computation time still hinders wide application in routine clinical use. The purpose of this study is to develop a reconstruction framework that employs modern consensus optimization techniques to achieve CBCT reconstruction on a multi-GPU platform for improved computational efficiency. Methods: Total projection data were evenly distributed to multiple GPUs. Each GPU performed reconstruction using its own projection data with a conventional total variation regularization approach to ensure image quality. In addition, the solutions from the GPUs were subject to a consistency constraint that they should be identical. We solved the optimization problem, with all the constraints considered rigorously, using an alternating direction method of multipliers (ADMM) algorithm. The reconstruction framework was implemented using OpenCL on a platform with two Nvidia GTX590 GPU cards, each with two GPUs. We studied the performance of our method and demonstrated its advantages through a simulation case with an NCAT phantom and an experimental case with a Catphan phantom. Results: Compared with CBCT images reconstructed using the conventional FDK method with full projection datasets, our proposed method achieved comparable image quality with about one third the number of projections. The computation time on the multi-GPU platform was ∼55 s and ∼35 s in the two cases, respectively, achieving a speedup factor of ∼3.0 compared with single-GPU reconstruction. Conclusion: We have developed a consensus ADMM-based CBCT reconstruction method which enables performing reconstruction on a multi-GPU platform. The achieved efficiency makes this method clinically attractive.
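
    The consensus construction can be seen in miniature on a scalar problem. Below, each "worker" (standing in for a GPU with its share of the projection data) minimizes its own quadratic, and the ADMM consensus and dual updates force the local solutions to agree. This is a toy sketch of consensus ADMM under those stated simplifications, not the CBCT reconstruction itself:

```python
def consensus_admm(targets, rho=1.0, iters=100):
    """Toy consensus ADMM: worker i minimizes f_i(x) = (x - b_i)^2 / 2 on its
    own data b_i, subject to the consensus constraint x_i = z. For this
    problem the consensus variable z converges to the mean of the b_i."""
    n = len(targets)
    u = [0.0] * n  # scaled dual variables
    z = 0.0        # consensus variable
    for _ in range(iters):
        # Local updates (these would run in parallel, one per GPU).
        x = [(b + rho * (z - ui)) / (1.0 + rho) for b, ui in zip(targets, u)]
        # Consensus update: average of local solutions plus duals.
        z = sum(xi + ui for xi, ui in zip(x, u)) / n
        # Dual updates penalize disagreement with the consensus.
        u = [ui + xi - z for ui, xi in zip(u, x)]
    return z
```

    In the paper's setting the local subproblem is a TV-regularized reconstruction on one GPU's projections rather than a scalar quadratic, but the alternation between local solves, a consensus step, and dual updates is the same.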

  3. Global optimization method based on ray tracing to achieve optimum figure error compensation

    NASA Astrophysics Data System (ADS)

    Liu, Xiaolin; Guo, Xuejia; Tang, Tianjin

    2017-02-01

    Figure error degrades the performance of an optical system. When predicting performance and performing system assembly, compensation by clocking of optical components around the optical axis is a conventional but user-dependent method. Commercial optical software cannot optimize this clocking. Meanwhile, existing automatic figure-error balancing methods can introduce approximation errors, and building the optimization model is complex and time-consuming. To overcome these limitations, an accurate and automatic global optimization method for figure-error balancing is proposed. The method is based on precise ray tracing, not approximate calculation, to compute the wavefront error for a given combination of element rotation angles. The composite wavefront error root-mean-square (RMS) acts as the cost function. A simulated annealing algorithm is used to seek the optimal combination of rotation angles of the optical elements. This method can be applied to all rotationally symmetric optics. Optimization results show that this method is 49% better than the previous approximate analytical method.
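
    A condensed sketch of the approach: a stand-in cost for the composite wavefront RMS, minimized by simulated annealing over the clocking angles. Idealizing each element's figure error as a single azimuthal harmonic is our simplification for illustration; the paper evaluates the cost by precise ray tracing instead:

```python
import math
import random

def composite_rms(angles, amplitudes):
    """Stand-in for the ray-traced composite wavefront RMS: each element
    contributes one azimuthal harmonic, so the total depends only on the
    relative clocking angles (zero when the errors cancel)."""
    re = sum(a * math.cos(t) for a, t in zip(amplitudes, angles))
    im = sum(a * math.sin(t) for a, t in zip(amplitudes, angles))
    return math.hypot(re, im) / math.sqrt(2.0)

def anneal(amplitudes, steps=20000, t0=1.0, seed=0):
    """Simulated annealing over element clocking angles."""
    rng = random.Random(seed)
    cur = [0.0] * len(amplitudes)
    cur_cost = composite_rms(cur, amplitudes)
    best, best_cost = list(cur), cur_cost
    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-9  # linear cooling schedule
        cand = list(cur)
        i = rng.randrange(len(cand))
        cand[i] = (cand[i] + rng.gauss(0.0, 0.3)) % (2.0 * math.pi)
        cost = composite_rms(cand, amplitudes)
        # Metropolis rule: always accept improvements, occasionally accept
        # worse moves early on to escape local minima.
        if cost < cur_cost or rng.random() < math.exp((cur_cost - cost) / temp):
            cur, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = list(cand), cost
    return best, best_cost
```

    For two elements with equal error amplitude, the optimum clocks them half a turn apart so the harmonics cancel and the residual RMS approaches zero.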

  4. Optimizing the Entrainment Geometry of a Dry Powder Inhaler: Methodology and Preliminary Results.

    PubMed

    Kopsch, Thomas; Murnane, Darragh; Symons, Digby

    2016-11-01

    For passive dry powder inhalers (DPIs), entrainment and emission of the aerosolized drug dose depend strongly on device geometry and the patient's inhalation manoeuvre. We propose a computational method for optimizing the entrainment part of a DPI. The approach assumes that the pulmonary delivery location of the aerosol can be determined by the timing of dose emission into the tidal airstream. An optimization algorithm was used to iteratively perform computational fluid dynamic (CFD) simulations of the drug emission of a DPI, changing the device geometry to improve performance. The objectives were to achieve drug emission that was: (A) independent of inhalation manoeuvre; (B) similar to a target profile. The simulations used complete inhalation flow-rate profiles generated dependent on the device resistance. The CFD solver was OpenFOAM, with drug/air flow simulated by the Eulerian-Eulerian method. To demonstrate the method, a 2D geometry was optimized for inhalation independence (comparing two breath profiles) and an early-bolus delivery. Entrainment was both shear-driven and gas-assisted. Optimization for a delay in bolus delivery was not possible with the chosen geometry. Computational optimization of a DPI geometry for most similar drug delivery has been accomplished for an example entrainment geometry.

  5. Optimal allocation of land and water resources to achieve Water, Energy and Food Security in the upper Blue Nile basin

    NASA Astrophysics Data System (ADS)

    Allam, M.; Eltahir, E. A. B.

    2017-12-01

    Rapid population growth, hunger problems, increasing energy demands, persistent conflicts between the Nile basin riparian countries and the potential impacts of climate change highlight the urgent need for the conscious stewardship of the upper Blue Nile (UBN) basin resources. This study develops a framework for the optimal allocation of land and water resources to agriculture and hydropower production in the UBN basin. The framework consists of three optimization models that aim to: (a) provide accurate estimates of the basin water budget, (b) allocate land and water resources optimally to agriculture, and (c) allocate water to agriculture and hydropower production, and investigate trade-offs between them. First, a data assimilation procedure for data-scarce basins is proposed to deal with data limitations and produce estimates of the hydrologic components that are consistent with the principles of mass and energy conservation. Second, the most representative topography and soil properties datasets are objectively identified and used to delineate the agricultural potential in the basin. The agricultural potential is incorporated into a land-water allocation model that maximizes the net economic benefits from rain-fed agriculture while allowing for enhancing the soils from one suitability class to another to increase agricultural productivity in return for an investment in soil inputs. The optimal agricultural expansion is expected to reduce the basin flow by 7.6 cubic kilometres, impacting downstream countries. The optimization framework is expanded to include hydropower production. This study finds that allocating water to grow rain-fed teff in the basin is more profitable than allocating water for hydropower production. Optimal operation rules for the Grand Ethiopian Renaissance dam (GERD) are identified to maximize annual hydropower generation while achieving a relatively uniform monthly production rate. Trade-offs between agricultural expansion and hydropower

  6. Optimization of Composite Material System and Lay-up to Achieve Minimum Weight Pressure Vessel

    NASA Astrophysics Data System (ADS)

    Mian, Haris Hameed; Wang, Gang; Dar, Uzair Ahmed; Zhang, Weihong

    2013-10-01

    The use of composite pressure vessels, particularly in the aerospace industry, is escalating rapidly because of their superiority in directional strength and colossal weight advantage. The present work elucidates a procedure to optimize the lay-up for a composite pressure vessel using finite element analysis and to calculate the relative weight saving compared with a reference metallic pressure vessel. The determination of proper fiber orientation and laminate thickness is very important to decrease manufacturing difficulties and increase structural efficiency. In the present work, different lay-up sequences for laminates, including cross-ply [0m/90n]s, angle-ply [±θ]ns, [90/±θ]ns and [0/±θ]ns, are analyzed. The lay-up sequence, orientation, and laminate thickness (number of layers) are optimized for three candidate composite materials: S-glass/epoxy, Kevlar/epoxy, and Carbon/epoxy. Finite element analysis of the composite pressure vessel is performed using the commercial finite element code ANSYS, utilizing the capabilities of the ANSYS Parametric Design Language and Design Optimization module to automate the optimization process. For verification, a code is developed in MATLAB based on classical lamination theory, incorporating the Tsai-Wu failure criterion for first-ply failure (FPF). The results of the MATLAB code show its effectiveness in theoretical prediction of the first-ply failure strengths of laminated composite pressure vessels and close agreement with the FEA results. The optimization results show that, for all the composite material systems considered, the angle-ply [±θ]ns is the optimum lay-up. For a given fixed ply thickness, the total laminate thickness is obtained such that the factor of safety is slightly higher than two. Both Carbon/epoxy and Kevlar/epoxy resulted in approximately the same laminate thickness and a considerable percentage of weight saving, but S-glass/epoxy resulted in a weight increase.
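
    The first-ply-failure check at the core of the verification code is the Tsai-Wu criterion, which for a plane-stress lamina reduces to a quadratic in the ply stresses. A sketch follows; the strength values used in testing it are generic carbon/epoxy-like numbers, not the paper's material data, and the F12 interaction term uses a common default:

```python
import math

def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Tsai-Wu failure index for a plane-stress lamina; the ply is predicted
    to fail when the index reaches 1. s1, s2, t12 are the in-ply stresses;
    Xt/Xc and Yt/Yc are tensile/compressive strength magnitudes (positive)
    along and across the fibers, and S is the in-plane shear strength."""
    F1 = 1.0 / Xt - 1.0 / Xc
    F2 = 1.0 / Yt - 1.0 / Yc
    F11 = 1.0 / (Xt * Xc)
    F22 = 1.0 / (Yt * Yc)
    F66 = 1.0 / S**2
    F12 = -0.5 * math.sqrt(F11 * F22)  # common default for the interaction term
    return (F1 * s1 + F2 * s2 + F11 * s1**2 + F22 * s2**2
            + F66 * t12**2 + 2.0 * F12 * s1 * s2)
```

    By construction the index is exactly 1 under a pure uniaxial stress equal to the corresponding strength, which makes the function easy to sanity-check.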

  7. WFH: closing the global gap--achieving optimal care.

    PubMed

    Skinner, Mark W

    2012-07-01

    For 50 years, the World Federation of Hemophilia (WFH) has been working globally to close the gap in care and to achieve Treatment for All patients, men and women, with haemophilia and other inherited bleeding disorders, regardless of where they might live. The WFH estimates that more than one in 1000 men and women has a bleeding disorder equating to 6,900,000 worldwide. To close the gap in care between developed and developing nations a continued focus on the successful strategies deployed heretofore will be required. However, in response to the rapid advances in treatment and emerging therapeutic advances on the horizon it will also require fresh approaches and renewed strategic thinking. It is difficult to predict what each therapeutic advance on the horizon will mean for the future, but there is no doubt that we are in a golden age of research and development, which has the prospect of revolutionizing treatment once again. An improved understanding of "optimal" treatment is fundamental to the continued evolution of global care. The challenges of answering government and payer demands for evidence-based medicine, and cost justification for the introduction and enhancement of treatment, are ever-present and growing. To sustain and improve care it is critical to build the body of outcome data for individual patients, within haemophilia treatment centers (HTCs), nationally, regionally and globally. Emerging therapeutic advances (longer half-life therapies and gene transfer) should not be justified or brought to market based only on the notion that they will be economically more affordable, although that may be the case, but rather more importantly that they will be therapeutically more advantageous. Improvements in treatment adherence, reductions in bleeding frequency (including microhemorrhages), better management of trough levels, and improved health outcomes (including quality of life) should be the foremost considerations. As part of a new WFH strategic plan

  8. Optimism bias leads to inconclusive results-an empirical study.

    PubMed

    Djulbegovic, Benjamin; Kumar, Ambuj; Magazin, Anja; Schroen, Anneke T; Soares, Heloisa; Hozo, Iztok; Clarke, Mike; Sargent, Daniel; Schell, Michael J

    2011-06-01

    Optimism bias refers to unwarranted belief in the efficacy of new therapies. We assessed the impact of optimism bias on the proportion of trials that did not answer their research question successfully, and explored whether poor accrual or optimism bias is responsible for inconclusive results. Systematic review. Retrospective analysis of a consecutive series of phase III randomized controlled trials (RCTs) performed under the aegis of National Cancer Institute Cooperative groups. Three hundred fifty-nine trials (374 comparisons) enrolling 150,232 patients were analyzed. Seventy percent (262 of 374) of the trials generated conclusive results according to the statistical criteria. Investigators made definitive statements related to the treatment preference in 73% (273 of 374) of studies. Investigators' judgments and statistical inferences were concordant in 75% (279 of 374) of trials. Investigators consistently overestimated their expected treatment effects, but to a significantly larger extent for inconclusive trials. The median ratio of expected to observed hazard ratio or odds ratio was 1.34 (range: 0.19-15.40) in conclusive trials compared with 1.86 (range: 1.09-12.00) in inconclusive studies (P<0.0001). Only 17% of the trials had treatment effects that matched the original researchers' expectations. Formal statistical inference is sufficient to answer the research question in 75% of RCTs. The answers to the other 25% depend mostly on subjective judgments, which at times are in conflict with statistical inference. Optimism bias significantly contributes to inconclusive results. Copyright © 2011 Elsevier Inc. All rights reserved.

  9. Optimal and safe standard doses of midazolam and propofol to achieve patient and doctor satisfaction with dental treatment: A prospective cohort study

    PubMed Central

    Nonaka, Mutsumi; Nishimura, Akiko; Gotoh, Kinuko; Oka, Shuichirou; Iijima, Takehiko

    2017-01-01

    Background: The incidences of morbidity and mortality caused by pharmacosedation for dental treatment have not yet reached zero. Adverse events are related to inappropriate respiratory management, mostly originating from an overdose of sedatives. Since sedation is utilized for the satisfaction of both the dentist and the patient, the optimal dose should be minimized to prevent adverse events. We attempted to define the optimal doses of midazolam and propofol required to achieve high levels of patient and dentist satisfaction. Methods: One thousand dental patients, including those undergoing third molar extractions, were enrolled in this study. A dose of 1 mg of midazolam was administered at 1-minute intervals until adequate sedation was achieved. Propofol was then infused continuously to maintain the sedation level. Both the patients and the dentists were subsequently interviewed and asked to complete a questionnaire. A multivariate logistic regression analysis was used to examine the factors that contributed to patient and dentist satisfaction. Results: The peak midazolam dose resulting in the highest percentage of patient satisfaction was 3 mg. Both a lower dose and a higher dose reduced patient satisfaction. Patient satisfaction increased with an increasing dosage of propofol up to 4 mg/kg/hr, reaching a peak of 78.6%. The peak midazolam dose resulting in the highest percentage of dentist satisfaction (78.8%) was 2 mg. Incremental propofol doses reduced dentist satisfaction, in contrast to their effect on patient satisfaction. The strongest independent predictors of patient satisfaction and dentist satisfaction were no intraoperative memory (OR, 5.073; 95% CI, 3.532–7.287; P<0.001) and unintentional movements by the patient (OR, 0.035; 95% CI, 0.012–0.104; P<0.001), respectively. No serious adverse events were reported. Conclusion: We found that 3 mg of midazolam and 3 mg/kg/hr of propofol may be the optimal doses for maximizing both patient and dentist

  10. Notification: Review of Science to Achieve Results (STAR) Grant Program

    EPA Pesticide Factsheets

    Project #OA-FY12-0606, July 16, 2012. EPA’s Office of Inspector General (OIG) plans to begin preliminary research for an audit of grants awarded under EPA’s Science to Achieve Results (STAR) program.

  11. The expanded invasive weed optimization metaheuristic for solving continuous and discrete optimization problems.

    PubMed

    Josiński, Henryk; Kostrzewa, Daniel; Michalczuk, Agnieszka; Switoński, Adam

    2014-01-01

    This paper introduces an expanded version of the Invasive Weed Optimization algorithm (exIWO), distinguished by a hybrid search-space exploration strategy proposed by the authors. The algorithm is evaluated by solving three well-known optimization problems: minimization of numerical functions, feature selection, and the Mona Lisa TSP Challenge as an instance of the traveling salesman problem. The achieved results are compared with analogous outcomes produced by other optimization methods reported in the literature.
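
The basic invasive weed optimization loop that exIWO extends can be sketched in a few lines. This is a generic, textbook-style IWO minimizing the sphere function; it does not reproduce the authors' hybrid exploration strategy, and every parameter value here is an illustrative assumption.

```python
import random

def iwo_minimize(f, dim, bounds, pop_max=20, iters=100,
                 seeds_min=1, seeds_max=5,
                 sigma_init=1.0, sigma_final=0.01, n_mod=3):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(5)]
    for it in range(iters):
        # Dispersion radius shrinks nonlinearly over the iterations.
        sigma = ((iters - it) / iters) ** n_mod * (sigma_init - sigma_final) + sigma_final
        costs = [f(x) for x in pop]
        best, worst = min(costs), max(costs)
        offspring = []
        for x, c in zip(pop, costs):
            # Fitter weeds spread more seeds (linear fitness-to-seed mapping).
            if worst == best:
                n_seeds = seeds_max
            else:
                n_seeds = int(seeds_min + (worst - c) / (worst - best) * (seeds_max - seeds_min))
            for _ in range(n_seeds):
                child = [min(hi, max(lo, xi + random.gauss(0, sigma))) for xi in x]
                offspring.append(child)
        pop.extend(offspring)
        # Competitive exclusion: keep only the best pop_max weeds.
        pop.sort(key=f)
        pop = pop[:pop_max]
    return pop[0], f(pop[0])

sphere = lambda x: sum(xi * xi for xi in x)
random.seed(0)
best_x, best_val = iwo_minimize(sphere, dim=3, bounds=(-5, 5))
```

The three IWO ingredients are visible here: fitness-proportional seed counts, a dispersion radius that decays over iterations, and competitive exclusion down to a fixed population size.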

  12. Strategies to optimize lithium-ion supercapacitors achieving high-performance: Cathode configurations, lithium loadings on anode, and types of separator

    NASA Astrophysics Data System (ADS)

    Cao, Wanjun; Li, Yangxing; Fitch, Brian; Shih, Jonathan; Doung, Tien; Zheng, Jim

    2014-12-01

    The Li-ion capacitor (LIC) is composed of a lithium-doped carbon anode and an activated carbon cathode, making it half Li-ion battery (LIB) and half electrochemical double-layer capacitor (EDLC). LICs can achieve much higher energy density than EDLCs without sacrificing the high power performance advantage of capacitors over batteries. LIC pouch cells were assembled using an activated carbon (AC) cathode and a hard carbon (HC) + stabilized lithium metal powder (SLMP®) anode. Different cathode configurations, various SLMP loadings on the HC anode, and two types of separators were investigated to achieve the optimal electrochemical performance of the LIC. Firstly, the cathode binder study suggests that the PTFE binder offers improved energy and power performance for the LIC in comparison to PVDF. Secondly, among all the studied mass ratios of lithium loading to active anode material, a SLMP-to-HC mass ratio of 1:7 gave the optimal electrochemical performance for the LIC. Finally, compared to the Celgard PP 3501 separator, cellulose-based TF40-30 proved to be a preferred separator for the LIC.

  13. Enablers and barriers for women with gestational diabetes mellitus to achieve optimal glycaemic control - a qualitative study using the theoretical domains framework.

    PubMed

    Martis, Ruth; Brown, Julie; McAra-Couper, Judith; Crowther, Caroline A

    2018-04-11

    Glycaemic target recommendations vary widely between international professional organisations for women with gestational diabetes mellitus (GDM). Some studies have reported women's experiences of having GDM, but little is known about how this relates to their glycaemic targets. The aim of this study was to identify enablers and barriers for women with GDM to achieve optimal glycaemic control. Women with GDM were recruited from two large, geographically different, hospitals in New Zealand to participate in a semi-structured interview to explore their views and experiences focusing on enablers and barriers to achieving optimal glycaemic control. Final thematic analysis was performed using the Theoretical Domains Framework. Sixty women participated in the study. Women reported a shift from their initial negative response to accepting their diagnosis but disliked the constant focus on numbers. Enablers and barriers were categorised into ten domains across the three study questions. Enablers included: the ability to attend group teaching sessions with family and hear from women who have had GDM; easy access to a diabetes dietitian with diet recommendations tailored to a woman's context including ethnic food and financial considerations; free capillary blood glucose (CBG) monitoring equipment; health shuttles to take women to appointments; child care when attending clinic appointments; and being taught CBG testing by a community pharmacist. Barriers included: lack of health information, teaching sessions, consultations, and food diaries in a woman's first language; long waiting times at clinic appointments; seeing a different health professional every clinic visit; inconsistent advice; no tailored physical activity assessments; not knowing where to access appropriate information on the internet; unsupportive partners, families, and workplaces; and unavailability of social media or support groups for women with GDM. Perceived judgement by others led some women only to share

  14. Should Schools Be Optimistic? An Investigation of the Association between Academic Optimism of Schools and Student Achievement in Primary Education

    ERIC Educational Resources Information Center

    Boonen, Tinneke; Pinxten, Maarten; Van Damme, Jan; Onghena, Patrick

    2014-01-01

    Academic emphasis, collective efficacy, and faculty trust in students and parents (3 school characteristics positively associated with student achievement) are assumed to form a higher order latent construct, "academic optimism" (Hoy, Tarter, & Woolfolk Hoy, 2006a, 2006b). The aim of the present study is to corroborate the latent…

  15. Portfolio optimization with mean-variance model

    NASA Astrophysics Data System (ADS)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve a target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, defined as the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the optimal portfolio composition differs across the stocks. Moreover, investors can attain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
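
The mean-variance construction described above is an equality-constrained quadratic program (minimize w'Σw subject to w'μ = target and w'1 = 1), solvable in closed form from its KKT system. The sketch below uses synthetic weekly returns for four hypothetical assets, not the FBMKLCI data from the study.

```python
import numpy as np

# Synthetic weekly returns for 4 hypothetical assets (stand-in data).
rng = np.random.default_rng(0)
R = rng.normal(0.002, 0.02, size=(200, 4))
mu = R.mean(axis=0)                 # expected returns
Sigma = np.cov(R, rowvar=False)     # covariance (portfolio risk model)
target = mu.mean()                  # an attainable target weekly return

# KKT system for: minimize w' Sigma w  s.t.  w'mu = target, w'1 = 1
n = len(mu)
A = np.zeros((n + 2, n + 2))
A[:n, :n] = 2 * Sigma
A[:n, n] = mu
A[n, :n] = mu
A[:n, n + 1] = 1.0
A[n + 1, :n] = 1.0
b = np.zeros(n + 2)
b[n] = target
b[n + 1] = 1.0

w = np.linalg.solve(A, b)[:n]       # optimal weights (shorting allowed)
risk = w @ Sigma @ w                # minimized portfolio variance
```

Note that this closed form permits negative weights (short selling); adding no-short constraints turns the problem into a quadratic program that needs a numerical solver.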

  16. Physiological geroscience: targeting function to increase healthspan and achieve optimal longevity

    PubMed Central

    Justice, Jamie N.; LaRocca, Thomas J.

    2015-01-01

    Abstract Most nations of the world are undergoing rapid and dramatic population ageing, which presents great socio‐economic challenges, as well as opportunities, for individuals, families, governments and societies. The prevailing biomedical strategy for reducing the healthcare impact of population ageing has been ‘compression of morbidity’ and, more recently, to increase healthspan, both of which seek to extend the healthy period of life and delay the development of chronic diseases and disability until a brief period at the end of life. Indeed, a recently established field within biological ageing research, ‘geroscience’, is focused on healthspan extension. Superimposed on this background are new attitudes and demand for ‘optimal longevity’ – living long, but with good health and quality of life. A key obstacle to achieving optimal longevity is the progressive decline in physiological function that occurs with ageing, which causes functional limitations (e.g. reduced mobility) and increases the risk of chronic diseases, disability and mortality. Current efforts to increase healthspan centre on slowing the fundamental biological processes of ageing such as inflammation/oxidative stress, increased senescence, mitochondrial dysfunction, impaired proteostasis and reduced stress resistance. We propose that optimization of physiological function throughout the lifespan should be a major emphasis of any contemporary biomedical policy addressing global ageing. Effective strategies should delay, reduce in magnitude or abolish reductions in function with ageing (primary prevention) and/or improve function or slow further declines in older adults with already impaired function (secondary prevention). Healthy lifestyle practices featuring regular physical activity and ideal energy intake/diet composition represent first‐line function‐preserving strategies, with pharmacological agents, including existing and new pharmaceuticals and novel

  17. Physiological geroscience: targeting function to increase healthspan and achieve optimal longevity.

    PubMed

    Seals, Douglas R; Justice, Jamie N; LaRocca, Thomas J

    2016-04-15

    Most nations of the world are undergoing rapid and dramatic population ageing, which presents great socio-economic challenges, as well as opportunities, for individuals, families, governments and societies. The prevailing biomedical strategy for reducing the healthcare impact of population ageing has been 'compression of morbidity' and, more recently, to increase healthspan, both of which seek to extend the healthy period of life and delay the development of chronic diseases and disability until a brief period at the end of life. Indeed, a recently established field within biological ageing research, 'geroscience', is focused on healthspan extension. Superimposed on this background are new attitudes and demand for 'optimal longevity' - living long, but with good health and quality of life. A key obstacle to achieving optimal longevity is the progressive decline in physiological function that occurs with ageing, which causes functional limitations (e.g. reduced mobility) and increases the risk of chronic diseases, disability and mortality. Current efforts to increase healthspan centre on slowing the fundamental biological processes of ageing such as inflammation/oxidative stress, increased senescence, mitochondrial dysfunction, impaired proteostasis and reduced stress resistance. We propose that optimization of physiological function throughout the lifespan should be a major emphasis of any contemporary biomedical policy addressing global ageing. Effective strategies should delay, reduce in magnitude or abolish reductions in function with ageing (primary prevention) and/or improve function or slow further declines in older adults with already impaired function (secondary prevention). Healthy lifestyle practices featuring regular physical activity and ideal energy intake/diet composition represent first-line function-preserving strategies, with pharmacological agents, including existing and new pharmaceuticals and novel 'nutraceutical' compounds, serving as potential

  18. Navy Strategy for Achieving Information Dominance, 2013-2017. Optimizing Navy’s Primacy in the Maritime and Information Domains

    DTIC Science & Technology

    2013-01-01

    and resources to optimize decision making and maximize warfighting effects, Navy Information Dominance has become a leading Service priority. In 2009...This Strategy for Achieving Information Dominance provides the framework through which the Navy's information capabilities will be mainstreamed into...the Navy's culture as a distinct warfighting discipline. The strategy focuses on the three fundamental Information Dominance capabilities of Assured

  19. Robust Airfoil Optimization to Achieve Consistent Drag Reduction Over a Mach Range

    NASA Technical Reports Server (NTRS)

    Li, Wu; Huyse, Luc; Padula, Sharon; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    We prove mathematically that in order to avoid point-optimization at the sampled design points for multipoint airfoil optimization, the number of design points must be greater than the number of free-design variables. To overcome point-optimization at the sampled design points, a robust airfoil optimization method (called the profile optimization method) is developed and analyzed. This optimization method aims at a consistent drag reduction over a given Mach range and has three advantages: (a) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (b) there is no random airfoil shape distortion for any iterate it generates, and (c) it allows a designer to make a trade-off between a truly optimized airfoil and the amount of computing time consumed. For illustration purposes, we use the profile optimization method to solve a lift-constrained drag minimization problem for a 2-D airfoil in Euler flow with 20 free-design variables. A comparison with other airfoil optimization methods is also included.

  20. Blast optimization for improved dragline productivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humphreys, M.; Baldwin, G.

    1994-12-31

    A project aimed at blast optimization for large open pit coal mines is utilizing blast monitoring and analysis techniques, advanced dragline monitoring equipment, and blast simulation software, to assess the major controlling factors affecting both blast performance and subsequent dragline productivity. This has involved collaborative work between the explosives supplier, mine operator, monitoring equipment manufacturer, and a mining research organization. The results from trial blasts and subsequently monitored dragline production have yielded promising results and continuing studies are being conducted as part of a blast optimization program. It should be stressed that the optimization of blasting practices for improved dragline productivity is a site specific task, achieved through controlled and closely monitored procedures. The benefits achieved at one location can not be simply transferred to another minesite unless similar improvement strategies are first implemented.

  1. A free gingival impression for achieving optimal interdental papilla height: a case report.

    PubMed

    Nozawa, Takeshi; Kitami, Norikazu; Tsurumaki, Shunzo; Enomoto, Hiroaki; Ito, Koichi

    2011-02-01

    Failure to tend to inadequate crown contours in the crown trial can cause long-term disharmony of the free gingival form. This case report describes a novel technique for free gingival impression from a final provisional restoration to a zirconia crown. Two die casts were manufactured from a silicone impression. The first die cast was for the zirconia crown; the second die cast was for the final provisional restoration and the provisionalized transfer coping. A free gingival impression was taken using a provisionalized transfer coping, and a soft gingival model was manufactured. The proximal contact position was managed using the predicted convex curve of the interdental papillae. One year after zirconia crown placement, no inflammation was observed around the pyramidal interdental papillae, and symmetric interdental papilla heights were evident. A free gingival impression using a two die-cast technique appears to be useful for achieving optimal interdental papilla height.

  2. Integrated multidisciplinary design optimization of rotorcraft

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Mantay, Wayne R.

    1989-01-01

    The NASA/Army research plan for developing the logic elements for helicopter rotor design optimization by integrating appropriate disciplines and accounting for important interactions among the disciplines is discussed. The paper describes the optimization formulation in terms of the objective function, design variables, and constraints. The analysis aspects are discussed, and an initial effort at defining the interdisciplinary coupling is summarized. Results are presented on the achievements made in the rotor aerodynamic performance optimization for minimum hover horsepower, rotor dynamic optimization for vibration reduction, rotor structural optimization for minimum weight, and integrated aerodynamic load/dynamics optimization for minimum vibration and weight.

  3. Need for optimizing catalyst loading for achieving affordable microbial fuel cells.

    PubMed

    Singh, Inderjeet; Chandra, Amreesh

    2013-08-01

    Microbial fuel cell (MFC) technology is a promising technology for electricity production together with simultaneous water treatment. Catalysts play an important role in deciding MFC performance. In most reports, the effect of the catalyst, both its type and quantity, is not optimized. In this paper, the synthesis of nanorods of MnO2-catalyst particles for application in Pt-free MFCs is reported. The effect of catalyst loading, i.e., its weight ratio with respect to the conducting element and binder, has been optimized by employing a large number of combinations. Using a simple theoretical model, it is shown that too high (or too low) a concentration of catalyst results in loss of MFC performance. The operation of the MFC has been investigated using domestic wastewater as the source of bio-waste in order to reflect a real-world situation. A maximum power density of ∼61 mW/m(2) was obtained when the weight ratio of catalyst to conducting species was 1:1. Suitable reasons are given to explain the outcomes. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Two Blades-Up Runs Using the JetStream Navitus Atherectomy Device Achieve Optimal Tissue Debulking of Nonocclusive In-Stent Restenosis: Observations From a Porcine Stent/Balloon Injury Model.

    PubMed

    Shammas, Nicolas W; Aasen, Nicole; Bailey, Lynn; Budrewicz, Jay; Farago, Trent; Jarvis, Gary

    2015-08-01

    To determine the number of runs with blades up (BU) using the JetStream Navitus to achieve optimal debulking in a porcine model of femoropopliteal artery in-stent restenosis (ISR). In this porcine model, 8 limbs were implanted with overlapping nitinol self-expanding stents. ISR was treated initially with 2 blades-down (BD) runs followed by 4 BU runs (BU1 to BU4). Quantitative vascular angiography (QVA) was performed at baseline, after 2 BD runs, and after each BU run. Plaque surface area and percent stenosis within the treated stented segment were measured. Intravascular ultrasound (IVUS) was used to measure minimum lumen area (MLA) and determine IVUS-derived plaque surface area. QVA showed that plaque surface area was significantly reduced between baseline (83.9%±14.8%) and 2 BD (67.7%±17.0%, p=0.005) and BU1 (55.4%±9.0%, p=0.005) runs, and between BU1 and BU2 runs (50.7%±9.7%, p<0.05). Percent stenosis behaved similarly with no further reduction after BU2. There were no further reductions in plaque surface area or percent stenosis with BU3 and BU4 runs (p=0.10). Similarly, IVUS (24 lesions) confirmed optimal results with BU2 runs and no additional gain in MLA or reduction in plaque surface area with BU3 and BU4. IVUS confirmed no orbital cutting with JetStream Navitus. There were no stent strut discontinuities on high-resolution radiographs following atherectomy. JetStream Navitus achieved optimal tissue debulking after 2 BD and 2 BU runs with no further statistical gain in debulking after the BU2 run. Operators treating ISR with JetStream Navitus may be advised to limit their debulking to 2 BD and 2 BU runs to achieve optimal debulking. © The Author(s) 2015.

  5. DSP code optimization based on cache

    NASA Astrophysics Data System (ADS)

    Xu, Chengfa; Li, Chengcheng; Tang, Bin

    2013-03-01

    A DSP program often runs less efficiently on board than in software simulation during program development, mainly as a result of the user's improper use and incomplete understanding of the cache-based memory. This paper takes the TI TMS320C6455 DSP as an example, analyzes its two-level internal cache, and summarizes methods of code optimization. The processor can achieve its best performance when these code optimization methods are used. Finally, a specific application to a radar signal processing algorithm is presented. Experimental results show that these optimizations are effective.

  6. Reduction of exposure to acrylamide: achievements, potential of optimization, and problems encountered from the perspectives of a Swiss enforcement laboratory.

    PubMed

    Grob, Koni

    2005-01-01

    The most important initiatives taken in Switzerland to reduce exposure of consumers to acrylamide are the separate sale of potatoes low in reducing sugars for roasting and frying, the optimization of the raw material and preparation of french fries, and campaigns to implement suitable preparation methods in the gastronomy and homes. Industry works on improving a range of other products. Although these measures can reduce high exposures by some 80%, they have little effect on the background exposure resulting from coffee, bread, and numerous other products for which no substantial improvement is in sight. At this stage, improvements should be achieved by supporting voluntary activity rather than legal limits. Committed and consistent risk communication is key, and the support of improvements presupposes innovative approaches.

  7. Optimizing Aesthetic Outcomes in Delayed Breast Reconstruction

    PubMed Central

    2017-01-01

    Background: The need to restore both the missing breast volume and breast surface area makes achieving excellent aesthetic outcomes in delayed breast reconstruction especially challenging. Autologous breast reconstruction can be used to achieve both goals. The aim of this study was to identify surgical maneuvers that can optimize aesthetic outcomes in delayed breast reconstruction. Methods: This is a retrospective review of operative and clinical records of all patients who underwent unilateral or bilateral delayed breast reconstruction with autologous tissue between April 2014 and January 2017. Three groups of delayed breast reconstruction patients were identified based on patient characteristics. Results: A total of 26 flaps were successfully performed in 17 patients. Key surgical maneuvers for achieving aesthetically optimal results were identified. A statistically significant difference for volume requirements was identified in cases where a delayed breast reconstruction and a contralateral immediate breast reconstruction were performed simultaneously. Conclusions: Optimal aesthetic results can be achieved with: (1) restoration of breast skin envelope with tissue expansion when possible, (2) optimal positioning of a small skin paddle to be later incorporated entirely into a nipple areola reconstruction when adequate breast skin surface area is present, (3) limiting the reconstructed breast mound to 2 skin tones when large area skin resurfacing is required, (4) increasing breast volume by deepithelializing, not discarding, the inferior mastectomy flap skin, (5) eccentric division of abdominal flaps when an immediate and delayed bilateral breast reconstructions are performed simultaneously; and (6) performing second-stage breast reconstruction revisions and fat grafting. PMID:28894666

  8. Use of response surface methodology in a fed-batch process for optimization of tricarboxylic acid cycle intermediates to achieve high levels of canthaxanthin from Dietzia natronolimnaea HS-1.

    PubMed

    Nasri Nasrabadi, Mohammad Reza; Razavi, Seyed Hadi

    2010-04-01

    In this work, we applied statistical experimental design to a fed-batch process for optimization of tricarboxylic acid cycle (TCA) intermediates in order to achieve high-level production of canthaxanthin from Dietzia natronolimnaea HS-1 cultured in beet molasses. A fractional factorial design (screening test) was first conducted on five TCA cycle intermediates. Out of the five TCA cycle intermediates investigated via screening tests, alpha-ketoglutarate, oxaloacetate and succinate were selected based on their statistically significant (P<0.05) and positive effects on canthaxanthin production. These significant factors were optimized by means of response surface methodology (RSM) in order to achieve high-level production of canthaxanthin. The experimental results of the RSM were fitted with a second-order polynomial equation by means of a multiple regression technique to identify the relationship between canthaxanthin production and the three TCA cycle intermediates. By means of this statistical design under a fed-batch process, the optimum conditions required to achieve the highest level of canthaxanthin (13,172 ± 25 µg l(-1)) were determined as follows: alpha-ketoglutarate, 9.69 mM; oxaloacetate, 8.68 mM; succinate, 8.51 mM. Copyright 2009 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
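
The RSM step described here, fitting a second-order polynomial by multiple regression and reading the optimum off the fitted surface, can be sketched generically. The data below are synthetic stand-ins for three coded factors, not the fermentation measurements from the paper.

```python
import itertools
import numpy as np

# Synthetic response: a quadratic surface with a known optimum plus noise.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(30, 3))            # three coded factors
true_opt = np.array([0.3, -0.2, 0.5])
y = 10 - ((X - true_opt) ** 2).sum(axis=1) + rng.normal(0, 0.01, 30)

def quad_design(X):
    """Full second-order design matrix: intercept, linear, and quadratic/cross terms."""
    n = X.shape[1]
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(n)]
    cols += [X[:, i] * X[:, j]
             for i, j in itertools.combinations_with_replacement(range(n), 2)]
    return np.column_stack(cols)

# Ordinary least squares fit of the second-order polynomial.
beta, *_ = np.linalg.lstsq(quad_design(X), y, rcond=None)

# Recover gradient vector b and Hessian H of the fitted surface,
# then solve H x = -b for the stationary (optimal) point.
n = 3
b = beta[1:1 + n]
H = np.zeros((n, n))
idx = 1 + n
for i, j in itertools.combinations_with_replacement(range(n), 2):
    if i == j:
        H[i, i] = 2 * beta[idx]
    else:
        H[i, j] = H[j, i] = beta[idx]
    idx += 1
x_stat = np.linalg.solve(H, -b)
```

With low noise the stationary point `x_stat` recovers the true optimum of the underlying surface; in a real RSM study one would also check that the Hessian is negative definite (a maximum) and validate the prediction experimentally.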

  9. Mixed-Strategy Chance Constrained Optimal Control

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J.

    2013-01-01

    This paper presents a novel chance constrained optimal control (CCOC) algorithm that chooses a control action probabilistically. A CCOC problem is to find a control input that minimizes the expected cost while guaranteeing that the probability of violating a set of constraints is below a user-specified threshold. We show that a probabilistic control approach, which we refer to as a mixed control strategy, enables us to obtain a cost that is better than what deterministic control strategies can achieve when the CCOC problem is nonconvex. The resulting mixed-strategy CCOC problem turns out to be a convexification of the original nonconvex CCOC problem. Furthermore, we also show that a mixed control strategy only needs to "mix" up to two deterministic control actions in order to achieve optimality. Building upon an iterative dual optimization, the proposed algorithm quickly converges to the optimal mixed control strategy with a user-specified tolerance.
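
The core idea, that randomizing over at most two deterministic actions can beat the best single feasible action when the chance constraint binds, can be shown with a two-action toy example. All numbers below are invented for illustration and are unrelated to the paper's benchmarks.

```python
# Each deterministic action has (cost, probability of constraint violation).
actions = {"safe": (1.0, 0.0), "risky": (0.0, 0.2)}
delta = 0.05   # user-specified allowed violation probability

# Best purely deterministic choice: only "safe" satisfies p <= delta.
det_cost = min(cost for cost, p in actions.values() if p <= delta)

# Mixed strategy: play "risky" with probability q, "safe" otherwise.
# Chance constraint: q * 0.2 <= delta  ->  q <= 0.25.
q = delta / actions["risky"][1]
mixed_cost = (1 - q) * actions["safe"][0] + q * actions["risky"][0]
```

Here `det_cost` is 1.0 while `mixed_cost` is 0.75: the mixture uses up exactly the allowed risk budget to buy expected cost, which is the convexification effect the paper exploits.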

  10. Teachers' Autonomy Support, Autonomy Suppression and Conditional Negative Regard as Predictors of Optimal Learning Experience among High-Achieving Bedouin Students

    ERIC Educational Resources Information Center

    Kaplan, Haya

    2018-01-01

    The study is based on self-determination theory and focuses on the motivation of high-achieving Bedouin students who belong to a hierarchical-collectivist society. The study focuses on the question: What are the relations between teachers' autonomy support and control and an optimal learning experience among students? The study is unique in its…

  11. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    DOE PAGES

    Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; ...

    2013-01-01

    Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.

  12. Achieving Conservation when Opportunity Costs Are High: Optimizing Reserve Design in Alberta's Oil Sands Region

    PubMed Central

    Schneider, Richard R.; Hauer, Grant; Farr, Dan; Adamowicz, W. L.; Boutin, Stan

    2011-01-01

    Recent studies have shown that conservation gains can be achieved when the spatial distributions of biological benefits and economic costs are incorporated in the conservation planning process. Using Alberta, Canada, as a case study we apply these techniques in the context of coarse-filter reserve design. Because targets for ecosystem representation and other coarse-filter design elements are difficult to define objectively we use a trade-off analysis to systematically explore the relationship between conservation targets and economic opportunity costs. We use the Marxan conservation planning software to generate reserve designs at each level of conservation target to ensure that our quantification of conservation and economic outcomes represents the optimal allocation of resources in each case. Opportunity cost is most affected by the ecological representation target and this relationship is nonlinear. Although petroleum resources are present throughout most of Alberta, and include highly valuable oil sands deposits, our analysis indicates that over 30% of public lands could be protected while maintaining access to more than 97% of the value of the region's resources. Our case study demonstrates that optimal resource allocation can be usefully employed to support strategic decision making in the context of land-use planning, even when conservation targets are not well defined. PMID:21858046

  13. Comparison of evolutionary algorithms for LPDA antenna optimization

    NASA Astrophysics Data System (ADS)

    Lazaridis, Pavlos I.; Tziris, Emmanouil N.; Zaharis, Zaharias D.; Xenos, Thomas D.; Cosmas, John P.; Gallion, Philippe B.; Holmes, Violeta; Glover, Ian A.

    2016-08-01

    A novel approach to broadband log-periodic antenna design is presented, where some of the most powerful evolutionary algorithms are applied and compared for the optimal design of wire log-periodic dipole arrays (LPDA) using Numerical Electromagnetics Code. The target is to achieve an optimal antenna design with respect to maximum gain, gain flatness, front-to-rear ratio (F/R) and standing wave ratio. The parameters of the LPDA optimized are the dipole lengths, the spacing between the dipoles, and the dipole wire diameters. The evolutionary algorithms compared are the Differential Evolution (DE), Particle Swarm (PSO), Taguchi, Invasive Weed (IWO), and Adaptive Invasive Weed Optimization (ADIWO). Superior performance is achieved by the IWO (best results) and PSO (fast convergence) algorithms.

  14. Can three-dimensional patient-specific cutting guides be used to achieve optimal correction for high tibial osteotomy? Pilot study.

    PubMed

    Munier, M; Donnez, M; Ollivier, M; Flecher, X; Chabrand, P; Argenson, J-N; Parratte, S

    2017-04-01

    the HKA and 0.96 [0.79-0.99] for the tibial slope. There were no surgical site infections; one patient had a postoperative hematoma that resolved spontaneously. The results of this study showed that use of PSCGs in HTO procedures helps to achieve optimal correction in a safe and reliable manner. IV - Prospective cohort study. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  15. Optimization of a Tube Hydroforming Process

    NASA Astrophysics Data System (ADS)

    Abedrabbo, Nader; Zafar, Naeem; Averill, Ron; Pourboghrat, Farhang; Sidhu, Ranny

    2004-06-01

    An approach is presented to optimize a tube hydroforming process using a Genetic Algorithm (GA) search method. The goal of the study is to maximize formability by identifying the optimal internal hydraulic pressure and feed rate while satisfying the forming limit diagram (FLD). The optimization software HEEDS is used in combination with the nonlinear structural finite element code LS-DYNA to carry out the investigation. In particular, a sub-region of a circular tube blank is formed into a square die. Compared to the best results of a manual optimization procedure, a 55% increase in expansion was achieved when using the pressure and feed profiles identified by the automated optimization procedure.

  16. Pervious concrete mix optimization for sustainable pavement solution

    NASA Astrophysics Data System (ADS)

    Barišić, Ivana; Galić, Mario; Netinger Grubeša, Ivanka

    2017-10-01

In order to fulfill the requirements of sustainable road construction, new materials for pavement construction are investigated with the main goal of preserving natural resources and achieving energy savings. One such sustainable pavement material is pervious concrete, a new solution for low-volume pavements. To accommodate the required strength and porosity (the measure of adequate drainage capability), four mixtures of pervious concrete are investigated, and results of laboratory tests of compressive strength, flexural strength, and porosity are presented. To define the optimal pervious concrete mixture in view of aggregate and financial savings, an optimization model is utilized and optimal mixtures are defined according to the required strength and porosity characteristics. The laboratory results showed that, comparing single-sized aggregate pervious concrete mixtures, coarser aggregate results in increased porosity but reduced strengths. The optimal share of coarse aggregate turned out to be 40.21% and that of fine aggregate 49.79%, achieving the required compressive strength of 25 MPa, flexural strength of 4.31 MPa, and porosity of 21.66%.

  17. Achieving Optimal Privacy in Trust-Aware Social Recommender Systems

    NASA Astrophysics Data System (ADS)

    Dokoohaki, Nima; Kaleli, Cihan; Polat, Huseyin; Matskin, Mihhail

Collaborative filtering (CF) recommenders are subject to numerous shortcomings such as centralized processing, vulnerability to shilling attacks, and, most important of all, privacy. To overcome these obstacles, researchers have proposed utilizing interpersonal trust between users to alleviate many of these crucial shortcomings. Until now, attention has mainly been paid to the strong points of trust-aware recommenders, such as alleviating profile sparsity or improving calculation cost efficiency, while little attention has been paid to the notion of privacy surrounding the disclosure of individual ratings and, most importantly, the protection of the trust computation across the social networks forming the backbone of these systems. To contribute to addressing the problem of privacy in trust-aware recommenders, in this paper we first introduce a framework for enabling privacy-preserving trust-aware recommendation generation. While the trust mechanism aims at elevating the recommender's accuracy, preserving privacy requires that the accuracy of the system be decreased. Since privacy and accuracy are conflicting goals in this context, we show that a Pareto set can be found as an optimal setting for both the privacy-preserving and trust-enabling mechanisms. We show that this Pareto set, when used as the configuration for measuring the accuracy of the base collaborative filtering engine, yields an optimized tradeoff between the conflicting goals of privacy and accuracy. We demonstrate this concept, along with the applicability of our framework, by experimenting with accuracy and privacy factors, and we show through experiment how such an optimal set can be inferred.

  18. Optimal Flow.

    ERIC Educational Resources Information Center

    Norman, Donald A.

    1996-01-01

    Discusses the educational applications of experimental psychologist Mihaly Csikszentmihalyi's theory of peak experience, or optimal flow. Optimal flow refers to the receptive state people achieve when they are engaged in interesting and challenging activity. Includes an insightful critique of multimedia instruction from this perspective. (MJP)

  19. Initial results of the use of prescription order change forms to achieve dose form optimization (consolidation and tablet splitting) of SSRI antidepressants in a state Medicaid program.

    PubMed

    Hamer, Ann M; Hartung, Daniel M; Haxby, Dean G; Ketchum, Kathy L; Pollack, David A

    2006-01-01

One method to reduce drug costs is to promote dose form optimization strategies that take advantage of the flat pricing of some drugs, i.e., the same or nearly the same price for a 100 mg tablet and a 50 mg tablet of the same drug. Dose form optimization includes tablet splitting (taking half of a higher-strength tablet) and dose form consolidation (using 1 higher-strength tablet instead of 2 lower-strength tablets). Dose form optimization can reduce the direct cost of therapy by up to 50% while continuing the same daily dose of the same drug molecule. To determine if voluntary prescription change forms for antidepressant drugs could induce dosing changes and reduce the cost of antidepressant therapy in a Medicaid population. Specific regimens of 4 selective serotonin reuptake inhibitors (SSRIs) (citalopram, escitalopram, paroxetine, and sertraline) were identified for conversion to half tablets or dose optimization. Change forms, which served as valid prescriptions, were faxed to Oregon prescribers in October 2004. The results from both the returned forms and subsequent drug claims data were evaluated using a segmented linear regression. Citalopram claims were excluded from the cost analysis because the drug became available in generic form in October 2004. A total of 1,582 change forms were sent to 556 unique prescribers; 9.2% of the change forms were for dose consolidation and 90.8% were for tablet splitting. Of the 1,118 change forms (70.7%) that were returned, 956 (60.4% of those sent and 85.5% of those returned) authorized a prescription change to a lower-cost dose regimen. The average drug cost per day declined by 14.2%, from $2.26 to $1.94 in the intervention group, versus a 1.6% increase, from $2.52 to $2.56, in the group without dose consolidation or tablet splitting of the 3 SSRIs (sertraline, escitalopram, and immediate-release paroxetine).
Total drug cost for the 3 SSRIs declined by 35.6%, from $333,567 to $214
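The flat-pricing arithmetic behind the "up to 50%" figure can be sketched as follows (the tablet price is illustrative, not a price from the study):

```python
def split_savings(price_per_tablet):
    """Fraction saved by taking half of a higher-strength tablet per day
    instead of one whole lower-strength tablet at the same (flat) price."""
    whole_tablet_cost = price_per_tablet      # one lower-strength tablet per day
    half_tablet_cost = price_per_tablet / 2   # half of an equally priced tablet
    return 1 - half_tablet_cost / whole_tablet_cost

# Under perfectly flat pricing, splitting halves the daily drug cost;
# dose consolidation (one 100 mg tablet replacing two 50 mg tablets)
# gives the same 50% reduction by the same arithmetic.
print(split_savings(2.50))  # 0.5
```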

  20. Appraising Reading Achievement.

    ERIC Educational Resources Information Center

    Ediger, Marlow

    To determine quality sequence in pupil progress, evaluation approaches need to be used which guide the teacher to assist learners to attain optimally. Teachers must use a variety of procedures to appraise student achievement in reading, because no one approach is adequate. Appraisal approaches might include: (1) observation and subsequent…

  1. Primary Mental Abilities and Metropolitan Readiness Tests as Predictors of Achievement in the First Primary Year.

    ERIC Educational Resources Information Center

    University City School District, MO.

The prediction of achievement provides teachers with necessary information to help children attain optimal achievement. If some skill prerequisites to learning which are not fully developed can be identified and strengthened, higher levels of achievement may result. The Metropolitan Readiness Tests (MRT) are routinely given to all University City…

  2. Interleaved segment correction achieves higher improvement factors in using genetic algorithm to optimize light focusing through scattering media

    NASA Astrophysics Data System (ADS)

    Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong

    2017-10-01

    Focusing and imaging through scattering media has been proved possible with high resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase only spatial light modulator (SLM) and thereby the focusing quality can be improved. The correction phase is often found by global searching algorithms, among which Genetic Algorithm (GA) stands out for its parallel optimization process and high performance in noisy environment. However, the convergence of GA slows down gradually with the progression of optimization, causing the improvement factor of optimization to reach a plateau eventually. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor with the same number of iterations comparing with the conventional all segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially among each group of segments. The final correction phase mask is formed by applying correction phases of all interleaved groups together on the SLM. The ISC method has been proved significantly useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that the imaging quality is improved as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demand of dynamic ranges of detection devices. The proposed method holds potential in applications, such as high-resolution imaging in deep tissue.
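A minimal sketch of the interleaved-group idea follows. Simple accept-if-better phase mutations stand in for the paper's full genetic algorithm, and the scattering medium is a hypothetical one-phase-per-segment model, so this illustrates only the group structure, not the reported performance.

```python
import cmath
import math
import random

# Toy interleaved segment correction (ISC): the "medium" adds one fixed
# random phase per SLM segment; focus intensity is
# |sum_j exp(i(scatter_j + correction_j))|^2, maximized when each
# correction phase cancels its scattering phase.
random.seed(0)
N, GROUPS, LEVELS = 64, 4, 16
scatter = [random.uniform(0.0, 2.0 * math.pi) for _ in range(N)]
phases = [0.0] * N  # correction mask applied on the SLM

def intensity(corr):
    field = sum(cmath.exp(1j * (scatter[j] + corr[j])) for j in range(N))
    return abs(field) ** 2

baseline = intensity(phases)

# Optimize each interleaved group in turn: group g owns segments j with
# j % GROUPS == g, and only those segments mutate during its pass.
for g in range(GROUPS):
    segments = [j for j in range(N) if j % GROUPS == g]
    for _ in range(400):  # mutation/selection loop restricted to this group
        trial = list(phases)
        j = random.choice(segments)
        trial[j] = random.randrange(LEVELS) * 2.0 * math.pi / LEVELS
        if intensity(trial) > intensity(phases):
            phases = trial

print(intensity(phases) > baseline)  # True: the improvement factor exceeds 1
```

The final mask is the union of all groups' corrections applied together, mirroring how the ISC method combines the interleaved groups on the SLM.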

  3. Multidisciplinary optimization for engineering systems - Achievements and potential

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1989-01-01

    The currently common sequential design process for engineering systems is likely to lead to suboptimal designs. Recently developed decomposition methods offer an alternative for coming closer to optimum by breaking the large task of system optimization into smaller, concurrently executed and, yet, coupled tasks, identified with engineering disciplines or subsystems. The hierarchic and non-hierarchic decompositions are discussed and illustrated by examples. An organization of a design process centered on the non-hierarchic decomposition is proposed.

  5. Linear antenna array optimization using flower pollination algorithm.

    PubMed

    Saxena, Prerna; Kothari, Ashwin

    2016-01-01

Flower pollination algorithm (FPA) is a new nature-inspired evolutionary algorithm used to solve multi-objective optimization problems. The aim of this paper is to introduce FPA to the electromagnetics and antenna community for the optimization of linear antenna arrays. FPA is applied for the first time to linear arrays to obtain optimized antenna positions that achieve an array pattern with minimum side lobe level along with placement of deep nulls in desired directions. Various design examples illustrate the use of FPA for linear antenna array optimization, and the results are validated by benchmarking against other state-of-the-art, nature-inspired evolutionary algorithms such as particle swarm optimization, ant colony optimization, and cat swarm optimization. The results suggest that in most cases FPA outperforms the other evolutionary algorithms, and at times it yields similar performance.
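The fitness quantities such optimizers score can be sketched for a broadside linear array (a simplified model I am assuming for illustration: isotropic elements, positions in wavelengths, and a uniform half-wavelength array used only as the unoptimized baseline):

```python
import math

# Array factor magnitude at angle theta (degrees from the array axis) for
# element positions d_n given in wavelengths; the side lobe level (SLL) is
# the peak magnitude outside an assumed main-lobe region, relative to the
# broadside peak. An optimizer like FPA would move the positions to push
# this SLL down and to place nulls at specified angles.

def array_factor(positions, theta_deg):
    psi = 2 * math.pi * math.cos(math.radians(theta_deg))
    re = sum(math.cos(psi * d) for d in positions)
    im = sum(math.sin(psi * d) for d in positions)
    return math.hypot(re, im)

def sidelobe_level_db(positions, mainlobe_halfwidth_deg=10.0):
    peak = array_factor(positions, 90.0)  # broadside main beam
    side = max(array_factor(positions, t)
               for t in range(0, 181)
               if abs(t - 90.0) > mainlobe_halfwidth_deg)
    return 20 * math.log10(side / peak)

uniform = [0.5 * n for n in range(10)]  # 10 elements, half-wavelength spacing
print(round(sidelobe_level_db(uniform), 1))  # near the classic -13 dB level
```

A lower (more negative) SLL is better, which is why the abstract lists side lobe level alongside gain and F/R in the optimization targets.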

  6. Risk modelling in portfolio optimization

    NASA Astrophysics Data System (ADS)

    Lam, W. H.; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi

    2013-09-01

Risk management is very important in portfolio optimization. The mean-variance model has been used in portfolio optimization to minimize investment risk. The objective of the mean-variance model is to minimize portfolio risk while achieving a target rate of return, with variance used as the risk measure. The purpose of this study is to compare the portfolio composition as well as the performance of the optimal portfolio of the mean-variance model with those of an equally weighted portfolio, in which the proportions invested in each asset are equal. The results show that the compositions of the mean-variance optimal portfolio and the equally weighted portfolio are different. Moreover, the mean-variance optimal portfolio performs better, yielding a higher performance ratio than the equally weighted portfolio.
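A two-asset toy comparison shows the mechanics (the return, risk, and correlation numbers below are made up, not the study's data): the minimum-variance weights beat the naive 50/50 split on a return-to-risk ratio.

```python
# For two assets with volatilities s1, s2 and correlation rho, the
# minimum-variance weight on asset 1 has the closed form
#   w1 = (s2^2 - c) / (s1^2 + s2^2 - 2c),  where c = rho * s1 * s2.

def min_variance_weight(s1, s2, rho):
    c = rho * s1 * s2
    return (s2 ** 2 - c) / (s1 ** 2 + s2 ** 2 - 2 * c)

def portfolio_stats(w1, mu1, mu2, s1, s2, rho):
    mu = w1 * mu1 + (1 - w1) * mu2
    var = (w1 * s1) ** 2 + ((1 - w1) * s2) ** 2 + 2 * w1 * (1 - w1) * rho * s1 * s2
    return mu, var ** 0.5

mu1, mu2, s1, s2, rho = 0.08, 0.05, 0.20, 0.10, 0.2  # illustrative inputs
w_opt = min_variance_weight(s1, s2, rho)
mu_o, sd_o = portfolio_stats(w_opt, mu1, mu2, s1, s2, rho)
mu_e, sd_e = portfolio_stats(0.5, mu1, mu2, s1, s2, rho)

# The optimal portfolio holds ~14% of the riskier asset, not 50%, and its
# return/risk ratio is higher than the equally weighted portfolio's.
print(mu_o / sd_o > mu_e / sd_e)  # True
```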

  7. Sampling with poling-based flux balance analysis: optimal versus sub-optimal flux space analysis of Actinobacillus succinogenes.

    PubMed

    Binns, Michael; de Atauri, Pedro; Vlysidis, Anestis; Cascante, Marta; Theodoropoulos, Constantinos

    2015-02-18

    Flux balance analysis is traditionally implemented to identify the maximum theoretical flux for some specified reaction and a single distribution of flux values for all the reactions present which achieve this maximum value. However it is well known that the uncertainty in reaction networks due to branches, cycles and experimental errors results in a large number of combinations of internal reaction fluxes which can achieve the same optimal flux value. In this work, we have modified the applied linear objective of flux balance analysis to include a poling penalty function, which pushes each new set of reaction fluxes away from previous solutions generated. Repeated poling-based flux balance analysis generates a sample of different solutions (a characteristic set), which represents all the possible functionality of the reaction network. Compared to existing sampling methods, for the purpose of generating a relatively "small" characteristic set, our new method is shown to obtain a higher coverage than competing methods under most conditions. The influence of the linear objective function on the sampling (the linear bias) constrains optimisation results to a subspace of optimal solutions all producing the same maximal fluxes. Visualisation of reaction fluxes plotted against each other in 2 dimensions with and without the linear bias indicates the existence of correlations between fluxes. This method of sampling is applied to the organism Actinobacillus succinogenes for the production of succinic acid from glycerol. A new method of sampling for the generation of different flux distributions (sets of individual fluxes satisfying constraints on the steady-state mass balances of intermediates) has been developed using a relatively simple modification of flux balance analysis to include a poling penalty function inside the resulting optimisation objective function. 
This new methodology can achieve a high coverage of the possible flux space and can be used with and without
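The poling idea can be sketched on a toy branched network. This is a loose illustration under stated assumptions: a four-reaction network invented for the example (not the A. succinogenes model) and a simplified linear poling term that pushes each flux away from its previous value, rather than the paper's exact penalty function.

```python
from scipy.optimize import linprog

# Toy network: A --v1--> split --v2 or v3--> B --v4--> out
# Steady state: v1 - v2 - v3 = 0 and v2 + v3 - v4 = 0, with 0 <= v <= 10.
# Any split of flux between v2 and v3 achieves the same maximal v4, which is
# exactly the degeneracy that poling-based sampling is meant to explore.
A_eq = [[1, -1, -1, 0],   # metabolite A balance
        [0, 1, 1, -1]]    # metabolite B balance
b_eq = [0, 0]
bounds = [(0, 10)] * 4

# Step 1: classic FBA -- maximize the objective flux v4.
fba = linprog(c=[0, 0, 0, -1], A_eq=A_eq, b_eq=b_eq, bounds=bounds)
v_max = -fba.fun

# Step 2: hold v4 at its optimum and pole away from each previous solution.
samples, prev = [], fba.x
for _ in range(3):
    # Linear poling term: penalize fluxes that were high, reward those low.
    pole = [(1 if prev[j] > 5 else -1) for j in range(4)]
    pole[3] = 0  # leave the objective flux itself alone
    alt = linprog(c=pole, A_eq=A_eq + [[0, 0, 0, 1]], b_eq=b_eq + [v_max],
                  bounds=bounds)
    samples.append(alt.x)
    prev = alt.x

print(v_max)                                # 10.0 in every sample
print([round(s[1], 1) for s in samples])    # the v2/v3 split flips between optima
```

Each sample satisfies the same steady-state balances and the same optimal objective, but with a different internal flux distribution, which is the characteristic-set behavior the abstract describes.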

  8. Improving scanner wafer alignment performance by target optimization

    NASA Astrophysics Data System (ADS)

    Leray, Philippe; Jehoul, Christiane; Socha, Robert; Menchtchikov, Boris; Raghunathan, Sudhar; Kent, Eric; Schoonewelle, Hielke; Tinnemans, Patrick; Tuffy, Paul; Belen, Jun; Wise, Rich

    2016-03-01

    In the process nodes of 10nm and below, the patterning complexity along with the processing and materials required has resulted in a need to optimize alignment targets in order to achieve the required precision, accuracy and throughput performance. Recent industry publications on the metrology target optimization process have shown a move from the expensive and time consuming empirical methodologies, towards a faster computational approach. ASML's Design for Control (D4C) application, which is currently used to optimize YieldStar diffraction based overlay (DBO) metrology targets, has been extended to support the optimization of scanner wafer alignment targets. This allows the necessary process information and design methodology, used for DBO target designs, to be leveraged for the optimization of alignment targets. In this paper, we show how we applied this computational approach to wafer alignment target design. We verify the correlation between predictions and measurements for the key alignment performance metrics and finally show the potential alignment and overlay performance improvements that an optimized alignment target could achieve.

  9. Optimal correction and design parameter search by modern methods of rigorous global optimization

    NASA Astrophysics Data System (ADS)

    Makino, K.; Berz, M.

    2011-07-01

Frequently the design of schemes for correction of aberrations or the determination of possible operating ranges for beamlines and cells in synchrotrons exhibits multitudes of possibilities for their correction, usually appearing in disconnected regions of parameter space which cannot be directly qualified by analytical means. In such cases, an abundance of optimization runs is frequently carried out, each of which determines a local minimum depending on the specific chosen initial conditions. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain suitable parameters and local searches by varying other parameters. However, in a formal sense this problem can be viewed as a global optimization problem, i.e. the determination of all solutions within a certain range of parameters that lead to a specific optimum. For example, it may be of interest to find all possible settings of multiple quadrupoles that can achieve imaging; or to find ahead of time all possible settings that achieve a particular tune; or to find all possible manners to adjust nonlinear parameters to achieve correction of high order aberrations. These tasks can easily be phrased in terms of such an optimization problem; but while mathematically this formulation is often straightforward, it has been common belief that it is of limited practical value since the resulting optimization problem cannot usually be solved. However, recent significant advances in modern methods of rigorous global optimization make these methods feasible for optics design for the first time. The key ideas of the method lie in an interplay of rigorous local underestimators of the objective functions, and by using the underestimators to rigorously iteratively eliminate regions that lie above already known upper bounds of the minima, in what is commonly known as a branch-and-bound approach. Recent enhancements of the Differential Algebraic methods used in particle

  10. Optimal trajectories for an aerospace plane. Part 1: Formulation, results, and analysis

    NASA Technical Reports Server (NTRS)

    Miele, Angelo; Lee, W. Y.; Wu, G. D.

    1990-01-01

    The optimization of the trajectories of an aerospace plane is discussed. This is a hypervelocity vehicle capable of achieving orbital speed, while taking off horizontally. The vehicle is propelled by four types of engines: turbojet engines for flight at subsonic speeds/low supersonic speeds; ramjet engines for flight at moderate supersonic speeds/low hypersonic speeds; scramjet engines for flight at hypersonic speeds; and rocket engines for flight at near-orbital speeds. A single-stage-to-orbit (SSTO) configuration is considered, and the transition from low supersonic speeds to orbital speeds is studied under the following assumptions: the turbojet portion of the trajectory has been completed; the aerospace plane is controlled via the angle of attack and the power setting; the aerodynamic model is the generic hypersonic aerodynamics model example (GHAME). Concerning the engine model, three options are considered: (EM1), a ramjet/scramjet combination in which the scramjet specific impulse tends to a nearly-constant value at large Mach numbers; (EM2), a ramjet/scramjet combination in which the scramjet specific impulse decreases monotonically at large Mach numbers; and (EM3), a ramjet/scramjet/rocket combination in which, owing to stagnation temperature limitations, the scramjet operates only at M approx. less than 15; at higher Mach numbers, the scramjet is shut off and the aerospace plane is driven only by the rocket engines. Under the above assumptions, four optimization problems are solved using the sequential gradient-restoration algorithm for optimal control problems: (P1) minimization of the weight of fuel consumed; (P2) minimization of the peak dynamic pressure; (P3) minimization of the peak heating rate; and (P4) minimization of the peak tangential acceleration.

  11. Direct handling of equality constraints in multilevel optimization

    NASA Technical Reports Server (NTRS)

    Renaud, John E.; Gabriele, Gary A.

    1990-01-01

In recent years, several hierarchic multilevel optimization algorithms have been proposed and implemented in design studies. Equality constraints are often imposed between levels in these multilevel optimizations to maintain system and subsystem variable continuity. Equality constraints of this nature will be referred to as coupling equality constraints. In many implementation studies these coupling equality constraints have been handled indirectly. This indirect handling has been accomplished using the coupling equality constraints' explicit functional relations to eliminate design variables (generally at the subsystem level), with the resulting optimization taking place in a reduced design space. In one multilevel optimization study where the coupling equality constraints were handled directly, the researchers encountered numerical difficulties which prevented their multilevel optimization from reaching the same minimum found in conventional single level solutions. The researchers did not explain the exact nature of the numerical difficulties other than to associate them with the direct handling of the coupling equality constraints. In the present work, the coupling equality constraints are handled directly by employing the Generalized Reduced Gradient (GRG) method as the optimizer within a multilevel linear decomposition scheme based on the Sobieski hierarchic algorithm. Two engineering design examples are solved using this approach. The results show that the direct handling of coupling equality constraints in a multilevel optimization does not introduce any problems when the GRG method is employed as the internal optimizer. The optimums achieved are comparable to those achieved in single level solutions and in multilevel studies where the equality constraints have been handled indirectly.
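The contrast between direct and indirect handling of a coupling equality constraint can be sketched on a toy problem (my own illustrative example, not one of the paper's engineering cases, and with SLSQP standing in for GRG): minimize x² + y² subject to x + y = 2, whose optimum is (1, 1).

```python
from scipy.optimize import minimize, minimize_scalar

# Direct handling: hand the optimizer the coupling equality constraint as-is.
direct = minimize(lambda v: v[0] ** 2 + v[1] ** 2, x0=[0.0, 0.0],
                  constraints=[{"type": "eq", "fun": lambda v: v[0] + v[1] - 2}],
                  method="SLSQP")

# Indirect handling: use the constraint's explicit functional relation to
# eliminate y = 2 - x, then optimize in the reduced (one-dimensional) space.
reduced = minimize_scalar(lambda x: x ** 2 + (2 - x) ** 2)

print(round(direct.fun, 6), round(reduced.fun, 6))  # both reach 2.0
```

On a well-behaved problem both routes reach the same minimum; the paper's point is that the direct route also behaves well in the multilevel setting when a reduced-gradient optimizer is used.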

  12. Parallel Aircraft Trajectory Optimization with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Falck, Robert D.; Gray, Justin S.; Naylor, Bret

    2016-01-01

Trajectory optimization is an integral component for the design of aerospace vehicles, but emerging aircraft technologies have introduced new demands on trajectory analysis that current tools are not well suited to address. Designing aircraft with technologies such as hybrid electric propulsion and morphing wings requires consideration of the operational behavior as well as the physical design characteristics of the aircraft. The addition of operational variables can dramatically increase the number of design variables, which motivates the use of gradient based optimization with analytic derivatives to solve the larger optimization problems. In this work we develop an aircraft trajectory analysis tool using a Legendre-Gauss-Lobatto based collocation scheme, providing analytic derivatives via the OpenMDAO multidisciplinary optimization framework. This collocation method uses an implicit time integration scheme that provides a high degree of sparsity and thus several potential options for parallelization. The performance of the new implementation was investigated via a series of single and multi-trajectory optimizations using a combination of parallel computing and constraint aggregation. The computational performance results show that in order to take full advantage of the sparsity in the problem it is vital to parallelize both the non-linear analysis evaluations and the derivative computations themselves. The constraint aggregation results showed a significant numerical challenge due to difficulty in achieving tight convergence tolerances. Overall, the results demonstrate the value of applying analytic derivatives to trajectory optimization problems and lay the foundation for future application of this collocation based method to the design of aircraft where operational scheduling of technologies is key to achieving good performance.

  13. A study of optical design and optimization of laser optics

    NASA Astrophysics Data System (ADS)

    Tsai, C.-M.; Fang, Yi-Chin

    2013-09-01

This paper proposes a study of the optical design of laser beam shaping optics with an aspheric surface, applying a genetic algorithm (GA) to find the optimal results. For an Nd:YAG 355 nm waveband flat-top laser optical system, this study employed the LightTools LDS (least damped square) method together with the GA, an artificial intelligence optimization method, to determine the optimal aspheric coefficients and obtain the optimal solution. Aspheric lenses were applied with the GA to flatten collimated laser beams in order to achieve the best results.

  14. Optimized distributed systems achieve significant performance improvement on sorted merging of massive VCF files.

    PubMed

    Sun, Xiaobo; Gao, Jingjing; Jin, Peng; Eng, Celeste; Burchard, Esteban G; Beaty, Terri H; Ruczinski, Ingo; Mathias, Rasika A; Barnes, Kathleen; Wang, Fusheng; Qin, Zhaohui S

    2018-06-01

    Sorted merging of genomic data is a common data operation necessary in many sequencing-based studies. It involves sorting and merging genomic data from different subjects by their genomic locations. In particular, merging a large number of variant call format (VCF) files is frequently required in large-scale whole-genome sequencing or whole-exome sequencing projects. Traditional single-machine based methods become increasingly inefficient when processing large numbers of files due to the excessive computation time and Input/Output bottleneck. Distributed systems and more recent cloud-based systems offer an attractive solution. However, carefully designed and optimized workflow patterns and execution plans (schemas) are required to take full advantage of the increased computing power while overcoming bottlenecks to achieve high performance. In this study, we custom-design optimized schemas for three Apache big data platforms, Hadoop (MapReduce), HBase, and Spark, to perform sorted merging of a large number of VCF files. These schemas all adopt the divide-and-conquer strategy to split the merging job into sequential phases/stages consisting of subtasks that are conquered in an ordered, parallel, and bottleneck-free way. In two illustrating examples, we test the performance of our schemas on merging multiple VCF files into either a single TPED or a single VCF file, which are benchmarked with the traditional single/parallel multiway-merge methods, message passing interface (MPI)-based high-performance computing (HPC) implementation, and the popular VCFTools. Our experiments suggest all three schemas either deliver a significant improvement in efficiency or render much better strong and weak scalabilities over traditional methods. Our findings provide generalized scalable schemas for performing sorted merging on genetics and genomics data using these Apache distributed systems.
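On a single machine, the core k-way sorted merge that these schemas distribute can be sketched with `heapq.merge` (a minimal analogue only; real VCF merging also handles headers, sample columns, and multi-allelic records, all omitted here):

```python
import heapq

# Each "file" is a stream of (chromosome, position, payload) records already
# sorted by genomic location, as VCF files are; heapq.merge performs the
# k-way merge lazily, never holding more than one record per stream in memory.
file_a = [(1, 100, "a1"), (1, 250, "a2"), (2, 30, "a3")]
file_b = [(1, 120, "b1"), (2, 15, "b2")]
file_c = [(1, 250, "c1"), (3, 5, "c2")]

merged = list(heapq.merge(file_a, file_b, file_c, key=lambda r: (r[0], r[1])))
print([r[2] for r in merged])  # ['a1', 'b1', 'a2', 'c1', 'b2', 'a3', 'c2']
```

The distributed schemas in the paper apply a divide-and-conquer version of the same operation, partitioning the genome so that many such merges run in parallel without a single-machine I/O bottleneck.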

  15. Incorporating C60 as Nucleation Sites Optimizing PbI2 Films To Achieve Perovskite Solar Cells Showing Excellent Efficiency and Stability via Vapor-Assisted Deposition Method.

    PubMed

    Chen, Hai-Bin; Ding, Xi-Hong; Pan, Xu; Hayat, Tasawar; Alsaedi, Ahmed; Ding, Yong; Dai, Song-Yuan

    2018-01-24

To achieve high-quality perovskite solar cells (PSCs), the morphology and carrier transportation of perovskite films need to be optimized. Herein, C60 is employed as nucleation sites in the PbI2 precursor solution to optimize the morphology of perovskite films via a vapor-assisted deposition process. Accompanying the homogeneous nucleation of PbI2, the incorporation of C60 as heterogeneous nucleation sites can lower the nucleation free energy of PbI2, which facilitates the diffusion and reaction between PbI2 and the organic source. Meanwhile, C60 could enhance carrier transportation and reduce charge recombination in the perovskite layer due to its high electron mobility and conductivity. In addition, the perovskite grain sizes become larger with C60 optimization, which can reduce the grain boundaries and voids in the perovskite and prevent corrosion caused by moisture. As a result, we obtain PSCs with a power conversion efficiency (PCE) of 18.33% and excellent stability. The PCEs of unsealed devices drop less than 10% in a dehumidification cabinet after 100 days and remain at 75% of the initial PCE during exposure to ambient air (humidity > 60% RH, temperature > 30 °C) for 30 days.

  16. Dispositional Optimism and Perceived Risk Interact to Predict Intentions to Learn Genome Sequencing Results

    PubMed Central

    Taber, Jennifer M.; Klein, William M. P.; Ferrer, Rebecca A.; Lewis, Katie L.; Biesecker, Leslie G.; Biesecker, Barbara B.

    2015-01-01

Objective: Dispositional optimism and risk perceptions are each associated with health-related behaviors and decisions and other outcomes, but little research has examined how these constructs interact, particularly in consequential health contexts. The predictive validity of risk perceptions for health-related information seeking and intentions may be improved by examining dispositional optimism as a moderator, and by testing alternate types of risk perceptions, such as comparative and experiential risk. Method: Participants (n = 496) had their genomes sequenced as part of a National Institutes of Health pilot cohort study (ClinSeq®). Participants completed a cross-sectional baseline survey of various types of risk perceptions and intentions to learn genome sequencing results for differing disease risks (e.g., medically actionable, nonmedically actionable, carrier status) and to use this information to change their lifestyle/health behaviors. Results: Risk perceptions (absolute, comparative, and experiential) were largely unassociated with intentions to learn sequencing results. Dispositional optimism and comparative risk perceptions interacted, however, such that individuals higher in optimism reported greater intentions to learn all 3 types of sequencing results when comparative risk was perceived to be higher than when it was perceived to be lower. This interaction was inconsistent for experiential risk and absent for absolute risk. Independent of perceived risk, participants high in dispositional optimism reported greater interest in learning risks for nonmedically actionable disease and carrier status, and greater intentions to use genome information to change their lifestyle/health behaviors. Conclusions: The relationship between risk perceptions and intentions may depend on how risk perceptions are assessed and on degree of optimism. PMID:25313897

  17. Framework for computationally efficient optimal irrigation scheduling using ant colony optimization

    USDA-ARS?s Scientific Manuscript database

    A general optimization framework is introduced with the overall goal of reducing search space size and increasing the computational efficiency of evolutionary algorithm application for optimal irrigation scheduling. The framework achieves this goal by representing the problem in the form of a decisi...

  18. Analysis and optimization of hybrid electric vehicle thermal management systems

    NASA Astrophysics Data System (ADS)

    Hamut, H. S.; Dincer, I.; Naterer, G. F.

    2014-02-01

In this study, the thermal management system of a hybrid electric vehicle is optimized using single- and multi-objective evolutionary algorithms in order to maximize the exergy efficiency and minimize the cost and environmental impact of the system. The objective functions are defined and decision variables, along with their respective system constraints, are selected for the analysis. In the multi-objective optimization, a Pareto frontier is obtained and a single desirable optimal solution is selected based on the LINMAP decision-making process. The corresponding solutions are compared against the exergetic, exergoeconomic, and exergoenvironmental single-objective optimization results. The results show that the exergy efficiency, total cost rate, and environmental impact rate for the baseline system are 0.29, ¢28 h⁻¹, and 77.3 mPts h⁻¹, respectively. Moreover, based on the exergoeconomic optimization, 14% higher exergy efficiency and 5% lower cost can be achieved, compared to baseline parameters, at the expense of a 14% increase in the environmental impact. Based on the exergoenvironmental optimization, a 13% higher exergy efficiency and 5% lower environmental impact can be achieved at the expense of a 27% increase in the total cost.

  19. Development of an Optimization Methodology for the Aluminum Alloy Wheel Casting Process

    NASA Astrophysics Data System (ADS)

    Duan, Jianglan; Reilly, Carl; Maijer, Daan M.; Cockcroft, Steve L.; Phillion, Andre B.

    2015-08-01

An optimization methodology has been developed for the aluminum alloy wheel casting process. The methodology is focused on improving the timing of cooling processes in a die to achieve improved casting quality. It utilizes (1) a casting process model developed within the commercial finite element package ABAQUS™ (a trademark of Dassault Systèmes); (2) a Python-based results extraction procedure; and (3) a numerical optimization module from the open-source Python library SciPy. To achieve optimal casting quality, a set of constraints has been defined to ensure directional solidification, and an objective function, based on the solidification cooling rates, has been defined to either maximize, or target a specific, cooling rate. The methodology has been applied to a series of casting and die geometries with different cooling system configurations, including a 2-D axisymmetric wheel and die assembly generated from a full-scale prototype wheel. The results show that, with properly defined constraint and objective functions, solidification conditions can be improved and optimal cooling conditions can be achieved, leading to improvements in process productivity and product quality.

  20. Assessment Results and Student Achievement; a Correlation Study Regarding Ability Grouping

    ERIC Educational Resources Information Center

    Slonaker, Richard V.

    2013-01-01

    School leaders face increased pressure to identify instructional and administrative practices that increase student achievement. However, achievement gaps persist between disadvantaged and non-disadvantaged student groups. This study highlighted relationships between ability grouping and academic achievement in a suburban school district.…

  1. Application of Particle Swarm Optimization Algorithm for Optimizing ANN Model in Recognizing Ripeness of Citrus

    NASA Astrophysics Data System (ADS)

    Diyana Rosli, Anis; Adenan, Nur Sabrina; Hashim, Hadzli; Ezan Abdullah, Noor; Sulaiman, Suhaimi; Baharudin, Rohaiza

    2018-03-01

This paper presents findings from the application of the Particle Swarm Optimization (PSO) algorithm to optimizing an Artificial Neural Network (ANN) that categorizes citrus suhuensis fruit as ripe or unripe. The algorithm adjusts the network connection weights, adapting their values during training to achieve the best results at the output. Initially, the skin of the citrus suhuensis fruit is measured using an optically non-destructive method via spectrometer. The spectrometer transmits VIS (visible spectrum) photonic light radiation to the surface (skin) of the sample. The light reflected from the sample's surface is received and measured by the same spectrometer in terms of reflectance percentage over the VIS range. These measured data are used to train and test the best optimized ANN model. Accuracy is assessed using receiver operating characteristic (ROC) performance. The outcomes of this investigation show that the optimized model achieves an accuracy of 70.5%, with a sensitivity and specificity of 60.1% and 80.0%, respectively.
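A minimal global-best PSO loop of the kind used to tune network weights can be sketched as follows. The inertia and acceleration coefficients (w = 0.7, c1 = c2 = 1.5) and the sphere test objective are illustrative assumptions, not values from the paper:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimizer (global-best variant).

    Each particle's velocity blends its momentum, a pull toward its
    personal best, and a pull toward the swarm's global best.
    """
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # assumed, commonly used coefficients
    pos = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Usage: minimize the sphere function; the optimum is at the origin.
best, best_val = pso_minimize(lambda x: sum(v * v for v in x), dim=3)
```

In the ANN-training setting, `f` would be the network's classification error as a function of its flattened weight vector.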

  2. Examining Cultural Capital and Student Achievement: Results of a Meta-Analytic Review

    ERIC Educational Resources Information Center

    Tan, Cheng Yong

    2017-01-01

    This meta-analysis summarized the relationships between cultural capital and student achievement (155 effect sizes involving 685,393 K-12 students) published in education journals between 1981 and 2015. Results showed a small-to-medium overall mean effect size, and larger individual effect sizes for parental education and parental expectations…

  3. Strategy to Achieve Highly Porous/Biocompatible Macroscale Cell Blocks, Using a Collagen/Genipin-bioink and an Optimal 3D Printing Process.

    PubMed

    Kim, Yong Bok; Lee, Hyeongjin; Kim, Geun Hyung

    2016-11-30

Recently, three-dimensional (3D) bioprinting processes for obtaining cell-laden structures have been widely applied because of their ability to fabricate biomimetic complex structures with and without embedded cells. To successfully obtain a cell-laden porous block, the cell-delivering vehicle, the bioink, is one of the most significant factors. Until now, various biocompatible hydrogels (synthetic and natural biopolymers) have been utilized in the cell-printing process, but a bioink satisfying both the biocompatibility and printability requirements needed to achieve a porous structure with reasonable mechanical strength has not been reported. Here, we propose a printing strategy with optimal conditions, including a safe cross-linking procedure, for obtaining a 3D porous cell block composed of a biocompatible collagen-bioink and genipin, a cross-linking agent. To obtain the optimal processing conditions, we modified the 3D printing machine and selected an optimal cross-linking condition (∼1 mM and 1 h) for the genipin solution. To show the feasibility of the process, 3D pore-interconnected cell-laden constructs were manufactured using osteoblast-like cells (MG63) and human adipose stem cells (hASCs). Under these processing conditions, a macroscale 3D collagen-based cell block of 21 × 21 × 12 mm³ with over 95% cell viability was obtained. In vitro biological testing of the cell-laden 3D porous structure showed that the embedded cells were sufficiently viable and their proliferation was significantly higher; the cells also exhibited increased osteogenic activities compared to the conventional alginate-based bioink (control). The results indicate that the fabrication process using the collagen-bioink would be an innovative platform for designing highly biocompatible and mechanically stable cell blocks.

  4. Optimized distributed systems achieve significant performance improvement on sorted merging of massive VCF files

    PubMed Central

    Gao, Jingjing; Jin, Peng; Eng, Celeste; Burchard, Esteban G; Beaty, Terri H; Ruczinski, Ingo; Mathias, Rasika A; Barnes, Kathleen; Wang, Fusheng

    2018-01-01

Background: Sorted merging of genomic data is a common data operation necessary in many sequencing-based studies. It involves sorting and merging genomic data from different subjects by their genomic locations. In particular, merging a large number of variant call format (VCF) files is frequently required in large-scale whole-genome sequencing or whole-exome sequencing projects. Traditional single-machine based methods become increasingly inefficient when processing large numbers of files due to the excessive computation time and Input/Output bottleneck. Distributed systems and more recent cloud-based systems offer an attractive solution. However, carefully designed and optimized workflow patterns and execution plans (schemas) are required to take full advantage of the increased computing power while overcoming bottlenecks to achieve high performance. Findings: In this study, we custom-design optimized schemas for three Apache big data platforms, Hadoop (MapReduce), HBase, and Spark, to perform sorted merging of a large number of VCF files. These schemas all adopt the divide-and-conquer strategy to split the merging job into sequential phases/stages consisting of subtasks that are conquered in an ordered, parallel, and bottleneck-free way. In two illustrating examples, we test the performance of our schemas on merging multiple VCF files into either a single TPED or a single VCF file, which are benchmarked with the traditional single/parallel multiway-merge methods, message passing interface (MPI)-based high-performance computing (HPC) implementation, and the popular VCFTools. Conclusions: Our experiments suggest all three schemas either deliver a significant improvement in efficiency or render much better strong and weak scalabilities over traditional methods. Our findings provide generalized scalable schemas for performing sorted merging on genetics and genomics data using these Apache distributed systems. PMID:29762754

  5. A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization.

    PubMed

    Liu, Qingshan; Guo, Zhishan; Wang, Jun

    2012-02-01

    In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems subject to linear equality and bound constraints. Compared with the existing neural networks for optimization (e.g., the projection neural networks), the proposed neural network is capable of solving more general pseudoconvex optimization problems with equality and bound constraints. Moreover, it is capable of solving constrained fractional programming problems as a special case. The convergence of the state variables of the proposed neural network to achieve solution optimality is guaranteed as long as the designed parameters in the model are larger than the derived lower bounds. Numerical examples with simulation results illustrate the effectiveness and characteristics of the proposed neural network. In addition, an application for dynamic portfolio optimization is discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
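For intuition, the classic projection neural network that this work generalizes can be sketched as Euler-discretized continuous-time dynamics on a box-constrained problem. This sketch covers only bound constraints with a convex objective, not the equality-constrained pseudoconvex case the paper addresses, and all numerical settings are illustrative:

```python
def projection_network_minimize(grad_f, lo, hi, x0, dt=0.01, steps=5000):
    """Euler-discretized projection network dynamics:
    dx/dt = -x + P(x - grad_f(x)), where P clips to the box [lo, hi].
    Equilibria of these dynamics are the box-constrained minimizers.
    """
    clip = lambda v, a, b: min(max(v, a), b)
    x = list(x0)
    for _ in range(steps):
        g = grad_f(x)
        x = [xi + dt * (-xi + clip(xi - gi, l, h))
             for xi, gi, l, h in zip(x, g, lo, hi)]
    return x

# Minimize (x - 2)^2 + (y + 1)^2 over the box [0, 1] x [0, 1];
# the constrained minimizer is (1, 0).
sol = projection_network_minimize(
    lambda x: [2 * (x[0] - 2), 2 * (x[1] + 1)],
    lo=[0, 0], hi=[1, 1], x0=[0.5, 0.5])
```

The paper's contribution is that convergence is guaranteed for the broader pseudoconvex class, provided the design parameters exceed derived lower bounds.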

  6. Application of Boiler Op for combustion optimization at PEPCO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maines, P.; Williams, S.; Levy, E.

    1997-09-01

Title IV requires the reduction of NOx at all stations within the PEPCO system. To assist PEPCO plant personnel in achieving low heat rates while meeting NOx targets, Lehigh University's Energy Research Center and PEPCO developed a new combustion optimization software package called Boiler Op. The Boiler Op code contains an expert system, neural networks, and an optimization algorithm. The expert system guides the plant engineer through a series of parametric boiler tests, required for the development of a comprehensive boiler database. The data are then analyzed by the neural networks and optimization algorithm to provide the boiler control settings that yield the best possible heat rate at a target NOx level or produce minimum NOx. Boiler Op has been used at both Potomac River and Morgantown Stations to help PEPCO engineers optimize combustion. With the use of Boiler Op, Morgantown Station operates under low NOx restrictions and continues to achieve record heat rate values, similar to pre-retrofit conditions. Potomac River Station achieves the regulatory NOx limit through the use of Boiler Op recommended control settings and without low-NOx burners. Importantly, software like Boiler Op cannot be used alone; its application must be in concert with human intelligence to ensure unit safety, reliability, and accurate data collection.

  7. Peak-Seeking Optimization of Trim for Reduced Fuel Consumption: Flight-Test Results

    NASA Technical Reports Server (NTRS)

    Brown, Nelson Andrew; Schaefer, Jacob Robert

    2013-01-01

    A peak-seeking control algorithm for real-time trim optimization for reduced fuel consumption has been developed by researchers at the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center to address the goals of the NASA Environmentally Responsible Aviation project to reduce fuel burn and emissions. The peak-seeking control algorithm is based on a steepest-descent algorithm using a time-varying Kalman filter to estimate the gradient of a performance function of fuel flow versus control surface positions. In real-time operation, deflections of symmetric ailerons, trailing-edge flaps, and leading-edge flaps of an F/A-18 airplane (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) are used for optimization of fuel flow. Results from six research flights are presented herein. The optimization algorithm found a trim configuration that required approximately 3 percent less fuel flow than the baseline trim at the same flight condition. The algorithm consistently rediscovered the solution from several initial conditions. These results show that the algorithm has good performance in a relevant environment.
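The steepest-descent idea can be illustrated with a toy sketch that substitutes a plain finite-difference estimate for the time-varying Kalman filter gradient estimator used in flight. The quadratic fuel-flow model and all step sizes below are hypothetical:

```python
def peak_seek(fuel_flow, x0, step=0.05, lr=0.5, iters=50, eps=1e-3):
    """Steepest-descent peak-seeking sketch.

    `fuel_flow` maps control-surface positions to a fuel-flow
    measurement; each surface is perturbed to estimate the gradient
    by finite differences (the flight algorithm instead estimates the
    gradient with a time-varying Kalman filter), then the trim is
    stepped downhill toward minimum fuel flow.
    """
    x = list(x0)
    for _ in range(iters):
        base = fuel_flow(x)
        grad = []
        for d in range(len(x)):
            xp = x[:]
            xp[d] += step  # perturb one surface at a time
            grad.append((fuel_flow(xp) - base) / step)
        x = [xi - lr * g for xi, g in zip(x, grad)]
        if sum(g * g for g in grad) < eps ** 2:
            break  # gradient is flat: a (local) minimum-fuel trim
    return x

# Usage on a toy quadratic fuel-flow model with minimum at (0.2, -0.1),
# e.g. x = (symmetric aileron, trailing-edge flap) deflections.
model = lambda x: 1.0 + (x[0] - 0.2) ** 2 + (x[1] + 0.1) ** 2
trim = peak_seek(model, [0.0, 0.0])
```

In the flight experiments, the same descent logic operated on noisy in-flight fuel-flow measurements, which is why a filter-based gradient estimate was needed rather than raw finite differences.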

  8. Peak-Seeking Optimization of Trim for Reduced Fuel Consumption: Flight-test Results

    NASA Technical Reports Server (NTRS)

    Brown, Nelson Andrew; Schaefer, Jacob Robert

    2013-01-01

    A peak-seeking control algorithm for real-time trim optimization for reduced fuel consumption has been developed by researchers at the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center to address the goals of the NASA Environmentally Responsible Aviation project to reduce fuel burn and emissions. The peak-seeking control algorithm is based on a steepest-descent algorithm using a time-varying Kalman filter to estimate the gradient of a performance function of fuel flow versus control surface positions. In real-time operation, deflections of symmetric ailerons, trailing-edge flaps, and leading-edge flaps of an F/A-18 airplane (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) are used for optimization of fuel flow. Results from six research flights are presented herein. The optimization algorithm found a trim configuration that required approximately 3 percent less fuel flow than the baseline trim at the same flight condition. The algorithm consistently rediscovered the solution from several initial conditions. These results show that the algorithm has good performance in a relevant environment.

  9. Optimizing density patterns to achieve desired light extraction for displays

    NASA Astrophysics Data System (ADS)

    Davenport, T. L. R.; Cassarly, W. J.

    2007-01-01

    In displays such as backlights and signage, it is often desirable to produce a particular spatial luminance distribution of light. This work demonstrates an iterative optimization technique for determining the density of light extractors required to produce desired luminance distributions.
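One common form of such an iterative technique scales each zone's extractor density by the ratio of desired to achieved luminance. The sketch below uses a hypothetical closed-form luminance model in place of a ray-trace simulation and is not the authors' algorithm:

```python
def optimize_extractor_density(simulate, target, density, iters=20):
    """Iterative density-correction sketch for backlight extractors.

    `simulate(density)` returns the luminance each zone produces for a
    given per-zone extractor density (a stand-in for a ray-trace run);
    each pass scales the density by the ratio of desired to achieved
    luminance, a common heuristic for this kind of inverse problem.
    """
    d = list(density)
    for _ in range(iters):
        lum = simulate(d)
        d = [di * (ti / li) if li > 0 else di
             for di, ti, li in zip(d, target, lum)]
    return d

# Toy model: luminance responds sublinearly to density (saturation).
sim = lambda d: [x ** 0.8 for x in d]
dens = optimize_extractor_density(
    sim, target=[0.5, 1.0, 0.7], density=[1.0, 1.0, 1.0])
```

With the sublinear toy model the ratio update is a contraction, so the simulated luminance converges to the target profile within a few passes.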

  10. Achieving Optimal Quantum Acceleration of Frequency Estimation Using Adaptive Coherent Control.

    PubMed

    Naghiloo, M; Jordan, A N; Murch, K W

    2017-11-03

Precision measurements of frequency are critical to accurate time keeping and are fundamentally limited by quantum measurement uncertainties. While for time-independent quantum Hamiltonians the uncertainty of any parameter scales at best as 1/T, where T is the duration of the experiment, recent theoretical works have predicted that explicitly time-dependent Hamiltonians can yield a 1/T² scaling of the uncertainty for an oscillation frequency. This quantum acceleration in precision requires coherent control, which is generally adaptive. We experimentally realize this quantum improvement in frequency sensitivity with superconducting circuits, using a single transmon qubit. With optimal control pulses, the theoretically ideal frequency precision scaling is reached for times shorter than the decoherence time. This result demonstrates a fundamental quantum advantage for frequency estimation.

  11. Higher Education Counts: Achieving Results. 2007 Executive Summary

    ERIC Educational Resources Information Center

    Connecticut Department of Higher Education (NJ1), 2007

    2007-01-01

    "Higher Education Counts" is the annual accountability report on Connecticut's system of higher education. Since 2000, the report has been the primary vehicle for reporting higher education's progress toward achieving six, statutorily-defined state goals: (1) To enhance student learning and promote academic excellence; (2) To join with…

  12. Higher Education Counts: Achieving Results, 2008. Executive Summary

    ERIC Educational Resources Information Center

    Connecticut Department of Higher Education (NJ1), 2008

    2008-01-01

    "Higher Education Counts" is the annual accountability report on Connecticut's system of higher education. Since 2000, the report has been the primary vehicle for reporting higher education's progress toward achieving six, statutorily-defined state goals: (1) To enhance student learning and promote academic excellence; (2) To join with…

  13. Higher Education Counts: Achieving Results. 2009 Executive Summary

    ERIC Educational Resources Information Center

    Connecticut Department of Higher Education (NJ1), 2009

    2009-01-01

    "Higher Education Counts" is the annual accountability report on Connecticut's system of higher education. Since 2000, the report has been the primary vehicle for reporting higher education's progress toward achieving six, statutorily-defined state goals: (1) To enhance student learning and promote academic excellence; (2) To join with…

  14. Dispositional optimism and perceived risk interact to predict intentions to learn genome sequencing results.

    PubMed

    Taber, Jennifer M; Klein, William M P; Ferrer, Rebecca A; Lewis, Katie L; Biesecker, Leslie G; Biesecker, Barbara B

    2015-07-01

    Dispositional optimism and risk perceptions are each associated with health-related behaviors and decisions and other outcomes, but little research has examined how these constructs interact, particularly in consequential health contexts. The predictive validity of risk perceptions for health-related information seeking and intentions may be improved by examining dispositional optimism as a moderator, and by testing alternate types of risk perceptions, such as comparative and experiential risk. Participants (n = 496) had their genomes sequenced as part of a National Institutes of Health pilot cohort study (ClinSeq®). Participants completed a cross-sectional baseline survey of various types of risk perceptions and intentions to learn genome sequencing results for differing disease risks (e.g., medically actionable, nonmedically actionable, carrier status) and to use this information to change their lifestyle/health behaviors. Risk perceptions (absolute, comparative, and experiential) were largely unassociated with intentions to learn sequencing results. Dispositional optimism and comparative risk perceptions interacted, however, such that individuals higher in optimism reported greater intentions to learn all 3 types of sequencing results when comparative risk was perceived to be higher than when it was perceived to be lower. This interaction was inconsistent for experiential risk and absent for absolute risk. Independent of perceived risk, participants high in dispositional optimism reported greater interest in learning risks for nonmedically actionable disease and carrier status, and greater intentions to use genome information to change their lifestyle/health behaviors. The relationship between risk perceptions and intentions may depend on how risk perceptions are assessed and on degree of optimism. (c) 2015 APA, all rights reserved.

  15. Achieving minimum-error discrimination of an arbitrary set of laser-light pulses

    NASA Astrophysics Data System (ADS)

    da Silva, Marcus P.; Guha, Saikat; Dutton, Zachary

    2013-05-01

    Laser light is widely used for communication and sensing applications, so the optimal discrimination of coherent states—the quantum states of light emitted by an ideal laser—has immense practical importance. Due to fundamental limits imposed by quantum mechanics, such discrimination has a finite minimum probability of error. While concrete optical circuits for the optimal discrimination between two coherent states are well known, the generalization to larger sets of coherent states has been challenging. In this paper, we show how to achieve optimal discrimination of any set of coherent states using a resource-efficient quantum computer. Our construction leverages a recent result on discriminating multicopy quantum hypotheses [Blume-Kohout, Croke, and Zwolak, arXiv:1201.6625]. As illustrative examples, we analyze the performance of discriminating a ternary alphabet and show how the quantum circuit of a receiver designed to discriminate a binary alphabet can be reused in discriminating multimode hypotheses. Finally, we show that our result can be used to achieve the quantum limit on the rate of classical information transmission on a lossy optical channel, which is known to exceed the Shannon rate of all conventional optical receivers.

  16. Model-Based Speech Signal Coding Using Optimized Temporal Decomposition for Storage and Broadcasting Applications

    NASA Astrophysics Data System (ADS)

    Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret

    2003-12-01

    A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.
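Event localization by dynamic programming can be illustrated with the textbook optimal-segmentation recurrence: choose boundaries that split a feature sequence into k contiguous segments minimizing the summed within-segment distortion. The cost function here (squared deviation from the segment mean) is an illustrative stand-in for the paper's spectral criterion:

```python
def optimal_segmentation(xs, k):
    """Split xs into k contiguous segments minimizing the total
    within-segment squared deviation, via dynamic programming."""
    n = len(xs)

    def seg_cost(i, j):  # distortion of the segment xs[i:j]
        seg = xs[i:j]
        mu = sum(seg) / len(seg)
        return sum((v - mu) ** 2 for v in seg)

    INF = float("inf")
    # best[m][j]: minimal cost of splitting xs[:j] into m segments
    best = [[INF] * (n + 1) for _ in range(k + 1)]
    back = [[0] * (n + 1) for _ in range(k + 1)]
    best[0][0] = 0.0
    for m in range(1, k + 1):
        for j in range(m, n + 1):
            for i in range(m - 1, j):
                c = best[m - 1][i] + seg_cost(i, j)
                if c < best[m][j]:
                    best[m][j], back[m][j] = c, i
    # Recover segment boundaries by walking the backpointers.
    cuts, j = [], n
    for m in range(k, 0, -1):
        cuts.append(back[m][j])
        j = back[m][j]
    return sorted(cuts[:-1]), best[k][n]

# Two clearly separated levels should be split at the level change.
bounds, cost = optimal_segmentation([0, 0, 0, 5, 5, 5], 2)
```

Unlike a greedy stability criterion, this exhaustive recurrence guarantees globally optimal boundary placement for the chosen cost, which is the sense in which the TD event locations are "optimized."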

  17. Optimal Geoid Modelling to determine the Mean Ocean Circulation - Project Overview and early Results

    NASA Astrophysics Data System (ADS)

    Fecher, Thomas; Knudsen, Per; Bettadpur, Srinivas; Gruber, Thomas; Maximenko, Nikolai; Pie, Nadege; Siegismund, Frank; Stammer, Detlef

    2017-04-01

The ESA project GOCE-OGMOC (Optimal Geoid Modelling based on GOCE and GRACE third-party mission data and merging with altimetric sea surface data to optimally determine Ocean Circulation) examines the influence of the satellite missions GRACE and in particular GOCE in ocean modelling applications. The project goal is an improved processing of satellite and ground data for the preparation and combination of gravity and altimetry data on the way to an optimal MDT solution. Explicitly, the two main objectives are (i) to enhance the GRACE error modelling and optimally combine GOCE and GRACE [and optionally terrestrial/altimetric data] and (ii) to integrate the optimal Earth gravity field model with MSS and drifter information to derive a state-of-the-art MDT including an error assessment. The main work packages referring to (i) are the characterization of geoid model errors, the identification of GRACE error sources, the revision of GRACE error models, the optimization of weighting schemes for the participating data sets, and finally the estimation of an optimally combined gravity field model. In this context, the leakage of terrestrial data into coastal regions shall also be investigated, as leakage is not only a problem for the gravity field model itself but is also mirrored in a derived MDT solution. Related to (ii), the tasks are the revision of MSS error covariances, the assessment of the mean circulation using drifter data sets, and the computation of an optimal geodetic MDT as well as a so-called state-of-the-art MDT, which combines the geodetic MDT with drifter mean circulation data. This paper presents an overview of the project results, with a focus on the geodetic results.

  18. Full-Day Kindergarten Results in Significant Achievement Gains

    ERIC Educational Resources Information Center

    Raskin, Candace F.; Haar, Jean M.

    2009-01-01

    In 2004, after an in-depth review of student achievement data for over 4,000 students, the administration of a school district in southern Minnesota identified the following challenges: (1) above-state-average number of special education students; (2) increasing number of English as Second Language (ESL) students; (3) increasing number of students…

  19. Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 2; Preliminary Results

    NASA Technical Reports Server (NTRS)

    Walsh, J. L.; Weston, R. P.; Samareh, J. A.; Mason, B. H.; Green, L. L.; Biedron, R. T.

    2000-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity finite-element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes both the preliminary results from implementing and validating the multidisciplinary analysis and the results from an aerodynamic optimization. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture compliant software product. A companion paper describes the formulation of the multidisciplinary analysis and optimization system.

  20. Results achieved by emergency physicians in teaching basic cardiopulmonary resuscitation to secondary school students.

    PubMed

    Jiménez-Fábrega, Xavier; Escalada-Roig, Xavier; Sánchez, Miquel; Culla, Alexandre; Díaz, Núria; Gómez, Xavier; Villena, Olga; Rodríguez, Esther; Gaspar, Alberto; Molina, José Emilio; Salvador, Jordi; Miró, Oscar

    2009-06-01

    We investigated the results obtained with a basic cardiopulmonary resuscitation (b-CPR) program (PROCES) specifically designed for secondary school students (14-16 years old) and taught by emergency physicians. We used a multiple-choice test with 20 questions (10 on theory and 10 on skills) answered before and immediately after and 1 year after receiving the b-CPR course. Satisfactory learning was considered when at least 8 out of 10 skill questions were correctly answered. We investigated student variables associated with better immediate and deferred (1 year after) PROCES performance. We compared the results with those obtained using a more standardized program to teach b-CPR to police cadets. We enrolled 600 high school students. PROCES achieved significant improvement in overall, theory and skill marks immediately after the course (P<0.001), with a significant decay in all of them 1 year after the course (P<0.001). Satisfactory learning was achieved by 57% of school students immediately after PROCES and by 37% when assessed 1 year later. Students without pending study subjects (P=0.001) and those from private schools (P<0.01) achieved significantly better performance immediately after PROCES and only female students achieved greater performance 1 year after the course (P<0.05). With respect to police cadets instructed through a standardized course, immediate satisfactory learning of school students was lower (79 vs. 57%, respectively; P<0.001), whereas deferred satisfactory learning was higher (23 vs. 37%, respectively; P<0.05). Emergency physicians can satisfactorily instruct secondary school students in b-CPR using PROCES, and this specific program achieves a reasonable amount of satisfactory learning.

  1. Aircraft optimization by a system approach: Achievements and trends

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1992-01-01

    Recently emerging methodology for optimal design of aircraft treated as a system of interacting physical phenomena and parts is examined. The methodology is found to coalesce into methods for hierarchic, non-hierarchic, and hybrid systems all dependent on sensitivity analysis. A separate category of methods has also evolved independent of sensitivity analysis, hence suitable for discrete problems. References and numerical applications are cited. Massively parallel computer processing is seen as enabling technology for practical implementation of the methodology.

  2. Exemplar pediatric collaborative improvement networks: achieving results.

    PubMed

    Billett, Amy L; Colletti, Richard B; Mandel, Keith E; Miller, Marlene; Muething, Stephen E; Sharek, Paul J; Lannon, Carole M

    2013-06-01

    A number of pediatric collaborative improvement networks have demonstrated improved care and outcomes for children. Regionally, Cincinnati Children's Hospital Medical Center Physician Hospital Organization has sustained key asthma processes, substantially increased the percentage of their asthma population receiving "perfect care," and implemented an innovative pay-for-performance program with a large commercial payor based on asthma performance measures. The California Perinatal Quality Care Collaborative uses its outcomes database to improve care for infants in California NICUs. It has achieved reductions in central line-associated blood stream infections (CLABSI), increased breast-milk feeding rates at hospital discharge, and is now working to improve delivery room management. Solutions for Patient Safety (SPS) has achieved significant improvements in adverse drug events and surgical site infections across all 8 Ohio children's hospitals, with 7700 fewer children harmed and >$11.8 million in avoided costs. SPS is now expanding nationally, aiming to eliminate all events of serious harm at children's hospitals. National collaborative networks include ImproveCareNow, which aims to improve care and outcomes for children with inflammatory bowel disease. Reliable adherence to Model Care Guidelines has produced improved remission rates without using new medications and a significant increase in the proportion of Crohn disease patients not taking prednisone. Data-driven collaboratives of the Children's Hospital Association Quality Transformation Network initially focused on CLABSI in PICUs. By September 2011, they had prevented an estimated 2964 CLABSI, saving 355 lives and $103,722,423. Subsequent improvement efforts include CLABSI reductions in additional settings and populations.

  3. Measurement configuration optimization for dynamic metrology using Stokes polarimetry

    NASA Astrophysics Data System (ADS)

    Liu, Jiamin; Zhang, Chuanwei; Zhong, Zhicheng; Gu, Honggang; Chen, Xiuguo; Jiang, Hao; Liu, Shiyuan

    2018-05-01

    Dynamic loading experiments such as shock compression tests are characterized by short duration, unrepeatability, and high cost, so measurements with high temporal resolution and high accuracy are required. To capture such instantaneous changes in optical properties, a Stokes polarimeter with six parallel channels and a temporal resolution on the ten-nanosecond scale has been developed. Since the measurement accuracy depends heavily on the configuration of the probing-beam incident angle and the polarizer azimuth angle, it is important to select an optimal combination from the numerous options. In this paper, a systematic error-propagation-based measurement configuration optimization method for this Stokes polarimeter is proposed. The maximal Frobenius norm of the combinatorial matrix of the configuration error propagating matrix and the intrinsic error propagating matrix is introduced to assess the measurement accuracy. The optimal configuration for thickness measurement of a SiO2 thin film deposited on a Si substrate was then obtained by minimizing this merit function. Simulation and experimental results show good agreement between the optimal measurement configuration achieved experimentally using the polarimeter and the theoretical prediction. In particular, the experimental results show that the relative error in the thickness measurement can be reduced from 6% to 1% by using the optimal polarizer azimuth angle when the incident angle is 45°. Furthermore, the optimal configuration for the dynamic metrology of a nickel foil under quasi-dynamic loading is investigated using the proposed optimization method.
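    The abstract does not reproduce the polarimeter's error-propagation matrices, but the underlying idea of choosing analyzer angles that minimize a norm-based error-propagation merit can be sketched. The sketch below assumes a simplified six-channel polarimeter whose channels are ideal linear polarizers at azimuths α_i (so only the first three Stokes components are recovered) and uses the Frobenius norm of the pseudoinverse of the measurement matrix as the figure of merit; all numbers are illustrative, not the paper's.

```python
import numpy as np

def meas_matrix(azimuths_deg):
    """Measurement matrix of an idealized six-channel Stokes polarimeter:
    each channel is a linear polarizer at azimuth a, so its intensity is
    0.5*(S0 + S1*cos 2a + S2*sin 2a)."""
    a = np.deg2rad(azimuths_deg)
    return 0.5 * np.stack([np.ones_like(a), np.cos(2 * a), np.sin(2 * a)], axis=1)

def merit(azimuths_deg):
    """Error propagation from intensity noise to the Stokes estimate is
    governed by the pseudoinverse of the measurement matrix; use its
    Frobenius norm as the figure of merit (smaller = better accuracy)."""
    W = meas_matrix(azimuths_deg)
    return np.linalg.norm(np.linalg.pinv(W), ord='fro')

even = np.linspace(0, 150, 6)            # 0, 30, ..., 150 deg: evenly spread analyzers
clustered = np.array([0, 5, 10, 15, 20, 25])
print(merit(even), merit(clustered))     # evenly spread channels propagate noise far less
```

A full treatment would also sweep the incident angle through the film-stack model, but the same "minimize a norm of the error propagator" pattern applies.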

  4. An Orthogonal Evolutionary Algorithm With Learning Automata for Multiobjective Optimization.

    PubMed

    Dai, Cai; Wang, Yuping; Ye, Miao; Xue, Xingsi; Liu, Hailin

    2016-12-01

    Research on multiobjective optimization problems has become one of the hottest topics in intelligent computation. To improve the search efficiency of an evolutionary algorithm and maintain the diversity of solutions, in this paper learning automata (LA) are first used for quantization orthogonal crossover (QOX), and a new fitness function based on decomposition is proposed to achieve these two purposes. On this basis, an orthogonal evolutionary algorithm with LA for complex multiobjective optimization problems with continuous variables is proposed. The experimental results show that, for continuous problems, the proposed algorithm is able to find accurate Pareto-optimal sets and wide Pareto-optimal fronts efficiently. Moreover, a comparison with several existing well-known algorithms (nondominated sorting genetic algorithm II, decomposition-based multiobjective evolutionary algorithm, decomposition-based multiobjective evolutionary algorithm with an ensemble of neighborhood sizes, multiobjective optimization by LA, and multiobjective immune algorithm with nondominated neighbor-based selection) on 15 multiobjective benchmark problems shows that the proposed algorithm finds more accurate and more evenly distributed Pareto-optimal fronts than the compared algorithms.
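    The paper's decomposition-based fitness function is not specified in the abstract; the sketch below shows the standard Tchebycheff scalarization commonly used in decomposition-based MOEAs such as MOEA/D, which reduces a multiobjective comparison to a scalar one per weight vector (an assumption about the general technique, not the authors' exact formula).

```python
import numpy as np

def tchebycheff(f, weights, z_star):
    """Tchebycheff scalarization: for a weight vector w and ideal point z*,
    score a solution by the worst weighted distance to the ideal point.
    Smaller is better for the subproblem defined by w."""
    return np.max(weights * np.abs(np.asarray(f) - np.asarray(z_star)))

# Two candidate solutions on a bi-objective problem, ideal point z* = (0, 0)
z_star = [0.0, 0.0]
w = np.array([0.5, 0.5])
print(tchebycheff([0.2, 0.8], w, z_star))  # 0.4
print(tchebycheff([0.4, 0.4], w, z_star))  # 0.2, preferred for this weight vector
```

Sweeping many weight vectors over the population is what lets a decomposition-based algorithm cover a wide, evenly distributed front.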

  5. Optimal Doppler centroid estimation for SAR data from a quasi-homogeneous source

    NASA Technical Reports Server (NTRS)

    Jin, M. Y.

    1986-01-01

    This correspondence briefly describes two Doppler centroid estimation (DCE) algorithms, provides a performance summary for these algorithms, and presents the experimental results. These algorithms include that of Li et al. (1985) and a newly developed one that is optimized for quasi-homogeneous sources. The performance enhancement achieved by the optimal DCE algorithm is clearly demonstrated by the experimental results.

  6. Achievable rate maximization for decode-and-forward MIMO-OFDM networks with an energy harvesting relay.

    PubMed

    Du, Guanyao; Yu, Jianjun

    2016-01-01

    This paper investigates the achievable rate of a multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) system with an energy harvesting (EH) relay. First, we propose two protocols, time switching-based decode-and-forward relaying (TSDFR) and a flexible power splitting-based DF relaying (PSDFR), considering two practical receiver architectures, to enable simultaneous information processing and energy harvesting at the relay. In the PSDFR protocol, we introduce a temporal parameter to describe the time division pattern between the two phases, which makes the protocol more flexible and general. To explore the system performance limit, we analyze the achievable rate theoretically and formulate two optimization problems for the proposed protocols to maximize it. Since the problems are non-convex and difficult to solve, we first analyze them theoretically to obtain some explicit results, then design an augmented Lagrangian penalty function (ALPF) based algorithm for them. Numerical results validate the accuracy of our analytical results and the effectiveness of the proposed ALPF algorithm. It is shown that PSDFR outperforms TSDFR in achievable rate in such a MIMO-OFDM relaying system. Besides, we investigate the impacts of the relay location, the number of antennas, and the number of subcarriers on the system performance. Specifically, the relay position greatly affects the performance of both protocols, and a relatively worse achievable rate is obtained when the relay is placed midway between the source and the destination. This is different from the MIMO-OFDM DF relaying system without EH. Moreover, the optimal factor which indicates the time division pattern between the two phases in the PSDFR protocol is always above 0.8, which means that the common division of the total transmission time into two equal phases in
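    As a rough illustration of the power-splitting trade-off the abstract describes, here is a single-antenna, single-carrier toy analogue of power-splitting DF relaying: a fraction ρ of the received power feeds the information decoder, the rest is harvested and reused for relaying. Channel gains, noise power, and harvesting efficiency η are hypothetical; the paper's MIMO-OFDM formulation and ALPF algorithm are not reproduced.

```python
import math

def ps_df_rate(rho, P=1.0, g1=4.0, g2=2.0, eta=0.6, noise=0.1):
    """Half-duplex DF relay with power splitting (toy model).
    rho: fraction of received power routed to the information decoder;
    the remaining (1 - rho) is harvested with efficiency eta and
    spent on the relay-to-destination hop."""
    r_sr = 0.5 * math.log2(1 + rho * P * g1 / noise)   # source -> relay decoding rate
    p_relay = eta * (1 - rho) * P * g1                 # harvested relay transmit power
    r_rd = 0.5 * math.log2(1 + p_relay * g2 / noise)   # relay -> destination rate
    return min(r_sr, r_rd)                             # DF bottleneck

# Coarse grid search for the best power-splitting factor rho
best_rate, best_rho = max((ps_df_rate(r / 100), r / 100) for r in range(1, 100))
print(best_rate, best_rho)
```

The interior maximum over ρ mirrors the paper's finding that the optimal time/power split is not a trivial 50/50 division.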

  7. Optimization of Perovskite Gas Sensor Performance: Characterization, Measurement and Experimental Design.

    PubMed

    Bertocci, Francesco; Fort, Ada; Vignoli, Valerio; Mugnaini, Marco; Berni, Rossella

    2017-06-10

    Eight different types of nanostructured perovskites based on YCoO3 with different chemical compositions are prepared as gas sensor materials, and they are studied with two target gases, NO2 and CO. Moreover, a statistical approach is adopted to optimize their performance. The innovative contribution is carried out through split-plot design planning and modeling, also involving random effects, for studying Metal Oxide Semiconductor (MOX) sensors in a robust design context. The statistical results prove the validity of the proposed approach; in fact, for each material type, the variation of the electrical resistance achieves a satisfactory optimized value conditional on the working temperature and controlling for the gas concentration variability. To mention some results, the sensing material YCo0.9Pd0.1O3 (Mt1) achieved excellent solutions during the optimization procedure. In particular, Mt1 proved useful and feasible for the detection of both gases, with an optimal response of +10.23% at a working temperature of 312 °C for CO (284 ppm, from design) and a response of -14.17% at 185 °C for NO2 (16 ppm, from design). Analogously, for NO2 (16 ppm, from design), the material type YCo0.9O2.85+1%Pd (Mt8) allows the response to be optimized at -15.39% with a working temperature of 181.0 °C, whereas for YCo0.95Pd0.05O3 (Mt3) the best response, -15.40%, is achieved at 204 °C.

  8. Optimization of an electrokinetic mixer for microfluidic applications.

    PubMed

    Bockelmann, Hendryk; Heuveline, Vincent; Barz, Dominik P J

    2012-06-01

    This work is concerned with the investigation of the concentration fields in an electrokinetic micromixer and its optimization in order to achieve high mixing rates. The mixing concept is based on the combination of an alternating electrical excitation applied to a pressure-driven base flow in a meandering microchannel geometry. The electrical excitation induces a secondary electrokinetic velocity component, which results in a complex flow field within the meander bends. A mathematical model describing the physicochemical phenomena present within the micromixer is implemented in an in-house finite-element-method code. We first perform simulations comparable to experiments concerned with the investigation of the flow field in the bends. The comparison of the complex flow topology found in simulation and experiment reveals excellent agreement. Hence, the validated model and numerical schemes are employed for a numerical optimization of the micromixer performance. In detail, we optimize the secondary electrokinetic flow by finding the best electrical excitation parameters, i.e., frequency and amplitude, for a given waveform. Two optimized electrical excitations featuring a discrete and a continuous waveform are discussed with respect to characteristic time scales of our mixing problem. The results demonstrate that the micromixer is able to achieve high mixing degrees very rapidly.

  9. Optimization of an electrokinetic mixer for microfluidic applications

    PubMed Central

    Bockelmann, Hendryk; Heuveline, Vincent; Barz, Dominik P. J.

    2012-01-01

    This work is concerned with the investigation of the concentration fields in an electrokinetic micromixer and its optimization in order to achieve high mixing rates. The mixing concept is based on the combination of an alternating electrical excitation applied to a pressure-driven base flow in a meandering microchannel geometry. The electrical excitation induces a secondary electrokinetic velocity component, which results in a complex flow field within the meander bends. A mathematical model describing the physicochemical phenomena present within the micromixer is implemented in an in-house finite-element-method code. We first perform simulations comparable to experiments concerned with the investigation of the flow field in the bends. The comparison of the complex flow topology found in simulation and experiment reveals excellent agreement. Hence, the validated model and numerical schemes are employed for a numerical optimization of the micromixer performance. In detail, we optimize the secondary electrokinetic flow by finding the best electrical excitation parameters, i.e., frequency and amplitude, for a given waveform. Two optimized electrical excitations featuring a discrete and a continuous waveform are discussed with respect to characteristic time scales of our mixing problem. The results demonstrate that the micromixer is able to achieve high mixing degrees very rapidly. PMID:22712034

  10. Integrating epidemiology, psychology, and economics to achieve HPV vaccination targets.

    PubMed

    Basu, Sanjay; Chapman, Gretchen B; Galvani, Alison P

    2008-12-02

    Human papillomavirus (HPV) vaccines provide an opportunity to reduce the incidence of cervical cancer. Optimization of cervical cancer prevention programs requires anticipation of the degree to which the public will adhere to vaccination recommendations. To compare vaccination levels driven by public perceptions with levels that are optimal for maximizing the community's overall utility, we develop an epidemiological game-theoretic model of HPV vaccination. The model is parameterized with survey data on actual perceptions regarding cervical cancer, genital warts, and HPV vaccination collected from parents of vaccine-eligible children in the United States. The results suggest that perceptions of survey respondents generate vaccination levels far lower than those that maximize overall health-related utility for the population. Vaccination goals may be achieved by addressing concerns about vaccine risk, particularly those related to sexual activity among adolescent vaccine recipients. In addition, cost subsidizations and shifts in federal coverage plans may compensate for perceived and real costs of HPV vaccination to achieve public health vaccination targets.

  11. Practical synchronization on complex dynamical networks via optimal pinning control

    NASA Astrophysics Data System (ADS)

    Li, Kezan; Sun, Weigang; Small, Michael; Fu, Xinchu

    2015-07-01

    We consider practical synchronization on complex dynamical networks under linear feedback control designed by optimal control theory. The control goal is to minimize the global synchronization error and control strength over a given finite time interval, as well as the synchronization error at the terminal time. By utilizing Pontryagin's minimum principle, and based on a general complex dynamical network, we obtain an optimal system that achieves the control goal. The result is verified by numerical simulations on star networks, Watts-Strogatz networks, and Barabási-Albert networks. Moreover, by combining optimal control and traditional pinning control, we propose an optimal pinning control strategy which depends on the network's topological structure. The obtained results show that optimal pinning control is very effective for synchronization control in real applications.

  12. Vitamin D in corticosteroid-naïve and corticosteroid-treated Duchenne muscular dystrophy: what dose achieves optimal 25(OH) vitamin D levels?

    PubMed

    Alshaikh, Nahla; Brunklaus, Andreas; Davis, Tracey; Robb, Stephanie A; Quinlivan, Ros; Munot, Pinki; Sarkozy, Anna; Muntoni, Francesco; Manzur, Adnan Y

    2016-10-01

    Assessment of the efficacy of vitamin D replenishment and maintenance doses required to attain optimal levels in boys with Duchenne muscular dystrophy (DMD). 25(OH)-vitamin D levels and concurrent vitamin D dosage were collected from retrospective case-note review of boys with DMD at the Dubowitz Neuromuscular Centre. Vitamin D levels were stratified as deficient at <25 nmol/L, insufficient at 25-49 nmol/L, adequate at 50-75 nmol/L and optimal at >75 nmol/L. 617 vitamin D samples were available from 197 boys (range 2-18 years); 69% were from individuals on corticosteroids. Vitamin D-naïve boys (154 samples) showed deficiency in 28%, insufficiency in 42%, adequate levels in 24% and optimal levels in 6%. The vitamin D-supplemented group (463 samples) was tested while on different maintenance/replenishment doses. Three-month replenishment of daily 3000 IU (23 samples) or 6000 IU (37 samples) achieved optimal levels in 52% and 84%, respectively. 182 samples taken on 400 IU revealed deficiency in 19 (10%), insufficiency in 84 (47%), adequate levels in 67 (37%) and optimal levels in 11 (6%). 97 samples taken on 800 IU showed deficiency in 2 (2%), insufficiency in 17 (17%), adequate levels in 56 (58%) and optimal levels in 22 (23%). 81 samples were on 1000 IU and 14 samples on 1500 IU, with optimal levels in 35 (43%) and 9 (64%), respectively. No toxic level was seen (highest level 230 nmol/L). The prevalence of vitamin D deficiency and insufficiency in DMD is high. A 2-month replenishment regimen of 6000 IU and maintenance regimen of 1000-1500 IU/day was associated with optimal vitamin D levels. These data have important implications for optimising vitamin D dosing in DMD.
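    The stratification thresholds reported in the abstract translate directly into a small classifier; the handling of exact boundary values (e.g. precisely 75 nmol/L) is an assumption, since the abstract gives ranges only.

```python
def classify_25ohd(level_nmol_l):
    """Stratification used in the study: deficient <25, insufficient 25-49,
    adequate 50-75, optimal >75 nmol/L. Boundary handling at exactly 75
    (counted as adequate here) is an assumption."""
    if level_nmol_l < 25:
        return "deficient"
    if level_nmol_l < 50:
        return "insufficient"
    if level_nmol_l <= 75:
        return "adequate"
    return "optimal"

print(classify_25ohd(24), classify_25ohd(60), classify_25ohd(120))
# deficient adequate optimal
```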

  13. Delta-doping optimization for high quality p-type GaN

    NASA Astrophysics Data System (ADS)

    Bayram, C.; Pau, J. L.; McClintock, R.; Razeghi, M.

    2008-10-01

    Delta (δ-) doping is studied in order to achieve high quality p-type GaN. Atomic force microscopy, x-ray diffraction, photoluminescence, and Hall measurements are performed on the samples to optimize the δ-doping characteristics. The effect of annealing on the electrical, optical, and structural quality is also investigated for different δ-doping parameters. Optimized pulsing conditions result in layers with hole concentrations near 10^18 cm^-3 and superior crystal quality compared to conventional p-GaN. This material improvement is achieved thanks to the reduction in the Mg activation energy and self-compensation effects in δ-doped p-GaN.

  14. Application of ant colony optimization to optimal foraging theory: comparison of simulation and field results

    USDA-ARS?s Scientific Manuscript database

    Ant Colony Optimization (ACO) refers to the family of algorithms inspired by the behavior of real ants and used to solve combinatorial problems such as the Traveling Salesman Problem (TSP). Optimal Foraging Theory (OFT) is an evolutionary principle wherein foraging organisms or insect parasites seek ...

  15. Evolutionary Bi-objective Optimization for Bulldozer and Its Blade in Soil Cutting

    NASA Astrophysics Data System (ADS)

    Sharma, Deepak; Barakat, Nada

    2018-02-01

    An evolutionary optimization approach is adopted in this paper for simultaneously achieving economic and productive soil cutting. The economic aspect is addressed by minimizing the power requirement from the bulldozer, and the cutting is made productive by minimizing the time of soil cutting. For determining the power requirement, two force models are adopted from the literature to quantify the cutting force on the blade. Three domain-specific constraints are also proposed: limiting the power from the bulldozer, limiting the maximum force on the bulldozer blade, and achieving the desired production rate. The bi-objective optimization problem is solved using five benchmark multi-objective evolutionary algorithms and one classical optimization technique, the ɛ-constraint method. The Pareto-optimal solutions are obtained together with the knee region. Further, a post-optimal analysis is performed on the obtained solutions to decipher relationships among the objectives and decision variables. These relationships are then used to formulate guidelines for selecting the optimal set of input parameters. The obtained results are compared with experimental results from the literature and show close agreement.
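    To make the ε-constraint method mentioned above concrete, here is a minimal sketch on a toy bi-objective problem (not the bulldozer model): minimize f1 subject to f2 ≤ ε for a sweep of ε values, with a brute-force candidate grid standing in for a real solver.

```python
def eps_constraint_front(f1, f2, xs, eps_values):
    """Trace a Pareto front with the epsilon-constraint method:
    for each eps, minimize f1 over the candidates satisfying f2(x) <= eps.
    Brute force over a grid xs; a sketch, not a production solver."""
    front = []
    for eps in eps_values:
        feasible = [x for x in xs if f2(x) <= eps]
        if feasible:
            x_best = min(feasible, key=f1)
            front.append((f1(x_best), f2(x_best)))
    return front

# Toy bi-objective: f1 = x^2, f2 = (x - 2)^2 over x in [0, 2]
xs = [i / 200 * 2 for i in range(201)]
front = eps_constraint_front(lambda x: x * x, lambda x: (x - 2) ** 2, xs,
                             eps_values=[4.0, 2.0, 1.0, 0.25])
print(front)  # f1 rises as the bound eps on f2 tightens: the Pareto trade-off
```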

  16. Next Generation Scientists, Next Opportunities: EPA's Science To Achieve Results (STAR) Program

    NASA Astrophysics Data System (ADS)

    Jones, M.

    2004-12-01

    Scientific research is one of the most powerful tools we have for understanding and protecting our environment. It provides the foundation for what we know about our planet, how it has changed, and how it could be altered in the future. The National Center for Environmental Research (NCER) in the U.S. Environmental Protection Agency's (EPA) Office of Research and Development (ORD) supports high-quality, extramural research by the nation's leading scientists and engineers to strengthen the basis for decisions about local and national environmental issues. NCER works with academia, state and local governments, other federal agencies, and scientists in EPA to increase human knowledge of how to protect our health and natural resources through its three major programs:

    · Science to Achieve Results (STAR) Grants
    · Small Business Innovation Research (SBIR)
    · Science to Achieve Results (STAR) Fellowships

    STAR, NCER's primary program, funds research grants and graduate fellowships in environmental science and engineering. Developing the next generation of environmental scientists and engineers is one of NCER's most important objectives. Each year, NCER helps between 80 and 160 students achieve Master's or Ph.D. degrees in environmental science and engineering through its STAR and Greater Research Opportunities (GRO) fellowships. Some of these students have moved on to careers in government, while others are now full-time professors and researchers. Still others are working for state environmental agencies or furthering their studies through postdoctoral positions at universities. Since the inception of the NCER program, STAR fellowships (along with grants and SBIR projects) have been awarded in every state in the country. With the help of STAR, current and future scientists and engineers have been able to explore ways to preserve and protect human health and our precious resources.

  17. Optimization of lightweight structure and supporting bipod flexure for a space mirror.

    PubMed

    Chen, Yi-Cheng; Huang, Bo-Kai; You, Zhen-Ting; Chan, Chia-Yen; Huang, Ting-Ming

    2016-12-20

    This article presents an optimization process for integrated optomechanical design, comprising computer-aided drafting, finite element analysis (FEA), optomechanical transfer codes, and an optimization solver. The FEA is conducted to determine mirror surface deformation; the deformed surface nodal data are then transferred into Zernike polynomials through MATLAB optomechanical transfer codes to calculate the resulting optical path difference (OPD) and optical aberrations. To achieve an optimum design, the optimization iterations of the FEA, optomechanical transfer codes, and optimization solver are automatically connected through a self-developed Tcl script. Two design examples are illustrated in this research: an optimum lightweight design of a Zerodur primary mirror with an outer diameter of 566 mm used in a spaceborne telescope, and an optimum design of the bipod flexure supporting that lightweight primary mirror. Optimum designs were successfully accomplished in both examples, achieving a minimum peak-to-valley (PV) value of the OPD of the deformed optical surface. The simulated optimization results show that (1) the lightweight ratio of the primary mirror increased from 56% to 66%; and (2) the PV value of the mirror supported by optimum bipod flexures in the horizontal position effectively decreased from 228 to 61 nm.
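    The transfer of deformed-surface nodal data into Zernike polynomials can be sketched as a least-squares fit. The node set, the small Noll-style term subset, and the sag values below are synthetic stand-ins, not the paper's 566 mm mirror data.

```python
import numpy as np

def zernike_basis(rho, theta):
    """A few low-order Zernike terms (piston, tilts, defocus, astigmatism)
    evaluated at polar node coordinates on the unit disk."""
    return np.stack([
        np.ones_like(rho),            # piston
        rho * np.cos(theta),          # x-tilt
        rho * np.sin(theta),          # y-tilt
        2 * rho**2 - 1,               # defocus
        rho**2 * np.cos(2 * theta),   # astigmatism, 0 deg
        rho**2 * np.sin(2 * theta),   # astigmatism, 45 deg
    ], axis=1)

rng = np.random.default_rng(0)
rho = np.sqrt(rng.uniform(0, 1, 2000))       # nodes uniform over the disk
theta = rng.uniform(0, 2 * np.pi, 2000)
# Synthetic "FEA" surface sag: mostly defocus plus a little astigmatism
sag = 50e-9 * (2 * rho**2 - 1) + 10e-9 * rho**2 * np.cos(2 * theta)

B = zernike_basis(rho, theta)
coeffs, *_ = np.linalg.lstsq(B, sag, rcond=None)
residual = sag - B @ coeffs
pv = residual.max() - residual.min()         # peak-to-valley of the unfitted residual
print(coeffs, pv)
```

In the paper's loop, a PV/OPD figure like this would be the quantity the optimization solver drives down at each iteration.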

  18. Testing for Results: Helping Families, Schools and Communities Understand and Improve Student Achievement.

    ERIC Educational Resources Information Center

    Department of Education, Washington, DC. Office of the Under Secretary.

    Redefining the federal government's role in kindergarten through grade 12 education, the No Child Left Behind Act of 2001 is designed to close the achievement gap between disadvantaged and minority students and their peers. The act is based on four principles, the first of which is stronger accountability for results, entailing creation of…

  19. A Goal Programming Optimization Model for The Allocation of Liquid Steel Production

    NASA Astrophysics Data System (ADS)

    Hapsari, S. N.; Rosyidi, C. N.

    2018-03-01

    This research was conducted in one of the largest steel companies in Indonesia, which has several production units and produces a wide range of steel products. One of the important products of the company is billet steel. The company has four Electric Arc Furnaces (EAF) which produce liquid steel that must be processed further into billet steel. The billet steel plant needs to make its production process more efficient to increase productivity. The management has several goals to be achieved, and hence an optimal allocation of liquid steel production is needed to achieve those goals. In this paper, a goal programming optimization model is developed to determine the optimal allocation of liquid steel production in each EAF, satisfying demand in three periods and the company goals, namely maximizing the volume of production, minimizing the cost of raw materials, minimizing maintenance costs, maximizing sales revenues, and maximizing production capacity. From the optimization results, only the production capacity goal cannot achieve its target. However, the model developed in this paper can optimally allocate liquid steel so that the production allocation does not exceed the maximum machine working hours and the maximum production capacity.
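    A goal program of this kind reduces to a linear program once deviation variables are added for each goal. The sketch below is a deliberately tiny stand-in (two furnaces, one production goal, hypothetical capacities and penalty weights), not the company's model:

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: x1, x2 (tons allocated to each furnace),
# d_minus, d_plus (under/over-achievement of a 250-ton production goal).
# Goal constraint:  x1 + x2 + d_minus - d_plus = 250
# Objective: minimize 2*d_minus + 1*d_plus (under-production penalized more).
c = np.array([0.0, 0.0, 2.0, 1.0])
A_eq = np.array([[1.0, 1.0, 1.0, -1.0]])
b_eq = np.array([250.0])
bounds = [(0, 120), (0, 100), (0, None), (0, None)]  # hypothetical furnace capacities

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
x1, x2, d_minus, d_plus = res.x
print(res.x)  # total capacity is 220 < 250, so the goal is under-achieved by 30
```

Because the combined capacity (220 tons) falls short of the 250-ton goal, the solver maxes out both furnaces and reports d_minus = 30, mirroring the paper's finding that one goal (production capacity) cannot reach its target.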

  20. Dose-mass inverse optimization for minimally moving thoracic lesions

    NASA Astrophysics Data System (ADS)

    Mihaylov, I. B.; Moros, E. G.

    2015-05-01

    In the past decade, several radiotherapy treatment plan evaluation and optimization schemes have been proposed as viable approaches aiming for dose escalation or increased healthy tissue sparing. In particular, it has been argued that dose-mass plan evaluation and treatment plan optimization might be viable alternatives to the standard of care, which is realized through dose-volume evaluation and optimization. The purpose of this investigation is to apply dose-mass optimization to a cohort of lung cancer patients and compare the achievable healthy tissue sparing to that achievable through dose-volume optimization. Fourteen non-small cell lung cancer (NSCLC) patient plans were studied retrospectively. The range of tumor motion was less than 0.5 cm, and motion management was not considered in the treatment planning process. For each case, dose-volume (DV)-based and dose-mass (DM)-based optimization was performed. Nine-field step-and-shoot IMRT was used, with all optimization parameters kept the same between DV and DM optimizations. Commonly used dosimetric indices (DIs), such as dose to 1% of the spinal cord volume, dose to 50% of the esophageal volume, and doses to 20 and 30% of healthy lung volumes, were used for cross-comparison. Similarly, mass-based indices (MIs), such as doses to 20 and 30% of healthy lung masses, 1% of spinal cord mass, and 33% of heart mass, were also tallied. Statistical equivalence tests were performed to quantify the findings for the entire patient cohort. Both DV and DM plans for each case were normalized such that 95% of the planning target volume received the prescribed dose. DM optimization resulted in more organ-at-risk (OAR) sparing than DV optimization. The average sparing of cord, heart, and esophagus was 23, 4, and 6%, respectively. For the majority of the DIs, DM optimization resulted in lower lung doses.
On average, the doses to 20 and 30% of healthy lung were lower by approximately 3 and 4%, whereas lung
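    The dose-mass indices tallied above differ from dose-volume indices only in the weight attached to each voxel. A minimal sketch, using hypothetical voxel doses and lung-like densities (not patient data):

```python
import numpy as np

def dose_at_top_fraction(dose, weight, fraction):
    """Minimum dose received by the 'hottest' given fraction of the total
    weight (volume for a DVH index, mass for a DMH index): sort voxels by
    dose descending and accumulate weight until the fraction is reached."""
    order = np.argsort(dose)[::-1]
    cum = np.cumsum(weight[order]) / weight.sum()
    idx = np.searchsorted(cum, fraction)
    return dose[order][min(idx, len(dose) - 1)]

rng = np.random.default_rng(1)
dose = rng.uniform(0, 60, 10000)          # Gy, hypothetical voxel doses
density = rng.uniform(0.2, 1.1, 10000)    # g/cc, lung-like density spread
volume = np.ones_like(dose)               # equal voxel volumes
mass = density * volume

d20_volume = dose_at_top_fraction(dose, volume, 0.20)  # classic DVH-style index
d20_mass = dose_at_top_fraction(dose, mass, 0.20)      # dose-mass analogue
print(d20_volume, d20_mass)
```

With real lung data, density correlates with anatomy, so the mass-weighted index can diverge from the volume-weighted one, which is exactly the gap the dose-mass optimization exploits.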

  1. Optimal Design of Calibration Signals in Space-Borne Gravitational Wave Detectors

    NASA Technical Reports Server (NTRS)

    Nofrarias, Miquel; Karnesis, Nikolaos; Gibert, Ferran; Armano, Michele; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo; Dolesi, Rita; Ferraioli, Luigi; Ferroni, Valerio

    2016-01-01

    Future space borne gravitational wave detectors will require a precise definition of calibration signals to ensure the achievement of their design sensitivity. The careful design of the test signals plays a key role in the correct understanding and characterisation of these instruments. In that sense, methods achieving optimal experiment designs must be considered as complementary to the parameter estimation methods being used to determine the parameters describing the system. The relevance of experiment design is particularly significant for the LISA Pathfinder mission, which will spend most of its operation time performing experiments to characterize key technologies for future space borne gravitational wave observatories. Here we propose a framework to derive the optimal signals in terms of minimum parameter uncertainty to be injected to these instruments during its calibration phase. We compare our results with an alternative numerical algorithm which achieves an optimal input signal by iteratively improving an initial guess. We show agreement of both approaches when applied to the LISA Pathfinder case.

  2. Optimal Design of Calibration Signals in Space Borne Gravitational Wave Detectors

    NASA Technical Reports Server (NTRS)

    Nofrarias, Miquel; Karnesis, Nikolaos; Gibert, Ferran; Armano, Michele; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo; Dolesi, Rita; Ferraioli, Luigi; Thorpe, James I.

    2014-01-01

    Future space borne gravitational wave detectors will require a precise definition of calibration signals to ensure the achievement of their design sensitivity. The careful design of the test signals plays a key role in the correct understanding and characterization of these instruments. In that sense, methods achieving optimal experiment designs must be considered as complementary to the parameter estimation methods being used to determine the parameters describing the system. The relevance of experiment design is particularly significant for the LISA Pathfinder mission, which will spend most of its operation time performing experiments to characterize key technologies for future space borne gravitational wave observatories. Here we propose a framework to derive the optimal signals in terms of minimum parameter uncertainty to be injected to these instruments during its calibration phase. We compare our results with an alternative numerical algorithm which achieves an optimal input signal by iteratively improving an initial guess. We show agreement of both approaches when applied to the LISA Pathfinder case.

  3. Optimizing the Usability of Brain-Computer Interfaces.

    PubMed

    Zhang, Yin; Chase, Steve M

    2018-05-01

    Brain-computer interfaces are in the process of moving from the laboratory to the clinic. These devices act by reading neural activity and using it to directly control a device, such as a cursor on a computer screen. An open question in the field is how to map neural activity to device movement in order to achieve the most proficient control. This question is complicated by the fact that learning, especially the long-term skill learning that accompanies weeks of practice, can allow subjects to improve performance over time. Typical approaches to this problem attempt to maximize the biomimetic properties of the device in order to limit the need for extensive training. However, it is unclear if this approach would ultimately be superior to performance that might be achieved with a nonbiomimetic device once the subject has engaged in extended practice and learned how to use it. Here we approach this problem using ideas from optimal control theory. Under the assumption that the brain acts as an optimal controller, we present a formal definition of the usability of a device and show that the optimal postlearning mapping can be written as the solution of a constrained optimization problem. We then derive the optimal mappings for particular cases common to most brain-computer interfaces. Our results suggest that the common approach of creating biomimetic interfaces may not be optimal when learning is taken into account. More broadly, our method provides a blueprint for optimal device design in general control-theoretic contexts.

  4. Optimal Control Modification for Time-Scale Separated Systems

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2012-01-01

    Recently, a new optimal control modification has been introduced that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations associated with standard model-reference adaptive control. This modification is based on an optimal control formulation to minimize the L2 norm of the tracking error. The optimal control modification adaptive law results in stable adaptation in the presence of a large adaptive gain. This study examines the optimal control modification adaptive law in the context of a system with a time-scale separation resulting from a fast plant with a slow actuator. A singular perturbation analysis is performed to derive a modification to the adaptive law by transforming the original system into a reduced-order system in slow time. A model matching condition in the transformed time coordinate results in an increase in the actuator command that effectively compensates for the slow actuator dynamics. Simulations demonstrate the effectiveness of the method.

  5. Optimization of Perovskite Gas Sensor Performance: Characterization, Measurement and Experimental Design

    PubMed Central

    Bertocci, Francesco; Fort, Ada; Vignoli, Valerio; Mugnaini, Marco; Berni, Rossella

    2017-01-01

    Eight different types of nanostructured perovskites based on YCoO3 with different chemical compositions are prepared as gas sensor materials, and they are studied with two target gases, NO2 and CO. Moreover, a statistical approach is adopted to optimize their performance. The innovative contribution is carried out through split-plot design planning and modeling, also involving random effects, for studying Metal Oxide Semiconductor (MOX) sensors in a robust design context. The statistical results prove the validity of the proposed approach; in fact, for each material type, the variation of the electrical resistance achieves a satisfactory optimized value conditional on the working temperature and by controlling for the gas concentration variability. Just to mention some results, the sensing material YCo0.9Pd0.1O3 (Mt1) achieved excellent solutions during the optimization procedure. In particular, Mt1 proved useful and feasible for the detection of both gases, with optimal response equal to +10.23% and working temperature at 312 °C for CO (284 ppm, from design) and response equal to −14.17% at 185 °C for NO2 (16 ppm, from design). Analogously, for NO2 (16 ppm, from design), the material type YCo0.9O2.85+1%Pd (Mt8) allows for optimizing the response value at −15.39% with a working temperature of 181.0 °C, whereas for YCo0.95Pd0.05O3 (Mt3), the best response value is achieved at −15.40% with the temperature equal to 204 °C. PMID:28604587

  6. Treatment of chronic myeloid leukemia: assessing risk, monitoring response, and optimizing outcome.

    PubMed

    Shanmuganathan, Naranie; Hiwase, Devendra Keshaorao; Ross, David Morrall

    2017-12-01

    Over the past two decades, tyrosine kinase inhibitors have become the foundation of chronic myeloid leukemia (CML) treatment. The choice between imatinib and newer tyrosine kinase inhibitors (TKIs) needs to be balanced against the known toxicity and efficacy data for each drug, the therapeutic goal being to maximize molecular response assessed by BCR-ABL RQ-PCR assay. There is accumulating evidence that the early achievement of molecular targets is a strong predictor of superior long-term outcomes. Early response assessment provides the opportunity to intervene early with the aim of ensuring an optimal response. Failure to achieve milestones or loss of response can have diverse causes. We describe how clinical and laboratory monitoring can be used to ensure that each patient is achieving an optimal response and, in patients who do not reach optimal response milestones, how the monitoring results can be used to detect resistance and understand its origins.

  7. Optimizing the wireless power transfer over MIMO Channels

    NASA Astrophysics Data System (ADS)

    Wiedmann, Karsten; Weber, Tobias

    2017-09-01

    In this paper, the optimization of the power transfer over wireless channels having multiple inputs and multiple outputs (MIMO) is studied. To this end, the transmitter, the receiver and the MIMO channel are modeled as multiports. The power transfer efficiency is described by a Rayleigh quotient, which is a function of the channel's scattering parameters and the incident waves from both the transmitter and receiver side. This way, the power transfer efficiency can be maximized analytically by solving a generalized eigenvalue problem, which is deduced from the Rayleigh quotient. As a result, the maximum power transfer efficiency achievable over a given MIMO channel is obtained. This maximum can be used as a performance bound in order to benchmark wireless power transfer systems. Furthermore, the optimal operating point which achieves this maximum is obtained; it is described by the complex amplitudes of the optimal incident and reflected waves of the MIMO channel. This supports the design of the optimal transmitter and receiver multiports. The proposed method applies to arbitrary MIMO channels, taking transmitter-side and/or receiver-side cross-couplings in both near-field and far-field scenarios into consideration. Special cases are briefly discussed in this paper in order to illustrate the method.
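    The eigenproblem machinery behind this abstract can be sketched numerically. In the hypothetical example below, the Hermitian matrices A and B are random stand-ins for the quadratic forms giving delivered and injected power (they are not derived from any real channel); `scipy.linalg.eigh` solves the generalized eigenvalue problem, and the top eigenpair gives the maximum efficiency together with the optimal incident waves:

    ```python
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(0)

    # Hypothetical 4-port example: A models power delivered to the load,
    # B models total power fed into the network (A Hermitian PSD, B Hermitian PD).
    n = 4
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A = M @ M.conj().T
    N_ = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    B = N_ @ N_.conj().T + n * np.eye(n)

    def rayleigh(a):
        """Efficiency as a generalized Rayleigh quotient a^H A a / a^H B a."""
        return np.real(a.conj() @ A @ a) / np.real(a.conj() @ B @ a)

    # The maximum of the quotient is the largest generalized eigenvalue of
    # (A, B); the corresponding eigenvector is the optimal excitation.
    w, V = eigh(A, B)
    eta_max, a_opt = w[-1], V[:, -1]
    ```

    No excitation can beat `eta_max`: evaluating `rayleigh` on any other vector yields a value at most `eta_max`, which is exactly how the abstract's performance bound works.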

  8. Optimal active vibration absorber: Design and experimental results

    NASA Technical Reports Server (NTRS)

    Lee-Glauser, Gina; Juang, Jer-Nan; Sulla, Jeffrey L.

    1992-01-01

    An optimal active vibration absorber can provide guaranteed closed-loop stability and control for large flexible space structures with collocated sensors/actuators. The active vibration absorber is a second-order dynamic system which is designed to suppress any unwanted structural vibration, and it can be designed with minimal knowledge of the controlled system. Two methods for optimizing the active vibration absorber parameters are illustrated: minimum resonant amplitude and frequency-matched active controllers. The Controls-Structures Interaction Phase-1 Evolutionary Model at NASA LaRC is used to demonstrate the effectiveness of the active vibration absorber for vibration suppression. Performance is compared numerically and experimentally using acceleration feedback.
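    The abstract does not spell out its two tuning methods, so as a hedged illustration of absorber tuning in general, here is the classic Den Hartog rule for a passive tuned-mass absorber on an undamped, harmonically forced primary structure. This is a textbook analogue of a "frequency-matched" design, not the paper's active controller:

    ```python
    import math

    def den_hartog_tmd(mass_ratio: float):
        """Classic Den Hartog tuning for a tuned-mass damper attached to an
        undamped primary structure under harmonic forcing.

        Returns (f_opt, zeta_opt): the absorber-to-primary frequency ratio
        and the absorber damping ratio that minimize the resonant amplitude.
        """
        mu = mass_ratio
        f_opt = 1.0 / (1.0 + mu)
        zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
        return f_opt, zeta_opt

    # A 5% absorber mass, purely for illustration.
    f, z = den_hartog_tmd(0.05)
    ```

    Heavier absorbers (larger mass ratio) are detuned further below the primary frequency and need more damping, which the two formulas make explicit.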

  9. Chaos minimization in DC-DC boost converter using circuit parameter optimization

    NASA Astrophysics Data System (ADS)

    Sudhakar, N.; Natarajan, Rajasekar; Gourav, Kumar; Padmavathi, P.

    2017-11-01

    DC-DC converters are prone to several types of nonlinear phenomena including bifurcation, quasi-periodicity, intermittency and chaos. These undesirable effects must be controlled for periodic operation of the converter to ensure stability. In this paper an effective solution for controlling chaos in a solar-fed DC-DC boost converter is proposed. Control of chaos is achieved using optimal circuit parameters obtained through the Bacterial Foraging Optimization Algorithm (BFA). The optimization renders the suitable parameters in minimum computational time. The obtained results are compared with the operation of a traditional boost converter. Further, the results obtained with BFA-optimized parameters ensure that the converter operates within the controllable region. To elaborate the study, bifurcation analyses with optimized and unoptimized parameters are also presented.

  10. Supply-Chain Optimization Template

    NASA Technical Reports Server (NTRS)

    Quiett, William F.; Sealing, Scott L.

    2009-01-01

    The Supply-Chain Optimization Template (SCOT) is an instructional guide for identifying, evaluating, and optimizing (including re-engineering) aerospace-oriented supply chains. The SCOT was derived from the Supply Chain Council's Supply-Chain Operations Reference (SCC SCOR) Model, which is more generic and more oriented toward achieving a competitive advantage in business.

  11. Increase of Gas-Turbine Plant Efficiency by Optimizing Operation of Compressors

    NASA Astrophysics Data System (ADS)

    Matveev, V.; Goriachkin, E.; Volkov, A.

    2018-01-01

    The article presents an optimization method for improving the working process of axial compressors of gas turbine engines. The developed method automatically searches for the best compressor blade geometry using the optimization software IOSO and the CFD software NUMECA Fine/Turbo. The compressor parameters were calculated for the working and stall points of its performance map at each optimization step. The study was carried out for a seven-stage high-pressure compressor and a three-stage low-pressure compressor. As a result of the optimization, an improvement in efficiency was achieved for all investigated compressors.

  12. The Effects of CSCOPE on Student Achievement as Measured by Both TAKS and STAAR Test Results

    ERIC Educational Resources Information Center

    Helm, Maricela Robledo

    2013-01-01

    The purpose of this study was to examine the effects of CSCOPE curriculum on student achievement. CSCOPE is a curriculum management system used in 750 of the 1,039 school districts in the state of Texas. Student achievement is based on the results acquired from the Texas Assessment of Knowledge and Skills (TAKS) and the new version of the state…

  13. Study of optimal extraction conditions for achieving high yield and antioxidant activity of tomato seed oil

    USDA-ARS?s Scientific Manuscript database

    Tomato seeds, a by-product of tomato processing, have not been effectively utilized as value-added products. This study investigated the kinetics of oil extraction from tomato seeds and sought to optimize the oil extraction conditions. The oil was extracted by using hexane as solvent for 0 t...

  14. Multiobjective evolutionary optimization of water distribution systems: Exploiting diversity with infeasible solutions.

    PubMed

    Tanyimboh, Tiku T; Seyoum, Alemtsehay G

    2016-12-01

    This article investigates the computational efficiency of constraint handling in multi-objective evolutionary optimization algorithms for water distribution systems. The methodology investigated here encourages the co-existence and simultaneous development, including crossbreeding, of subpopulations of cost-effective feasible and infeasible solutions based on Pareto dominance. This yields a boundary search approach that also promotes diversity in the gene pool throughout the progress of the optimization by exploiting the full spectrum of non-dominated infeasible solutions. The relative effectiveness of small and moderate population sizes with respect to the number of decision variables is also investigated. The results reveal the optimization algorithm to be efficient, stable and robust. It found optimal and near-optimal solutions reliably and efficiently. The optimization problem, based on a real-world system, involved multiple variable-head supply nodes, 29 fire-fighting flows, extended period simulation and multiple demand categories including water loss. The least-cost solutions found satisfied the flow and pressure requirements consistently. The best solutions achieved indicative savings of 48.1% and 48.2%, based on the cost of the pipes in the existing network, for populations of 200 and 1000, respectively. The population of 1000 achieved slightly better results overall. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
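    The Pareto-dominance filtering that lets cheap but infeasible solutions survive alongside feasible ones can be sketched in a few lines. The (cost, pressure-deficit) pairs below are made-up numbers, not the paper's network data:

    ```python
    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (minimization)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def nondominated(pop):
        """Return the non-dominated subset of a population of objective vectors."""
        return [p for p in pop if not any(dominates(q, p) for q in pop if q is not p)]

    # Hypothetical (cost, pressure-deficit) pairs: feasible designs have zero
    # deficit. A cheap infeasible design survives the filter, keeping the
    # search near the feasible boundary, as in the boundary-search approach.
    pop = [(100.0, 0.0), (60.0, 5.0), (80.0, 0.0), (90.0, 2.0), (120.0, 0.0)]
    front = nondominated(pop)
    ```

    Here the front retains both the cheapest feasible design (80, 0) and the cheaper infeasible one (60, 5); crossbreeding between the two subpopulations is what promotes the diversity the abstract describes.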

  15. Real-time trajectory optimization on parallel processors

    NASA Technical Reports Server (NTRS)

    Psiaki, Mark L.

    1993-01-01

    A parallel algorithm has been developed for rapidly solving trajectory optimization problems. The goal of the work has been to develop an algorithm suitable for real-time, on-line optimal guidance through repeated solution of a trajectory optimization problem. The algorithm has been developed on an INTEL iPSC/860 message passing parallel processor. It uses a zero-order-hold discretization of a continuous-time problem and solves the resulting nonlinear programming problem using a custom-designed augmented Lagrangian nonlinear programming algorithm. The algorithm achieves parallelism of function, derivative, and search direction calculations through the principle of domain decomposition applied along the time axis. It has been encoded and tested on three example problems: the Goddard problem; the acceleration-limited, planar minimum-time-to-the-origin problem; and a National Aerospace Plane minimum-fuel ascent guidance problem. Execution times as fast as 118 sec of wall clock time have been achieved for a 128-stage Goddard problem solved on 32 processors. A 32-stage minimum-time problem has been solved in 151 sec on 32 processors. A 32-stage National Aerospace Plane problem required 2 hours when solved on 32 processors. A speed-up factor of 7.2 has been achieved by using 32 nodes instead of 1 node to solve a 64-stage Goddard problem.

  16. Assisting Pupils in Mathematics Achievement (The Common Core Standards)

    ERIC Educational Resources Information Center

    Ediger, Marlow

    2011-01-01

    Mathematics teachers must expect reasonably high standards of achievement from pupils. Too frequently, pupils achieve at a substandard level, and better achievement is necessary. Thus, pupils should have self-esteem needs met in the school and classroom setting, so that learners feel that mathematics is worthwhile and effort must be put forth to…

  17. Optimization of hydraulic turbine governor parameters based on WPA

    NASA Astrophysics Data System (ADS)

    Gao, Chunyang; Yu, Xiangyang; Zhu, Yong; Feng, Baohao

    2018-01-01

    The parameters of the hydraulic turbine governor directly affect the dynamic characteristics of the hydraulic unit, thus affecting the regulation capacity and the power quality of the power grid. The governor of a conventional hydropower unit is mainly a PID governor with three adjustable parameters, which are difficult to tune. In order to optimize the hydraulic turbine governor, this paper proposes the wolf pack algorithm (WPA) for intelligent tuning, owing to its good global optimization capability. Compared with the traditional optimization method and the PSO algorithm, the results show that the PID controller designed by WPA achieves good dynamic quality of the hydraulic system and inhibits overshoot.
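    As a rough sketch of optimization-based PID tuning, the toy below tunes gains for the first-order plant dy/dt = -y + u by minimizing the integral of squared error for a unit step. Plain random search stands in for WPA, whose update rules the abstract does not detail, and the plant is a stand-in, not a hydraulic governor model:

    ```python
    import random

    def ise(gains, dt=0.01, steps=1500):
        """Integral-of-squared-error of a PID step response for the toy plant
        dy/dt = -y + u, integrated with the forward Euler method."""
        kp, ki, kd = gains
        y = integ = prev_e = 0.0
        cost = 0.0
        for _ in range(steps):
            e = 1.0 - y                      # unit step reference
            integ += e * dt
            deriv = (e - prev_e) / dt        # derivative kick at t=0 accepted
            prev_e = e
            u = kp * e + ki * integ + kd * deriv
            y += (-y + u) * dt               # Euler step of the plant
            cost += e * e * dt
        return cost

    # Random search stands in for the wolf pack algorithm: perturb the best
    # gains, keep a candidate only if it lowers the cost.
    random.seed(0)
    best = (1.0, 0.0, 0.0)
    best_cost = ise(best)
    for _ in range(300):
        cand = tuple(max(0.0, g + random.gauss(0, 0.5)) for g in best)
        c = ise(cand)
        if c < best_cost:
            best, best_cost = cand, c
    ```

    Because only improving candidates are accepted, the tuned cost can never exceed that of the initial proportional-only gains; a real WPA run would explore the gain space far more systematically.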

  18. Optimizing the multicycle subrotational internal cooling of diatomic molecules

    NASA Astrophysics Data System (ADS)

    Aroch, A.; Kallush, S.; Kosloff, R.

    2018-05-01

    Subrotational cooling of the AlH+ ion to the millikelvin regime, using optimally shaped pulses, is computed. The coherent electromagnetic fields induce purity-conserving transformations and do not change the sample temperature. A decrease in sample temperature, manifested by an increase of purity, is achieved by the complementary uncontrolled spontaneous emission, which changes the entropy of the system. We employ optimal control theory to find a pulse that steers the system into a population configuration that will result in cooling upon multicycle excitation-emission steps. The obtained optimal transformation was shown to be capable of cooling molecular ions to the subkelvin regime.

  19. Memetic Algorithm-Based Multi-Objective Coverage Optimization for Wireless Sensor Networks

    PubMed Central

    Chen, Zhi; Li, Shuai; Yue, Wenjing

    2014-01-01

    Maintaining effective coverage and extending the network lifetime as much as possible has become one of the most critical issues in the coverage of WSNs. In this paper, we propose a multi-objective coverage optimization algorithm for WSNs, namely MOCADMA, which models the coverage control of WSNs as a multi-objective optimization problem. MOCADMA uses a memetic algorithm with a dynamic local search strategy to optimize the coverage of WSNs and achieve objectives such as high network coverage, effective node utilization and more residual energy. In MOCADMA, the alternative solutions are represented as chromosomes in matrix form, and the optimal solutions are selected through numerous iterations of the evolution process, including selection, crossover, mutation, local enhancement, and fitness evaluation. The experiment and evaluation results show that MOCADMA maintains sensing coverage well, achieves higher network coverage while improving energy efficiency, effectively prolongs the network lifetime, and offers a significant improvement over some existing algorithms. PMID:25360579

  20. Memetic algorithm-based multi-objective coverage optimization for wireless sensor networks.

    PubMed

    Chen, Zhi; Li, Shuai; Yue, Wenjing

    2014-10-30

    Maintaining effective coverage and extending the network lifetime as much as possible has become one of the most critical issues in the coverage of WSNs. In this paper, we propose a multi-objective coverage optimization algorithm for WSNs, namely MOCADMA, which models the coverage control of WSNs as a multi-objective optimization problem. MOCADMA uses a memetic algorithm with a dynamic local search strategy to optimize the coverage of WSNs and achieve objectives such as high network coverage, effective node utilization and more residual energy. In MOCADMA, the alternative solutions are represented as chromosomes in matrix form, and the optimal solutions are selected through numerous iterations of the evolution process, including selection, crossover, mutation, local enhancement, and fitness evaluation. The experiment and evaluation results show that MOCADMA maintains sensing coverage well, achieves higher network coverage while improving energy efficiency, effectively prolongs the network lifetime, and offers a significant improvement over some existing algorithms.
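    The memetic recipe described above (evolutionary operators plus a local search applied to offspring) can be sketched on a toy objective. The sphere function below stands in for the paper's coverage and energy objectives, which would require a full WSN model:

    ```python
    import random

    def fitness(x):
        """Toy objective to minimize: the sphere function, a stand-in for the
        paper's coverage/energy fitness evaluation."""
        return sum(v * v for v in x)

    def local_search(x, step=0.1, iters=20):
        """Greedy hill climb: the 'memetic' refinement applied to offspring."""
        best = list(x)
        for _ in range(iters):
            cand = [v + random.uniform(-step, step) for v in best]
            if fitness(cand) < fitness(best):
                best = cand
        return best

    def memetic(dim=4, pop_size=20, gens=40):
        random.seed(0)
        pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness)
            elite = pop[: pop_size // 2]          # selection
            children = []
            while len(children) < pop_size - len(elite):
                a, b = random.sample(elite, 2)
                cut = random.randrange(1, dim)
                child = a[:cut] + b[cut:]         # one-point crossover
                if random.random() < 0.2:         # mutation
                    i = random.randrange(dim)
                    child[i] += random.gauss(0, 0.5)
                children.append(local_search(child))  # local enhancement
            pop = elite + children
        return min(pop, key=fitness)

    best = memetic()
    ```

    The local enhancement step is what distinguishes a memetic algorithm from a plain genetic algorithm: each child is polished before it competes, which typically speeds convergence at the cost of extra fitness evaluations.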

  1. On optimization of energy harvesting from base-excited vibration

    NASA Astrophysics Data System (ADS)

    Tai, Wei-Che; Zuo, Lei

    2017-12-01

    This paper re-examines and clarifies the long-believed optimization conditions of electromagnetic and piezoelectric energy harvesting from base-excited vibration. In terms of electromagnetic energy harvesting, it is typically believed that the maximum power is achieved when the excitation frequency and electrical damping equal the natural frequency and mechanical damping of the mechanical system respectively. We will show that this optimization condition is only valid when the acceleration amplitude of base excitation is constant and an approximation for small mechanical damping when the excitation displacement amplitude is constant. To this end, a two-variable optimization analysis, involving the normalized excitation frequency and electrical damping ratio, is performed to derive the exact optimization condition of each case. When the excitation displacement amplitude is constant, we analytically show that, in contrast to the long-believed optimization condition, the optimal excitation frequency and electrical damping are always larger than the natural frequency and mechanical damping ratio respectively. In particular, when the mechanical damping ratio exceeds a critical value, the optimization condition is no longer valid. Instead, the average power generally increases as the excitation frequency and electrical damping ratio increase. Furthermore, the optimization analysis is extended to consider parasitic electrical losses, which also shows different results when compared with existing literature. When the excitation acceleration amplitude is constant, on the other hand, the exact optimization condition is identical to the long-believed one. In terms of piezoelectric energy harvesting, it is commonly believed that the optimal power efficiency is achieved when the excitation and the short or open circuit frequency of the harvester are equal. Via a similar two-variable optimization analysis, we analytically show that the optimal excitation frequency depends on the
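    For the constant-acceleration case, where the abstract reports that the classical condition survives, a two-variable grid search over the standard single-degree-of-freedom harvester model recovers it. The normalized power expression below is the textbook lumped model under the stated assumptions, not necessarily the paper's exact formulation:

    ```python
    import numpy as np

    zeta_m = 0.05   # mechanical damping ratio (illustrative value)

    def avg_power(omega, zeta_e, zeta_m=zeta_m):
        """Normalized average harvested power for constant base-acceleration
        amplitude, standard lumped 1-DOF electromagnetic harvester model.
        omega is the excitation-to-natural frequency ratio."""
        denom = (1.0 - omega**2) ** 2 + (2.0 * (zeta_m + zeta_e) * omega) ** 2
        return zeta_e * omega**2 / denom

    # Two-variable grid search over excitation frequency ratio and electrical
    # damping, mirroring the paper's two-variable optimization analysis.
    omegas = np.linspace(0.5, 1.5, 1001)
    zetas = np.linspace(0.005, 0.5, 991)
    W, Z = np.meshgrid(omegas, zetas)
    P = avg_power(W, Z)
    i, j = np.unravel_index(P.argmax(), P.shape)
    omega_opt, zeta_e_opt = W[i, j], Z[i, j]
    # Classical condition for constant acceleration:
    # omega_opt at resonance (1) and zeta_e_opt matching zeta_m.
    ```

    The grid maximum lands at resonance with electrical damping equal to the mechanical damping, which is exactly the "long-believed" condition the abstract says remains exact for constant-acceleration excitation.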

  2. First-Order Frameworks for Managing Models in Engineering Optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Approximation/model management optimization (AMMO) is a rigorous methodology for attaining solutions of high-fidelity optimization problems with minimal expense in high-fidelity function and derivative evaluation. First-order AMMO frameworks allow for a wide variety of models and underlying optimization algorithms. Recent demonstrations with aerodynamic optimization achieved three-fold savings in terms of high-fidelity function and derivative evaluation in the case of variable-resolution models and five-fold savings in the case of variable-fidelity physics models. The savings are problem dependent but certain trends are beginning to emerge. We give an overview of the first-order frameworks, current computational results, and an idea of the scope of the first-order framework applicability.

  3. Strategies for Fermentation Medium Optimization: An In-Depth Review

    PubMed Central

    Singh, Vineeta; Haque, Shafiul; Niwas, Ram; Srivastava, Akansha; Pasupuleti, Mukesh; Tripathi, C. K. M.

    2017-01-01

    Optimization of the production medium is required to maximize the metabolite yield. This can be achieved by using a wide range of techniques, from the classical “one-factor-at-a-time” approach to modern statistical and mathematical techniques such as the artificial neural network (ANN) and genetic algorithm (GA). Every technique comes with its own advantages and disadvantages, and despite the drawbacks, some techniques are applied to obtain the best results. The use of various optimization techniques in combination also provides desirable results. In this article an attempt has been made to review the media optimization techniques currently applied during the fermentation process of metabolite production. A comparative analysis of the merits and demerits of various conventional as well as modern optimization techniques has been done, and a logical basis for designing the fermentation medium is given in the present review. Overall, this review provides the rationale for selecting a suitable optimization technique for media design during the fermentation process of metabolite production. PMID:28111566

  4. Optimization of spent fuel pool weir gate driving mechanism

    NASA Astrophysics Data System (ADS)

    Liu, Chao; Du, Lin; Tao, Xinlei; Wang, Shijie; Shang, Ertao; Yu, Jianjiang

    2018-04-01

    The spent fuel pool is a crucial facility for fuel storage and nuclear safety, and the spent fuel pool weir gate is the key related equipment. In order to achieve more efficient driving-force transfer, the loading during the opening/closing process is analyzed and an optimized calculation method for the dimensions of the driving mechanism is proposed. The result of an optimization example shows that the method can be applied to the design of weir gates with similar driving mechanisms.

  5. Turbomachinery Airfoil Design Optimization Using Differential Evolution

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    An aerodynamic design optimization procedure that is based on an evolutionary algorithm known as Differential Evolution is described. Differential Evolution is a simple, fast, and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems, including highly nonlinear systems with discontinuities and multiple local optima. The method is combined with a Navier-Stokes solver that evaluates the various intermediate designs and provides inputs to the optimization procedure. An efficient constraint handling mechanism is also incorporated. Results are presented for the inverse design of a turbine airfoil from a modern jet engine. The capability of the method to search large design spaces and obtain the optimal airfoils in an automatic fashion is demonstrated. Substantial reductions in the overall computing time requirements are achieved by using the algorithm in conjunction with neural networks.
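    The DE/rand/1/bin scheme at the core of such procedures is compact enough to sketch. The toy below minimizes a sphere function; the paper instead couples the evolutionary search with a Navier-Stokes solver and neural-network surrogates, which is far beyond this illustration:

    ```python
    import random

    def de_minimize(f, bounds, np_=20, F=0.8, CR=0.9, gens=200, seed=0):
        """Minimal DE/rand/1/bin: mutate with a scaled difference of two
        random members, binomially cross with the target, keep the better."""
        rng = random.Random(seed)
        dim = len(bounds)
        pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
        cost = [f(x) for x in pop]
        for _ in range(gens):
            for i in range(np_):
                a, b, c = rng.sample([x for j, x in enumerate(pop) if j != i], 3)
                jr = rng.randrange(dim)          # ensure one mutated gene
                trial = [
                    a[k] + F * (b[k] - c[k]) if (rng.random() < CR or k == jr)
                    else pop[i][k]
                    for k in range(dim)
                ]
                trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
                fc = f(trial)
                if fc <= cost[i]:                # greedy one-to-one selection
                    pop[i], cost[i] = trial, fc
        best = min(range(np_), key=lambda i: cost[i])
        return pop[best], cost[best]

    x, fx = de_minimize(lambda v: sum(t * t for t in v), [(-5, 5)] * 3)
    ```

    Only three control parameters (population size, F, CR) need setting, which is part of why DE is attractive for expensive black-box design problems like the airfoil inverse design described above.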

  6. The Value of Full Correction: Achieving Excellent and Affordable Results.

    PubMed

    Kaplan, Julie Bass

    2016-01-01

    Patients often come to medical aesthetic offices with hopes of fully correcting lost facial volume and achieving a natural appearance. Unfortunately, the cost per syringe of dermal filler can be a barrier to desired outcomes. Many aesthetic practitioners do the best they can with the amount of product the patient can afford, often falling short of the "wow" effect for the patient. This article describes what one office implemented to solve the conundrum of affordability while still allowing offices to cover their own financial realities. This tool can help patients achieve beautiful, natural, and affordable outcomes while helping offices advance in manufacturers' tiers, improve word-of-mouth advertising, and increase job satisfaction.

  7. Achieving Consistent Near-Optimal Pattern Recognition Accuracy Using Particle Swarm Optimization to Pre-Train Artificial Neural Networks

    ERIC Educational Resources Information Center

    Nikelshpur, Dmitry O.

    2014-01-01

    Similar to mammalian brains, Artificial Neural Networks (ANN) are universal approximators, capable of yielding near-optimal solutions to a wide assortment of problems. ANNs are used in many fields including medicine, internet security, engineering, retail, robotics, warfare, intelligence control, and finance. "ANNs have a tendency to get…

  8. Extracting remaining information from an inconclusive result in optimal unambiguous state discrimination

    NASA Astrophysics Data System (ADS)

    Zhang, Gang; Yu, Long-Bao; Zhang, Wen-Hai; Cao, Zhuo-Liang

    2014-12-01

    In unambiguous state discrimination, the measurement results consist of the error-free results and an inconclusive result, and the inconclusive result is conventionally regarded as a useless remainder from which no information about the initial states can be extracted. In this paper, we investigate the problem of extracting the remaining information from an inconclusive result, given that the optimal total success probability is attained. We present three simple examples. Partial information can be extracted from an inconclusive answer in the first two examples, but not in the third. The initial states in the third example are defined as the highly symmetric states.

  9. Spatio-temporal optimization of agricultural practices to achieve a sustainable development at basin level; framework of a case study in Colombia

    NASA Astrophysics Data System (ADS)

    Uribe, Natalia; corzo, Gerald; Solomatine, Dimitri

    2016-04-01

    The flood events of recent years in different basins of the Colombian territory have raised questions about the sensitivity of the regions and whether these regions share common features. Previous studies suggest that important features in the sensitivity of the flood process were land cover change and precipitation anomalies, related to the impacts of agricultural management and water management deficiencies, among others. A significant government investment in outreach activities for adopting and promoting the Colombia National Action Plan on Climate Change (NAPCC) is being carried out in different sectors and regions, with the agriculture sector as a priority. However, more information is still needed in the local environment in order to assess where the regions have this sensitivity. The continuous change in a region with seasonal agricultural practices has also been pointed out as critical information for optimal sustainable development. This combined spatio-temporal dynamics of crop cycles in relation to climate change (or variations) has an important impact on flooding events in basin areas. This research develops the assessment and optimization of the aggregated impact of flood events by determining the spatio-temporal dynamics of changes in agricultural management practices. A number of common best agricultural practices have been identified in order to explore their effect in a spatial hydrological model that will evaluate overall changes. The optimization process consists of evaluating the best performance in agricultural production, without having to change crop activities or move to other regions. To achieve these objectives, a deep analysis of different models combined with current and future climate scenarios has been planned. An algorithm has been formulated to cover the parametric updates such that the optimal temporal identification can be evaluated in different regions of the case study area. Different hydroinformatics

  10. Affectionless control by the same-sex parents increases dysfunctional attitudes about achievement.

    PubMed

    Otani, Koichi; Suzuki, Akihito; Matsumoto, Yoshihiko; Sadahiro, Ryoichi; Enokido, Masanori

    2014-08-01

    Affectionless control parenting has been associated with depression in recipients. The aim of this study was to examine the effect of this parenting style on dysfunctional attitudes predisposing to depression. The subjects were 666 Japanese volunteers. Perceived parental rearing was evaluated by the Parental Bonding Instrument, which has the care and protection subscales. Parental rearing was classified into four types, i.e., optimal parenting (high care/low protection), affectionate constraint (high care/high protection), neglectful parenting (low care/low protection), and affectionless control (low care/high protection). Dysfunctional attitudes were evaluated by the 24-item Dysfunctional Attitude Scale, which has the achievement, dependency and self-control subscales. Males with paternal affectionless control had higher achievement scores than those with paternal optimal parenting (P=.016). Similarly, females with maternal affectionless control had higher achievement scores than those with maternal optimal parenting (P=.016). The present study suggests that affectionless control by the same-sex parents increases dysfunctional attitudes about achievement. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  11. Optimal Energy Consumption Analysis of Natural Gas Pipeline

    PubMed Central

    Liu, Enbin; Li, Changjun; Yang, Yi

    2014-01-01

    There are many compressor stations along long-distance natural gas pipelines. Natural gas can be transported using different boot programs and import pressures, combined with temperature control parameters. Moreover, different transport methods have correspondingly different energy consumptions. At present, the operating parameters of many pipelines are determined empirically by dispatchers, resulting in high energy consumption. This practice does not abide by energy reduction policies. Therefore, based on a full understanding of the actual needs of pipeline companies, we introduce production unit consumption indicators to establish an objective function for achieving the goal of lowering energy consumption. By using a dynamic programming method for solving the model and preparing calculation software, we can ensure that the solution process is quick and efficient. Using established optimization methods, we analyzed the energy savings for the XQ gas pipeline. By optimizing the boot program, the import station pressure, and the temperature parameters, we achieved the optimal energy consumption. By comparison with the measured energy consumption, the pipeline now has the potential to reduce energy consumption by 11 to 16 percent. PMID:24955410
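
    The dynamic-programming formulation described in the abstract can be illustrated in miniature. The pressure grid, line-loss constant, and logarithmic energy model below are invented placeholders, not the paper's model:

```python
# Illustrative sketch (not the authors' software): choosing discrete outlet
# pressures at successive compressor stations with dynamic programming to
# minimize total compression energy. Pressures, losses, and the energy
# model are simplified placeholder assumptions.

import math

STATIONS = 3
PRESSURES = [5.0, 6.0, 7.0, 8.0]   # candidate outlet pressures, MPa
LINE_DROP = 1.5                     # assumed pressure drop per segment, MPa

def energy(p_in, p_out):
    """Toy compression-energy model: proportional to the log of the
    pressure ratio; pressure letdown (p_out <= p_in) is treated as free."""
    if p_out <= p_in:
        return 0.0
    return 100.0 * math.log(p_out / p_in)

def optimize(p_start):
    # cost[p] = minimum energy to arrive at a station with inlet pressure p
    cost = {p_start: 0.0}
    for _ in range(STATIONS):
        nxt = {}
        for p_in, c in cost.items():
            for p_out in PRESSURES:
                arrival = p_out - LINE_DROP      # inlet pressure at next station
                step = c + energy(p_in, p_out)
                if arrival not in nxt or step < nxt[arrival]:
                    nxt[arrival] = step
        cost = nxt
    return min(cost.values())

print(round(optimize(4.0), 2))
```

    Because each stage's decision depends only on the current inlet pressure, the state space stays small and the solve is immediate, which is the property the abstract exploits for quick, efficient solutions.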

  12. Regular Deployment of Wireless Sensors to Achieve Connectivity and Information Coverage

    PubMed Central

    Cheng, Wei; Li, Yong; Jiang, Yi; Yin, Xipeng

    2016-01-01

    Coverage and connectivity are two of the most critical research subjects in WSNs, while regular deterministic deployment is an important deployment strategy that results in pattern-based lattice WSNs. Some studies of optimal regular deployment for generic values of rc/rs have appeared recently. However, most of these deployments assume a disk sensing model and cannot take advantage of data fusion. Meanwhile, other studies apply detection techniques and data fusion to sensing coverage to enhance the deployment scheme. In this paper, we provide results on optimal regular deployment patterns that achieve information coverage and connectivity for a variety of rc/rs values, all based on data fusion through sensor collaboration, and propose a novel data fusion strategy for deployment patterns. First, the relation between the rc/rs ratio and the density of sensors needed to achieve information coverage and connectivity is derived in closed form for regular pattern-based lattice WSNs. Then a dual triangular pattern deployment based on our novel data fusion strategy is proposed, which utilizes collaborative data fusion more efficiently. The strip-based deployment is also extended to a new pattern that achieves information coverage and connectivity, and its characteristics are deduced in closed form. Discussions and simulations are given to show the efficiency of all deployment patterns, including previous patterns and the proposed patterns, to help developers make more informed WSN deployment decisions. PMID:27529246
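
    The density-versus-rc/rs relation has a well-known disk-model baseline that is easy to state; the sketch below assumes the textbook triangular-lattice result (it is not the paper's information-coverage derivation, which achieves lower densities via fusion):

```python
# Back-of-the-envelope sketch (classical disk sensing model, triangular
# lattice) showing how the rc/rs ratio drives the sensor density needed
# for full coverage plus 1-connectivity. This is the baseline that
# information-coverage schemes improve upon.

import math

def triangular_lattice_density(rs, rc):
    """Sensors per unit area for a triangular lattice: full disk coverage
    needs spacing <= sqrt(3)*rs, connectivity needs spacing <= rc."""
    spacing = min(math.sqrt(3) * rs, rc)
    cell_area = (math.sqrt(3) / 2) * spacing ** 2   # area "owned" per node
    return 1.0 / cell_area

# When rc/rs >= sqrt(3), coverage is the binding constraint:
print(triangular_lattice_density(rs=1.0, rc=2.0))
# When rc/rs < sqrt(3), connectivity dominates and required density rises:
print(triangular_lattice_density(rs=1.0, rc=1.0))
```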

  13. Design of optimized piezoelectric HDD-sliders

    NASA Astrophysics Data System (ADS)

    Nakasone, Paulo H.; Yoo, Jeonghoon; Silva, Emilio C. N.

    2010-04-01

    As storage data density in hard-disk drives (HDDs) increases for constant or miniaturizing sizes, precision positioning of HDD heads becomes a more relevant issue to ensure that enormous amounts of data are properly written and read. Since the traditional single-stage voice coil motor (VCM) cannot satisfy the positioning requirements of high-density tracks-per-inch (TPI) HDDs, dual-stage servo systems have been proposed to overcome this problem, using the VCM to coarsely move the HDD head while a piezoelectric actuator provides fine and fast positioning. Thus, the aim of this work is to apply the topology optimization method (TOM) to design novel piezoelectric HDD heads, by finding the optimal placement of base-plate and piezoelectric material for high-precision positioning of HDD heads. The topology optimization method is a structural optimization technique that combines the finite element method (FEM) with optimization algorithms. The laminated finite element employs the MITC (mixed interpolation of tensorial components) formulation to provide accurate and reliable results. The topology optimization uses a rational approximation of material properties to vary the material properties between 'void' and 'filled' portions. The design problem consists of generating optimal structures that provide maximal displacements, appropriate structural stiffness, and avoidance of resonance phenomena. These requirements are achieved by applying formulations that maximize displacements, minimize structural compliance, and maximize resonance frequencies. This paper presents the implementation of the algorithms and shows results confirming the feasibility of this approach.

  14. Results of a workplace health campaign: what can be achieved?

    PubMed

    Leyk, Dieter; Rohde, Ulrich; Hartmann, Nadine D; Preuß, Philipp A; Sievert, Alexander; Witzki, Alexander

    2014-05-02

    Effective health promotion in the workplace is now essential because of the rising health-related costs for businesses, the increasing pressure arising from international competition, prolonged working lives, and the aging of the work force. The basic problem of prevention campaigns is that the target groups are too rarely reached and sustainable benefits too rarely achieved. In 2011, we carried out a broad-based health and fitness campaign to assess how many personnel could be motivated to participate in a model study under nearly ideal conditions. 1010 personnel were given the opportunity to participate in various kinds of sports, undergo sports-medicine examinations, attend monthly expert lectures, and benefit from nutritional offerings and Intranet information during work hours. Pseudonymized questionnaires were used to classify the participants according to their exercise behavior as non-active, not very active, and very active. The participants' subjective responses (regarding, e.g., health, exercise, nutrition, and the factors that motivated them to participate in sports or discouraged them from doing so) were recorded, as were their objective data (measures of body size and strength). The duration of the study was one year. 490 of the 1010 personnel (48.5%, among whom 27.2% were non-active, 44.1% not very active, and 28.7% very active) participated in the initial questionnaire and testing. By the end of the study, this figure had dropped to 17.8%; diminished participation affected all three groups to a comparable extent. A comparison of dropouts and non-dropouts revealed that older age was a stable predictor of drop-out (bivariate odds ratio [OR] 1.028, p = 0.006; multivariate OR 1.049, p = 0.009). The study participants reported beneficial effects on their health and health awareness, performance ability, psychological balance, stress perception, and exercise and dietary behavior.
Even under optimal conditions and with high use of staff resources, this model

  15. A Multi-Verse Optimizer with Levy Flights for Numerical Optimization and Its Application in Test Scheduling for Network-on-Chip.

    PubMed

    Hu, Cong; Li, Zhi; Zhou, Tian; Zhu, Aijun; Xu, Chuanpei

    2016-01-01

    We propose a new meta-heuristic algorithm named Levy flights multi-verse optimizer (LFMVO), which incorporates Levy flights into the multi-verse optimizer (MVO) algorithm to solve numerical and engineering optimization problems. The original MVO easily falls into stagnation when wormholes stochastically re-span a number of universes (solutions) around the best universe achieved over the course of iterations. Since Levy flights are superior in exploring unknown, large-scale search spaces, they are integrated into the previous best universe to force MVO out of stagnation. We test this method on three sets of 23 well-known benchmark test functions and an NP-complete problem of test scheduling for Network-on-Chip (NoC). Experimental results show that the proposed LFMVO is more competitive than its peers in both the quality of the resulting solutions and convergence speed.
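
    The Levy-flight ingredient is standard enough to sketch. The code below assumes Mantegna's algorithm for drawing the heavy-tailed step lengths and a simplified perturbation of the best solution; the full LFMVO update is more involved:

```python
# Hedged sketch of the Levy-flight mechanism: Mantegna's algorithm for
# heavy-tailed step lengths, the usual way Levy flights are injected
# into metaheuristics. The perturbation of the best solution below is
# our simplified assumption, not the paper's exact update rule.

import math
import random

def levy_step(beta=1.5):
    """One Levy-distributed step length via Mantegna's algorithm."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma_u)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def perturb_best(best, scale=0.01, beta=1.5):
    """Move each coordinate of the best solution by a scaled Levy step;
    the occasional very long jump is what breaks stagnation."""
    return [x + scale * levy_step(beta) for x in best]

random.seed(1)
print(perturb_best([0.0, 0.0]))
```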

  16. A Multi-Verse Optimizer with Levy Flights for Numerical Optimization and Its Application in Test Scheduling for Network-on-Chip

    PubMed Central

    Hu, Cong; Li, Zhi; Zhou, Tian; Zhu, Aijun; Xu, Chuanpei

    2016-01-01

    We propose a new meta-heuristic algorithm named Levy flights multi-verse optimizer (LFMVO), which incorporates Levy flights into the multi-verse optimizer (MVO) algorithm to solve numerical and engineering optimization problems. The original MVO easily falls into stagnation when wormholes stochastically re-span a number of universes (solutions) around the best universe achieved over the course of iterations. Since Levy flights are superior in exploring unknown, large-scale search spaces, they are integrated into the previous best universe to force MVO out of stagnation. We test this method on three sets of 23 well-known benchmark test functions and an NP-complete problem of test scheduling for Network-on-Chip (NoC). Experimental results show that the proposed LFMVO is more competitive than its peers in both the quality of the resulting solutions and convergence speed. PMID:27926946

  17. Electrical latency predicts the optimal left ventricular endocardial pacing site: results from a multicentre international registry.

    PubMed

    Sieniewicz, Benjamin J; Behar, Jonathan M; Sohal, Manav; Gould, Justin; Claridge, Simon; Porter, Bradley; Niederer, Steve; Gamble, James H P; Betts, Tim R; Jais, Pierre; Derval, Nicolas; Spragg, David D; Steendijk, Paul; van Gelder, Berry M; Bracke, Frank A; Rinaldi, Christopher A

    2018-04-23

    The optimal site for biventricular endocardial (BiVENDO) pacing remains undefined. Acute haemodynamic response (AHR) is a reproducible marker of left ventricular (LV) contractility, best expressed as the change in the maximum rate of LV pressure (LV-dp/dtmax) from a baseline state. We examined the relationship between factors known to impact LV contractility whilst delivering BiVENDO pacing at a variety of LV endocardial (LVENDO) locations. We compiled a registry of acute LVENDO pacing studies from five international centres: Johns Hopkins-USA, Bordeaux-France, Eindhoven-The Netherlands, Oxford-United Kingdom, and Guys and St Thomas' NHS Foundation Trust, London-UK. In all, 104 patients incorporating 687 endocardial and 93 epicardial pacing locations were studied. Mean age was 66 ± 11 years, mean left ventricular ejection fraction 24.6 ± 7.7% and mean QRS duration 163 ± 30 ms. In all, 50% were ischaemic [ischaemic cardiomyopathy (ICM)]. Scarred segments were associated with worse haemodynamics (dp/dtmax; 890 mmHg/s vs. 982 mmHg/s, P < 0.01). Delivering BiVENDO pacing in areas of electrical latency was associated with greater improvements in AHR (P < 0.01). Stimulating late activating tissue (LVLED > 50%) achieved greater increases in AHR than non-late activating tissue (LVLED < 50%) (8.6 ± 9.6% vs. 16.1 ± 16.2%, P = 0.002). However, the LVENDO pacing location with the latest Q-LV was associated with the optimal AHR in just 62% of cases. Identifying viable LVENDO tissue which displays late electrical activation is crucial to identifying the optimal BiVENDO pacing site. Stimulating late activating tissue (LVLED > 50%) yields greater improvements in AHR; however, the optimal location is frequently not the site of latest activation.

  18. Application of controller partitioning optimization procedure to integrated flight/propulsion control design for a STOVL aircraft

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Schmidt, Phillip H.

    1993-01-01

    A parameter optimization framework has earlier been developed to solve the problem of partitioning a centralized controller into a decentralized, hierarchical structure suitable for integrated flight/propulsion control implementation. This paper presents results from the application of the controller partitioning optimization procedure to IFPC design for a Short Take-Off and Vertical Landing (STOVL) aircraft in transition flight. The controller partitioning problem and the parameter optimization algorithm are briefly described. Insight is provided into choosing various 'user' selected parameters in the optimization cost function such that the resulting optimized subcontrollers will meet the characteristics of the centralized controller that are crucial to achieving the desired closed-loop performance and robustness, while maintaining the desired subcontroller structure constraints that are crucial for IFPC implementation. The optimization procedure is shown to improve upon the initial partitioned subcontrollers and lead to performance comparable to that achieved with the centralized controller. This application also provides insight into the issues that should be addressed at the centralized control design level in order to obtain implementable partitioned subcontrollers.

  19. Academic Optimism and Collective Responsibility: An Organizational Model of the Dynamics of Student Achievement

    ERIC Educational Resources Information Center

    Wu, Jason H.

    2013-01-01

    This study was designed to examine the construct of academic optimism and its relationship with collective responsibility in a sample of Taiwan elementary schools. The construct of academic optimism was tested using confirmatory factor analysis, and the whole structural model was tested with a structural equation modeling analysis. The data were…

  20. A Numerical Optimization Approach for Tuning Fuzzy Logic Controllers

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Garg, Devendra P.

    1998-01-01

    This paper develops a method to tune fuzzy controllers using numerical optimization. The main attribute of this approach is that it allows fuzzy logic controllers to be tuned to achieve global performance requirements. Furthermore, this approach allows design constraints to be implemented during the tuning process. The method tunes the controller by parameterizing the membership functions for error, change-in-error and control output. The resulting parameters form a design vector which is iteratively changed to minimize an objective function. The minimal objective function results in an optimal performance of the system. A spacecraft mounted science instrument line-of-sight pointing control is used to demonstrate results.
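
    The tuning loop described above (parameterize membership functions and outputs into a design vector, then iteratively minimize an objective) can be sketched on a toy problem. The three-rule controller, first-order plant, and coordinate-descent optimizer below are illustrative stand-ins, not the paper's method:

```python
# Minimal sketch of numerically tuning a fuzzy controller (our own toy,
# not NASA's implementation): rule outputs form a design vector, and a
# derivative-free optimizer shrinks an integral-of-absolute-error
# objective measured by simulating the closed loop.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_control(err, out):
    """Three-rule Sugeno-style controller: negative/zero/positive error."""
    w = [tri(err, -2, -1, 0), tri(err, -1, 0, 1), tri(err, 0, 1, 2)]
    if sum(w) == 0:                      # saturate outside the support
        w = [1.0, 0.0, 0.0] if err < 0 else [0.0, 0.0, 1.0]
    return sum(wi * oi for wi, oi in zip(w, out)) / sum(w)

def cost(out, setpoint=1.0, dt=0.05, steps=100):
    """Integral of absolute error for the toy plant x' = u."""
    x, total = 0.0, 0.0
    for _ in range(steps):
        u = fuzzy_control(setpoint - x, out)
        x += dt * u
        total += abs(setpoint - x) * dt
    return total

# crude coordinate descent over the rule-output design vector
out = [0.0, 0.0, 0.0]
step = 1.0
for _ in range(40):
    improved = False
    for i in range(3):
        for d in (+step, -step):
            trial = out[:]
            trial[i] += d
            if cost(trial) < cost(out):
                out = trial
                improved = True
    if not improved:
        step /= 2

print(out, round(cost(out), 3))
```

    A real design would add constraints (e.g., actuator limits) as penalty terms in the same objective, which is the mechanism the abstract highlights.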

  1. Optimization of thermal processing of canned mussels.

    PubMed

    Ansorena, M R; Salvadori, V O

    2011-10-01

    The design and optimization of thermal processing of solid-liquid food mixtures, such as canned mussels, requires knowledge of the thermal history at the slowest heating point. In general, this point does not coincide with the geometrical center of the can, and the results show that it is located along the axial axis at a height that depends on the brine content. In this study, a mathematical model for the prediction of the temperature at this point was developed using the discrete transfer function approach. Transfer function coefficients were obtained experimentally, and prediction equations were fitted to account for other can dimensions and sampling intervals. This model was coupled with an optimization routine in order to search for different retort temperature profiles that maximize a quality index. Both constant retort temperature (CRT) and variable retort temperature (VRT; discrete step-wise and exponential) were considered. In the CRT process, the optimal retort temperature was always between 134 °C and 137 °C, and high values of thiamine retention were achieved. A significant improvement in surface quality index was obtained for optimal VRT profiles compared to optimal CRT. The optimization procedure shown in this study produces results that justify its industrial use.
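
    Quality indices such as thiamine retention are conventionally scored with a first-order degradation model driven by D- and z-values; the sketch below uses assumed thiamine-like parameters for illustration only, not the study's fitted model:

```python
# Illustrative calculation (not the authors' model): first-order thermal
# degradation expressed with D- and z-values, the standard way a quality
# index such as thiamine retention is scored when comparing retort
# temperature profiles. D_REF and Z are assumed placeholder values.

D_REF = 200.0   # min to destroy 90% of thiamine at T_REF (assumed)
T_REF = 121.0   # reference temperature, deg C
Z = 25.0        # temperature rise causing 10x faster destruction, deg C

def retention(profile, dt):
    """profile: product temperatures (deg C) sampled every dt minutes;
    returns the surviving fraction of the nutrient."""
    log_reduction = 0.0
    for temp in profile:
        d_value = D_REF * 10 ** ((T_REF - temp) / Z)
        log_reduction += dt / d_value
    return 10 ** (-log_reduction)

# Constant retort temperature example: product held at 135 deg C for 12 min
crt = [135.0] * 120
print(round(retention(crt, dt=0.1), 3))
```

    A VRT profile is evaluated the same way, simply by passing a non-constant temperature list, which is what makes profile search with an optimizer straightforward.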

  2. Optimal Window and Lattice in Gabor Transform. Application to Audio Analysis.

    PubMed

    Lachambre, Helene; Ricaud, Benjamin; Stempfel, Guillaume; Torrésani, Bruno; Wiesmeyr, Christoph; Onchis-Moaca, Darian

    2015-01-01

    This article deals with the use of optimal lattice and optimal window in Discrete Gabor Transform computation. In the case of a generalized Gaussian window, extending earlier contributions, we introduce an additional local window adaptation technique for non-stationary signals. We illustrate our approach and the earlier one by addressing three time-frequency analysis problems to show the improvements achieved by the use of optimal lattice and window: close frequencies distinction, frequency estimation and SNR estimation. The results are presented, when possible, with real world audio signals.

  3. Can paying for results help to achieve the Millennium Development Goals? Overview of the effectiveness of results-based financing.

    PubMed

    Oxman, Andrew D; Fretheim, Atle

    2009-05-01

    Results-based financing (RBF) and pay-for-performance refer to the transfer of money or material goods conditional on taking a measurable action or achieving a predetermined performance target. Results-based financing is widely advocated for achieving health goals, including the Millennium Development Goals. We undertook an overview of systematic reviews of the effectiveness of RBF. We searched the Cochrane Library, EMBASE, and MEDLINE (up to August 2007). We also searched for related articles in PubMed, checked the reference lists of retrieved articles, and contacted key informants. We included reviews with a methods section that addressed the effects of any results-based financing in the health sector targeted at patients, providers, organizations, or governments. We summarized the characteristics and findings of each review using a structured format. We found 12 systematic reviews that met our inclusion criteria. Based on the findings of these reviews, financial incentives targeting recipients of health care and individual healthcare professionals are effective in the short run for simple and distinct, well-defined behavioral goals. There is less evidence that financial incentives can sustain long-term changes. Conditional cash transfers to poor and disadvantaged groups in Latin America are effective at increasing the uptake of some preventive services. There is otherwise very limited evidence of the effects of results-based financing in low- or middle-income countries. Results-based financing can have undesirable effects, including motivating unintended behaviors, distortions (ignoring important tasks that are not rewarded with incentives), gaming (improving or cheating on reporting rather than improving performance), widening the resource gap between rich and poor, and dependency on financial incentives. There is limited evidence of the effectiveness of results-based financing and almost no evidence of the cost-effectiveness of results-based financing.
Based on the

  4. Achieving optimal welfare for the Nile hippopotamus (Hippopotamus amphibius) in North American zoos and aquariums.

    PubMed

    Tennant, Kaylin S; Segura, Valerie D; Morris, Megan C; Snyder, Kristen Denninger; Bocian, David; Maloney, Dan; Maple, Terry L

    2017-07-29

    Compared to other megafauna managed in zoos and aquariums, the current state of welfare for the Nile hippopotamus (Hippopotamus amphibius) is poorly understood. Complex behavior and physiological characteristics make hippos a difficult species to manage. Thus, hippos in managed care are currently at risk for a decreased state of welfare. In an effort to assess and improve conditions for this species, a survey was administered to North American institutions housing Nile hippos. This assessment utilized a multiple-choice format and consisted of questions relating to group structure, behavior, and exhibit design, allowing for the creation of cross-institutional, welfare-based analysis. Responses were gathered from 85.29% of the institutions to which the survey was distributed. Despite recommendations for maintaining groups of at least five individuals (Forthman, 1998), only 34.25% of hippos in North America were housed in groups of three or more. The survey also highlighted that 39.29% of institutions secure their hippos in holding areas overnight, despite their highly active nocturnal propensities. A better understanding of hippo behavior and environmental preferences can be used to inform wellness-oriented management practices to achieve a state of "optimal welfare". Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Multiobjective Particle Swarm Optimization for the optimal design of photovoltaic grid-connected systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kornelakis, Aris

    2010-12-15

    Particle Swarm Optimization (PSO) is a highly efficient evolutionary optimization algorithm. In this paper a multiobjective optimization algorithm based on PSO applied to the optimal design of photovoltaic grid-connected systems (PVGCSs) is presented. The proposed methodology intends to suggest the optimal number of system devices and the optimal PV module installation details, such that the economic and environmental benefits achieved during the system's operational lifetime period are both maximized. The objective function describing the economic benefit of the proposed optimization process is the lifetime system's total net profit, which is calculated according to the Net Present Value (NPV) method. The second objective function, which corresponds to the environmental benefit, equals the pollutant gas emissions avoided due to the use of the PVGCS. The optimization's decision variables are the optimal number of PV modules, the PV modules' optimal tilt angle, the optimal placement of the PV modules within the available installation area and the optimal distribution of the PV modules among the DC/AC converters.
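
    The profit-versus-emissions trade-off can be miniaturized. The sketch below scalarizes two dummy objectives with a weighted sum and runs a plain PSO; the paper's multiobjective PSO instead maintains a Pareto front, and both objective functions here are invented:

```python
# Toy sketch only: a plain PSO minimizing a weighted sum of two
# competing objectives stands in for the paper's full multiobjective
# PSO (which keeps a Pareto archive rather than a single scalar score).
# Both objective functions are dummy quadratics.

import random

def f_cost(x):       # stand-in for the (negated) economic benefit
    return (x - 2.0) ** 2

def f_emissions(x):  # stand-in for the avoided-emissions shortfall
    return (x - 4.0) ** 2

def scalar(x, w=0.5):
    return w * f_cost(x) + (1 - w) * f_emissions(x)

def pso(n=20, iters=100, lo=0.0, hi=6.0):
    random.seed(0)
    xs = [random.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pbest = xs[:]                    # each particle's personal best
    gbest = min(xs, key=scalar)      # swarm's global best
    for _ in range(iters):
        for i in range(n):
            r1, r2 = random.random(), random.random()
            vs[i] = (0.7 * vs[i] + 1.5 * r1 * (pbest[i] - xs[i])
                     + 1.5 * r2 * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            if scalar(xs[i]) < scalar(pbest[i]):
                pbest[i] = xs[i]
            if scalar(xs[i]) < scalar(gbest):
                gbest = xs[i]
    return gbest

print(round(pso(), 2))
```

    With equal weights the compromise optimum of the two dummy quadratics sits midway between their individual minima, so the swarm should settle near x = 3.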

  6. Emerging Techniques for Dose Optimization in Abdominal CT

    PubMed Central

    Platt, Joel F.; Goodsitt, Mitchell M.; Al-Hawary, Mahmoud M.; Maturen, Katherine E.; Wasnik, Ashish P.; Pandya, Amit

    2014-01-01

    Recent advances in computed tomographic (CT) scanning technique such as automated tube current modulation (ATCM), optimized x-ray tube voltage, and better use of iterative image reconstruction have allowed maintenance of good CT image quality with reduced radiation dose. ATCM varies the tube current during scanning to account for differences in patient attenuation, ensuring a more homogeneous image quality, although selection of the appropriate image quality parameter is essential for achieving optimal dose reduction. Reducing the x-ray tube voltage is best suited for evaluating iodinated structures, since the effective energy of the x-ray beam will be closer to the k-edge of iodine, resulting in a higher attenuation for the iodine. The optimal kilovoltage for a CT study should be chosen on the basis of imaging task and patient habitus. The aim of iterative image reconstruction is to identify factors that contribute to noise on CT images with use of statistical models of noise (statistical iterative reconstruction) and selective removal of noise to improve image quality. The degree of noise suppression achieved with statistical iterative reconstruction can be customized to minimize the effect of altered image quality on CT images. Unlike with statistical iterative reconstruction, model-based iterative reconstruction algorithms model both the statistical noise and the physical acquisition process, allowing CT to be performed with further reduction in radiation dose without an increase in image noise or loss of spatial resolution. Understanding these recently developed scanning techniques is essential for optimization of imaging protocols designed to achieve the desired image quality with a reduced dose. © RSNA, 2014 PMID:24428277

  7. Optimal web investment in sub-optimal foraging conditions.

    PubMed

    Harmer, Aaron M T; Kokko, Hanna; Herberstein, Marie E; Madin, Joshua S

    2012-01-01

    Orb web spiders sit at the centre of their approximately circular webs when waiting for prey and so face many of the same challenges as central-place foragers. Prey value decreases with distance from the hub as a function of prey escape time. The further from the hub that prey are intercepted, the longer it takes a spider to reach them and the greater chance they have of escaping. Several species of orb web spiders build vertically elongated ladder-like orb webs against tree trunks, rather than circular orb webs in the open. As ladder web spiders invest disproportionately more web area further from the hub, it is expected they will experience reduced prey gain per unit area of web investment compared to spiders that build circular webs. We developed a model to investigate how building webs in the space-limited microhabitat on tree trunks influences the optimal size, shape and net prey gain of arboricolous ladder webs. The model suggests that as horizontal space becomes more limited, optimal web shape becomes more elongated, and optimal web area decreases. This change in web geometry results in decreased net prey gain compared to webs built without space constraints. However, when space is limited, spiders can achieve higher net prey gain compared to building typical circular webs in the same limited space. Our model shows how spiders optimise web investment in sub-optimal conditions and can be used to understand foraging investment trade-offs in other central-place foragers faced with constrained foraging arenas.

  8. Optimal web investment in sub-optimal foraging conditions

    NASA Astrophysics Data System (ADS)

    Harmer, Aaron M. T.; Kokko, Hanna; Herberstein, Marie E.; Madin, Joshua S.

    2012-01-01

    Orb web spiders sit at the centre of their approximately circular webs when waiting for prey and so face many of the same challenges as central-place foragers. Prey value decreases with distance from the hub as a function of prey escape time. The further from the hub that prey are intercepted, the longer it takes a spider to reach them and the greater chance they have of escaping. Several species of orb web spiders build vertically elongated ladder-like orb webs against tree trunks, rather than circular orb webs in the open. As ladder web spiders invest disproportionately more web area further from the hub, it is expected they will experience reduced prey gain per unit area of web investment compared to spiders that build circular webs. We developed a model to investigate how building webs in the space-limited microhabitat on tree trunks influences the optimal size, shape and net prey gain of arboricolous ladder webs. The model suggests that as horizontal space becomes more limited, optimal web shape becomes more elongated, and optimal web area decreases. This change in web geometry results in decreased net prey gain compared to webs built without space constraints. However, when space is limited, spiders can achieve higher net prey gain compared to building typical circular webs in the same limited space. Our model shows how spiders optimise web investment in sub-optimal conditions and can be used to understand foraging investment trade-offs in other central-place foragers faced with constrained foraging arenas.

  9. Thermal Optimization of an On-Orbit Long Duration Cryogenic Propellant Depot

    NASA Technical Reports Server (NTRS)

    Honour, Ryan; Kwas, Robert; O'Neil, Gary; Kutter, Gary

    2012-01-01

    A Cryogenic Propellant Depot (CPD) operating in Low Earth Orbit (LEO) could provide many near term benefits to NASA's space exploration efforts. These benefits include elongation/extension of spacecraft missions and requirement reduction of launch vehicle up-mass. Some of the challenges include controlling cryogenic propellant evaporation and managing the high costs and long schedules associated with the new development of spacecraft hardware. This paper describes a conceptual CPD design that is thermally optimized to achieve extremely low propellant boil-off rates. The CPD design is based on existing launch vehicle architecture, and its thermal optimization is achieved using current passive thermal control technology. Results from an integrated thermal model are presented showing that this conceptual CPD design can achieve propellant boil-off rates well under 0.05% per day, even when subjected to the LEO thermal environment.
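
    The quoted boil-off figure follows from a simple energy balance: daily boil-off fraction = heat leak / (latent heat × stored mass). The propellant properties, tank load, and heat-leak numbers below are assumptions for illustration, not values from the paper:

```python
# Rough energy-balance sketch of why a 0.05 %/day boil-off target is so
# demanding. The LH2 latent heat, stored mass, and heat leak are assumed
# illustrative numbers, not the paper's depot design values.

H_FG = 446e3        # latent heat of vaporization of LH2, J/kg (approx.)
MASS = 30000.0      # stored propellant, kg (assumed depot load)
SECONDS_PER_DAY = 86400.0

def boiloff_percent_per_day(heat_leak_watts):
    vaporized = heat_leak_watts * SECONDS_PER_DAY / H_FG   # kg per day
    return 100.0 * vaporized / MASS

# For this assumed 30 t load, staying under 0.05 %/day requires keeping
# the total heat leak below roughly 77 W:
print(round(boiloff_percent_per_day(77.0), 3))
```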

  10. Thermal Optimization and Assessment of a Long Duration Cryogenic Propellant Depot

    NASA Technical Reports Server (NTRS)

    Honour, Ryan; Kwas, Robert; O'Neil, Gary; Kutter, Bernard

    2012-01-01

    A Cryogenic Propellant Depot (CPD) operating in Low Earth Orbit (LEO) could provide many near term benefits to NASA space exploration efforts. These benefits include elongation/extension of spacecraft missions and reduction of launch vehicle up-mass requirements. Some of the challenges include controlling cryogenic propellant evaporation and managing the high costs and long schedules associated with new spacecraft hardware development. This paper describes a conceptual CPD design that is thermally optimized to achieve extremely low propellant boil-off rates. The CPD design is based on existing launch vehicle architecture, and its thermal optimization is achieved using current passive thermal control technology. Results from an integrated thermal model are presented showing that this conceptual CPD design can achieve propellant boil-off rates well under 0.05% per day, even when subjected to the LEO thermal environment.

  11. SynGenics Optimization System (SynOptSys)

    NASA Technical Reports Server (NTRS)

    Ventresca, Carol; McMilan, Michelle L.; Globus, Stephanie

    2013-01-01

    The SynGenics Optimization System (SynOptSys) software application optimizes a product with respect to multiple, competing criteria using statistical Design of Experiments, Response-Surface Methodology, and the Desirability Optimization Methodology. The user is not required to be skilled in the underlying math; thus, SynOptSys can help designers and product developers overcome the barriers that prevent them from using powerful techniques to develop better products in a less costly manner. SynOptSys is applicable to the design of any product or process with multiple criteria to meet, and at least two factors that influence achievement of those criteria. The user begins with a selected solution principle or system concept and a set of criteria that needs to be satisfied. The criteria may be expressed in terms of documented desirements or defined responses that the future system needs to achieve. Documented desirements can be imported into SynOptSys or created and documented directly within SynOptSys. Subsequent steps include identifying factors, specifying model order for each response, designing the experiment, running the experiment and gathering the data, analyzing the results, and determining the specifications for the optimized system. The user may also enter textual information as the project progresses. Data is easily edited within SynOptSys, and the software design enables full traceability within any step in the process, and facilitates reporting as needed. SynOptSys is unique in the way responses are defined and the nuances of the goodness associated with changes in response values for each of the responses of interest. The Desirability Optimization Methodology provides the basis of this novel feature. Moreover, this is a complete, guided design and optimization process tool with embedded math that can remain invisible to the user. It is not a standalone statistical program; it is a design and optimization system.
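
    The Desirability Optimization Methodology named above is conventionally implemented Derringer-Suich style: each response is mapped onto a 0-1 desirability and the composite score is their geometric mean. The response limits, targets, and weights in this sketch are invented for illustration:

```python
# Sketch of a Derringer-Suich style desirability calculation (the
# conventional form of the Desirability Optimization Methodology; not
# SynOptSys's actual implementation). Limits and weights are invented.

def desirability_larger(y, low, target, weight=1.0):
    """Larger-is-better response: 0 below `low`, 1 above `target`,
    a power curve in between."""
    if y <= low:
        return 0.0
    if y >= target:
        return 1.0
    return ((y - low) / (target - low)) ** weight

def composite(ds):
    """Geometric mean of individual desirabilities; any zero response
    vetoes the whole candidate design."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# two hypothetical responses: strength (want >= 40, unacceptable below 20)
# and yield fraction (want >= 0.90, unacceptable below 0.70)
ds = [desirability_larger(30.0, 20.0, 40.0),
      desirability_larger(0.85, 0.70, 0.90)]
print(round(composite(ds), 3))
```

    An optimizer then searches the factor space for the settings whose predicted responses maximize this single composite score.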

  12. Optimization of the Upper Surface of Hypersonic Vehicle Based on CFD Analysis

    NASA Astrophysics Data System (ADS)

    Gao, T. Y.; Cui, K.; Hu, S. C.; Wang, X. P.; Yang, G. W.

    2011-09-01

    For a hypersonic vehicle, aerodynamic performance requirements are especially demanding, so optimizing the vehicle shape to meet project demands is a significant task, and shape optimization is a key technology for improving vehicle performance. Based on an existing vehicle, the upper surface of a simplified hypersonic vehicle was optimized to obtain a shape that suits the project demand. At the cruising condition, the upper surface was parameterized with the B-spline curve method. The incremental parametric method and local mesh reconstruction technology were applied. The whole flow field was calculated and the aerodynamic performance of the craft was obtained by computational fluid dynamics (CFD). The vehicle shape was then optimized to achieve the maximum lift-drag ratio at attack angles of 3°, 4° and 5°. The results provide a reference for practical design.

  13. [Optimize preparation of compound licorice microemulsion with D-optimal design].

    PubMed

    Ma, Shu-Wei; Wang, Yong-Jie; Chen, Cheng; Qiu, Yue; Wu, Qing

    2018-03-01

    In order to increase the solubility of essential oil in compound licorice microemulsion and improve the efficacy of the decoction for treating chronic eczema, this experiment prepared the decoction as a microemulsion. The essential oil was used as the oil phase of the microemulsion and the extract was used as the water phase. The microemulsion region and the maximum ratio of water capacity were then obtained by plotting a pseudo-ternary phase diagram, to determine the appropriate types of surfactant and cosurfactant and the Km value (the mass ratio between surfactant and cosurfactant). With particle size and skin retention of active ingredients as the indexes, the microemulsion prescription was optimized by the D-optimal design method, and the in vitro release behavior of the optimized prescription was investigated. The results showed that the microemulsion was optimal with Tween-80 as the surfactant and anhydrous ethanol as the cosurfactant. When the Km value was 1, the area of the microemulsion region was largest; when the concentration of extract was 0.5 g·mL⁻¹, it had the lowest effect on the particle size distribution of the microemulsion. The final optimized formulation was as follows: 9.4% Tween-80, 9.4% anhydrous ethanol, 1.0% peppermint oil and 80.2% 0.5 g·mL⁻¹ extract. The microemulsion prepared under these conditions had low viscosity, good stability and high skin retention of drug; the in vitro release experiment showed that the microemulsion had a sustained-release effect on glycyrrhizic acid and liquiritin, basically achieving the expected purpose of the project. Copyright© by the Chinese Pharmaceutical Association.

  14. Enhanced index tracking modelling in portfolio optimization

    NASA Astrophysics Data System (ADS)

    Lam, W. S.; Hj. Jaaman, Saiful Hafizah; Ismail, Hamizun bin

    2013-09-01

    Enhanced index tracking is a popular form of passive fund management in the stock market. It is a dual-objective optimization problem, a trade-off between maximizing the mean return and minimizing the risk. Enhanced index tracking aims to generate excess return over the return achieved by the index, without purchasing all of the stocks that make up the index, by establishing an optimal portfolio. The objective of this study is to determine the optimal portfolio composition and performance by using a weighted model in enhanced index tracking. The weighted model focuses on the trade-off between the excess return and the risk. The results of this study show that the optimal portfolio for the weighted model is able to outperform the Malaysian market index, the Kuala Lumpur Composite Index, with a higher mean return and lower risk, without purchasing all the stocks in the market index.
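    The weighted model described above can be sketched as a single score trading off mean excess return against tracking risk. The toy returns below are synthetic (seeded random data standing in for an index and three stocks); the weighting parameter `lam` and the 0.05 simplex grid are illustrative choices, not the study's settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic weekly returns: a market index and three candidate stocks.
    T = 200
    index_r = rng.normal(0.002, 0.02, T)
    stocks = np.column_stack([
        index_r + rng.normal(0.0005, 0.01, T),   # tracks the index closely
        index_r + rng.normal(0.0010, 0.02, T),   # more excess return, noisier
        rng.normal(0.0015, 0.03, T),             # weakly related to the index
    ])

    def score(w, lam=0.5):
        """Weighted model: lam * mean excess return - (1 - lam) * tracking risk."""
        excess = stocks @ w - index_r
        return lam * excess.mean() - (1 - lam) * excess.std()

    # Enumerate long-only weights summing to 1 on a simplex grid (step 0.05).
    best_w, best_s = None, -np.inf
    for i in range(21):
        for j in range(21 - i):
            w = np.array([i, j, 20 - i - j]) / 20.0
            s = score(w)
            if s > best_s:
                best_w, best_s = w, s
    print(best_w, best_s)
    ```

    With real data the enumeration would be replaced by a quadratic-programming solver, but the trade-off structure is the same.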

  15. Pulmonary function in pubertal synchronized swimmers: 1-year follow-up results and its relation to competitive achievement.

    PubMed

    Gabrilo, Goran; Peric, Mia; Stipic, Marija

    2011-03-01

    Pulmonary function (PF) is particularly important in synchronized swimming, considering the characteristics of this sport. However, the sanitizing agents (chlorine) used in pools may have a negative influence on PF parameters. In this study, we observed 24 swimmers (all women, 14 to 16 years of age) and measured their PF and competitive achievement. PF was measured before and after a 1-year period and included standard spirometric variables. Competitive achievement was recorded at the National Championship. The t-test showed significant increases in body height and weight of the participants and a resulting increase in most of the absolute respiratory flows and pulmonary capacities. Forced vital capacity (FVC) and forced expiratory volume (both in proportion to the norm for body height, gender, and age) increased significantly within the study period. FVC significantly predicted the competitive achievement of the young swimmers, most probably because the artists have to achieve exceptional breath control when upside down underwater. In conclusion, we found no evidence of a negative influence of chlorine and its compounds on the PF of swimmers, and the results showed that regular synchronized swim training could improve the PF of young artists.

  16. Optimizing Resource and Energy Recovery for Municipal Solid Waste Management

    EPA Science Inventory

    Significant reductions of carbon emissions and air quality impacts can be achieved by optimizing municipal solid waste (MSW) as a resource. Materials and discards management were found to contribute ~40% of overall U.S. GHG emissions as a result of materials extraction, transpo...

  17. Optimized Hyper Beamforming of Linear Antenna Arrays Using Collective Animal Behaviour

    PubMed Central

    Ram, Gopi; Mandal, Durbadal; Kar, Rajib; Ghoshal, Sakti Prasad

    2013-01-01

    A novel optimization technique developed by mimicking collective animal behaviour (CAB) is applied to the optimal design of hyper beamforming of linear antenna arrays. Hyper beamforming is based on sum and difference beam patterns of the array, each raised to the power of a hyperbeam exponent parameter. The optimized hyperbeam is achieved by optimization of current excitation weights and uniform interelement spacing. Compared to conventional hyper beamforming of a linear antenna array, the real coded genetic algorithm (RGA), particle swarm optimization (PSO), and differential evolution (DE) applied to the hyper beam of the same array can achieve reduction in sidelobe level (SLL) and the same or less first null beam width (FNBW), keeping the same value of the hyperbeam exponent. Further reductions of sidelobe level (SLL) and first null beam width (FNBW) have been achieved by the proposed collective animal behaviour (CAB) algorithm. CAB finds a near global optimal solution, unlike RGA, PSO, and DE in the present problem. The comparative optimization is illustrated through 10-, 14-, and 20-element linear antenna arrays to establish the optimization efficacy of CAB. PMID:23970843
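    The quantities being optimized above, excitation weights against sidelobe level (SLL), can be illustrated without the CAB algorithm itself. The sketch below evaluates the array factor of a broadside linear array and reports SLL for uniform versus triangular-tapered excitations; the 10-element geometry, half-wavelength spacing, and main-lobe exclusion windows are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def array_factor_db(weights, spacing, theta):
        """Normalized pattern (dB) of a linear array; spacing in wavelengths,
        theta in radians from broadside."""
        n = np.arange(len(weights))
        psi = 2 * np.pi * spacing * np.sin(theta)        # inter-element phase
        af = np.abs(np.exp(1j * np.outer(psi, n)) @ weights)
        af = af / af.max()
        return 20 * np.log10(np.maximum(af, 1e-12))

    def sidelobe_level(pattern_db, theta, main_halfwidth):
        """Peak level (dB) outside a window that excludes the main lobe."""
        return pattern_db[np.abs(theta) > main_halfwidth].max()

    theta = np.linspace(-np.pi / 2, np.pi / 2, 4001)

    uniform = array_factor_db(np.ones(10), 0.5, theta)
    tapered = array_factor_db(np.array([1, 2, 3, 4, 5, 5, 4, 3, 2, 1.0]), 0.5, theta)

    # Window widths chosen to just exclude each pattern's (different) main lobe.
    sll_uniform = sidelobe_level(uniform, theta, 0.25)   # ~ -13 dB, classic result
    sll_tapered = sidelobe_level(tapered, theta, 0.45)
    print(sll_uniform, sll_tapered)
    ```

    Tapering trades a wider main lobe for much lower sidelobes; an optimizer such as CAB searches for weightings that improve SLL with less FNBW penalty.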

  18. Trajectory optimization of spacecraft high-thrust orbit transfer using a modified evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Shirazi, Abolfazl

    2016-10-01

    This article introduces a new method to optimize finite-burn orbital manoeuvres based on a modified evolutionary algorithm. Optimization is carried out based on conversion of the orbital manoeuvre into a parameter optimization problem by assigning inverse tangential functions to the changes in direction angles of the thrust vector. The problem is analysed using boundary delimitation in a common optimization algorithm. A method is introduced to achieve acceptable values for optimization variables using nonlinear simulation, which results in an enlarged convergence domain. The presented algorithm benefits from high optimality and fast convergence time. A numerical example of a three-dimensional optimal orbital transfer is presented and the accuracy of the proposed algorithm is shown.

  19. Numerical difficulties associated with using equality constraints to achieve multi-level decomposition in structural optimization

    NASA Technical Reports Server (NTRS)

    Thareja, R.; Haftka, R. T.

    1986-01-01

    There has been recent interest in multidisciplinary multilevel optimization applied to large engineering systems. The usual approach is to divide the system into a hierarchy of subsystems with ever increasing detail in the analysis focus. Equality constraints are usually placed on various design quantities at every successive level to ensure consistency between levels. In many previous applications these equality constraints were eliminated by reducing the number of design variables. In complex systems this may not be possible and these equality constraints may have to be retained in the optimization process. In this paper the impact of such a retention is examined for a simple portal frame problem. It is shown that the equality constraints introduce numerical difficulties, and that the numerical solution becomes very sensitive to optimization parameters for a wide range of optimization algorithms.

  20. Optimized emission in nanorod arrays through quasi-aperiodic inverse design.

    PubMed

    Anderson, P Duke; Povinelli, Michelle L

    2015-06-01

    We investigate a new class of quasi-aperiodic nanorod structures for the enhancement of incoherent light emission. We identify one optimized structure using an inverse design algorithm and the finite-difference time-domain method. We carry out emission calculations on both the optimized structure as well as a simple periodic array. The optimized structure achieves nearly perfect light extraction while maintaining a high spontaneous emission rate. Overall, the optimized structure can achieve a 20%-42% increase in external quantum efficiency relative to a simple periodic design, depending on material quality.

  1. Axiomatic Design of a Framework for the Comprehensive Optimization of Patient Flows in Hospitals

    PubMed Central

    Matt, Dominik T.

    2017-01-01

    Lean Management and Six Sigma are nowadays applied not only to the manufacturing industry but also to the service industry and public administration. The manifold variables affecting the Health Care system minimize the effect of a narrow Lean intervention. Therefore, this paper aims to discuss a comprehensive, system-based approach to achieve a factual holistic optimization of patient flows. This paper debates the efficacy of Lean principles applied to the optimization of patient flows and related activities, structures, and resources, developing a theoretical framework based on the principles of Axiomatic Design. The demand for patient-oriented and efficient health services leads to the use of these methodologies to improve hospital processes. In the framework, patients with similar characteristics are clustered in families to achieve homogeneous flows through the value stream. An optimization checklist is outlined as the result of the mapping between Functional Requirements and Design Parameters, with the right sequence of steps to optimize the patient flow according to the principles of Axiomatic Design. The Axiomatic Design-based top-down implementation of Health Care evidence, according to Lean principles, results in a holistic optimization of hospital patient flows by reducing the complexity of the system. PMID:29065578

  2. Axiomatic Design of a Framework for the Comprehensive Optimization of Patient Flows in Hospitals.

    PubMed

    Arcidiacono, Gabriele; Matt, Dominik T; Rauch, Erwin

    2017-01-01

    Lean Management and Six Sigma are nowadays applied not only to the manufacturing industry but also to the service industry and public administration. The manifold variables affecting the Health Care system minimize the effect of a narrow Lean intervention. Therefore, this paper aims to discuss a comprehensive, system-based approach to achieve a factual holistic optimization of patient flows. This paper debates the efficacy of Lean principles applied to the optimization of patient flows and related activities, structures, and resources, developing a theoretical framework based on the principles of Axiomatic Design. The demand for patient-oriented and efficient health services leads to the use of these methodologies to improve hospital processes. In the framework, patients with similar characteristics are clustered in families to achieve homogeneous flows through the value stream. An optimization checklist is outlined as the result of the mapping between Functional Requirements and Design Parameters, with the right sequence of steps to optimize the patient flow according to the principles of Axiomatic Design. The Axiomatic Design-based top-down implementation of Health Care evidence, according to Lean principles, results in a holistic optimization of hospital patient flows by reducing the complexity of the system.

  3. Optimal visual-haptic integration with articulated tools.

    PubMed

    Takahashi, Chie; Watt, Simon J

    2017-05-01

    When we feel and see an object, the nervous system integrates visual and haptic information optimally, exploiting the redundancy in multiple signals to estimate properties more precisely than is possible from either signal alone. We examined whether optimal integration is similarly achieved when using articulated tools. Such tools (tongs, pliers, etc.) are a defining characteristic of human hand function, but complicate the classical sensory 'correspondence problem' underlying multisensory integration. Optimal integration requires establishing the relationship between signals acquired by different sensors (hand and eye) and, therefore, in fundamentally unrelated units. The system must also determine when signals refer to the same property of the world (seeing and feeling the same thing) and only integrate those that do. This could be achieved by comparing the pattern of current visual and haptic input to known statistics of their normal relationship. Articulated tools disrupt this relationship, however, by altering the geometrical relationship between object properties and hand posture (the haptic signal). We examined whether different tool configurations are taken into account in visual-haptic integration. We indexed integration by measuring the precision of size estimates, and compared our results to optimal predictions from a maximum-likelihood integrator. Integration was near optimal, independent of tool configuration/hand posture, provided that visual and haptic signals referred to the same object in the world. Thus, sensory correspondence was determined correctly (trial-by-trial), taking tool configuration into account. This reveals highly flexible multisensory integration underlying tool use, consistent with the brain constructing internal models of tools' properties.
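    The maximum-likelihood integrator used as the benchmark above has a standard closed form for two independent Gaussian cues: each estimate is weighted by its reliability (inverse variance), and the combined variance is below either single-cue variance. The size values below are hypothetical.

    ```python
    import numpy as np

    def mle_integrate(mu_v, var_v, mu_h, var_h):
        """Maximum-likelihood fusion of two independent Gaussian estimates:
        weights proportional to reliability (inverse variance)."""
        w_v = (1 / var_v) / (1 / var_v + 1 / var_h)
        mu = w_v * mu_v + (1 - w_v) * mu_h
        var = (var_v * var_h) / (var_v + var_h)   # always <= min(var_v, var_h)
        return mu, var

    # Hypothetical size estimates (mm): vision more reliable than haptics here.
    mu, var = mle_integrate(mu_v=50.0, var_v=4.0, mu_h=54.0, var_h=12.0)
    print(mu, var)   # combined estimate sits nearer the more reliable cue
    ```

    Measuring whether observers' size-discrimination variance matches this predicted `var` is the usual test of near-optimal integration.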

  4. TU-H-BRC-05: Stereotactic Radiosurgery Optimized with Orthovoltage Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fagerstrom, J; Culberson, W; Bender, E

    2016-06-15

    Purpose: To achieve improved stereotactic radiosurgery (SRS) dose distributions using orthovoltage energy fluence modulation with inverse planning optimization techniques. Methods: A pencil beam model was used to calculate dose distributions from the institution's orthovoltage unit at 250 kVp. Kernels for the model were derived using Monte Carlo methods as well as measurements with radiochromic film. The orthovoltage photon spectra, modulated by varying thicknesses of attenuating material, were approximated using open-source software. A genetic algorithm search heuristic routine was used to optimize added tungsten filtration thicknesses to approach rectangular function dose distributions at depth. Optimizations were performed for depths of 2.5, 5.0, and 7.5 cm, with cone sizes of 8, 10, and 12 mm. Results: Circularly-symmetric tungsten filters were designed based on the results of the optimization, to modulate the orthovoltage beam across the aperture of an SRS cone collimator. For each depth and cone size combination examined, the beam flatness and 80–20% and 90–10% penumbrae were calculated for both standard, open cone-collimated beams as well as for the optimized, filtered beams. For all configurations tested, the modulated beams were able to achieve improved penumbra widths and flatness statistics at depth, with flatness improving between 33 and 52%, and penumbrae improving between 18 and 25% for the modulated beams compared to the unmodulated beams. Conclusion: A methodology has been described that may be used to optimize the spatial distribution of added filtration material in an orthovoltage SRS beam to result in dose distributions at depth with improved flatness and penumbrae compared to standard open cones. This work provides the mathematical foundation for a novel, orthovoltage energy fluence-modulated SRS system.

  5. Hitting the Optimal Vaccination Percentage and the Risks of Error: Why to Miss Right.

    PubMed

    Harvey, Michael J; Prosser, Lisa A; Messonnier, Mark L; Hutton, David W

    2016-01-01

    To determine the optimal level of vaccination coverage, defined as the level that minimizes total costs, and to explore how economic results change with marginal changes to this level of coverage. A susceptible-infected-recovered-vaccinated model designed to represent theoretical infectious diseases was created to simulate disease spread. Parameter inputs were defined to include ranges that could represent a variety of possible vaccine-preventable conditions. Costs included vaccine costs and disease costs. Health benefits were quantified as monetized quality-adjusted life years lost from disease. Primary outcomes were the number of infected people and the total costs of vaccination. Optimization methods were used to determine the population vaccination coverage that achieved a minimum cost given disease and vaccine characteristics. Sensitivity analyses explored the effects of changes in reproductive rates, costs, and vaccine efficacies on primary outcomes. Further analysis examined the additional cost incurred if the optimal coverage levels were not achieved. Results indicate that the relationship between vaccine and disease cost is the main driver of the optimal vaccination level. Under a wide range of assumptions, vaccination beyond the optimal level is less expensive than vaccination below the optimal level. This observation did not hold when the cost of the vaccine becomes approximately equal to the cost of disease. These results suggest that vaccination below the optimal level of coverage is more costly than vaccinating beyond the optimal level. This work helps provide information for assessing the impact of changes in vaccination coverage at a societal level.
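    A minimal version of this analysis can be sketched with a discrete-time SIR model: vaccinate a fraction v up front, add vaccine and disease costs, and compare the penalty for missing the optimum low versus high. All parameter values below (R0, recovery rate, costs) are hypothetical, not the study's inputs.

    ```python
    import numpy as np

    def total_infections(v, r0=2.5, gamma=0.2, n_days=2000):
        """Final fraction infected in a discrete-time SIR model when a fraction v
        of the population is vaccinated (fully protected) before the outbreak."""
        beta = r0 * gamma
        s, i, r = max(1.0 - v - 1e-4, 0.0), 1e-4, 0.0
        for _ in range(n_days):
            new_inf = beta * s * i
            s, i, r = s - new_inf, i + new_inf - gamma * i, r + gamma * i
        return i + r

    def total_cost(v, c_vax=50.0, c_disease=1000.0):
        """Per-capita vaccination cost plus monetized disease burden."""
        return c_vax * v + c_disease * total_infections(v)

    coverage = np.linspace(0.0, 1.0, 101)
    costs = np.array([total_cost(v) for v in coverage])
    v_opt = coverage[costs.argmin()]

    # The asymmetry: miss the optimum by 0.1 on either side and compare costs.
    under = total_cost(max(v_opt - 0.1, 0.0))
    over = total_cost(min(v_opt + 0.1, 1.0))
    print(v_opt, under, over)   # undershooting is the costlier error here
    ```

    Undershooting reintroduces epidemic spread while overshooting only adds linear vaccine cost, which is the "miss right" intuition of the paper.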

  6. Optimal Control Modification Adaptive Law for Time-Scale Separated Systems

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2010-01-01

    Recently, a new optimal control modification has been introduced that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations seen with standard model-reference adaptive control. This modification is based on an optimal control formulation that minimizes the L2 norm of the tracking error. The optimal control modification adaptive law results in stable adaptation in the presence of a large adaptive gain. This study examines the optimal control modification adaptive law in the context of a system with a time-scale separation resulting from a fast plant with a slow actuator. A singular perturbation analysis is performed to derive a modification to the adaptive law by transforming the original system into a reduced-order system in slow time. A model matching condition in the transformed time coordinate results in an increase in the actuator command that effectively compensates for the slow actuator dynamics. Simulations demonstrate the effectiveness of the method.

  7. Does achievement motivation mediate the semantic achievement priming effect?

    PubMed

    Engeser, Stefan; Baumann, Nicola

    2014-10-01

    The aim of our research was to understand the processes of the prime-to-behavior effects with semantic achievement primes. We extended existing models with a perspective from achievement motivation theory and additionally used achievement primes embedded in the running text of excerpts of school textbooks to simulate a more natural priming condition. Specifically, we proposed that achievement primes affect implicit achievement motivation and conducted pilot experiments and 3 main experiments to explore this proposition. We found no reliable positive effect of achievement primes on implicit achievement motivation. In light of these findings, we tested whether explicit (instead of implicit) achievement motivation is affected by achievement primes and found this to be the case. In the final experiment, we found support for the assumption that higher explicit achievement motivation implies that achievement priming affects the outcome expectations. The implications of the results are discussed, and we conclude that primes affect achievement behavior by heightening explicit achievement motivation and outcome expectancies.

  8. Tool Support for Software Lookup Table Optimization

    DOE PAGES

    Wilcox, Chris; Strout, Michelle Mills; Bieman, James M.

    2011-01-01

    A number of scientific applications are performance-limited by expressions that repeatedly call costly elementary functions. Lookup table (LUT) optimization accelerates the evaluation of such functions by reusing previously computed results. LUT methods can speed up applications that tolerate an approximation of function results, thereby achieving a high level of fuzzy reuse. One problem with LUT optimization is the difficulty of controlling the tradeoff between performance and accuracy. The current practice of manual LUT optimization adds programming effort by requiring extensive experimentation to make this tradeoff, and such hand tuning can obfuscate algorithms. In this paper we describe a methodology and tool implementation to improve the application of software LUT optimization. Our Mesa tool implements source-to-source transformations for C or C++ code to automate the tedious and error-prone aspects of LUT generation such as domain profiling, error analysis, and code generation. We evaluate Mesa with five scientific applications. Our results show a performance improvement of 3.0× and 6.9× for two molecular biology algorithms, 1.4× for a molecular dynamics program, 2.1× to 2.8× for a neural network application, and 4.6× for a hydrology calculation. We find that Mesa enables LUT optimization with more control over accuracy and less effort than manual approaches.
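    The core LUT transformation that such tools automate can be sketched by hand: precompute an elementary function on a grid, answer later calls by interpolation, and check the accuracy side of the tradeoff. The function, domain, and table size below are illustrative, not from the Mesa evaluation.

    ```python
    import numpy as np

    def make_lut(fn, lo, hi, n):
        """Precompute fn at n evenly spaced nodes on [lo, hi]."""
        xs = np.linspace(lo, hi, n)
        return xs, fn(xs)

    def lut_eval(xs, ys, x):
        """Fuzzy reuse: answer fn(x) by linear interpolation into the table."""
        return np.interp(x, xs, ys)

    # Stand-in for a costly elementary expression (illustrative only).
    fn = lambda x: np.exp(-x * x)

    xs, ys = make_lut(fn, -4.0, 4.0, 4096)
    sample = np.linspace(-4.0, 4.0, 100001)
    max_err = np.abs(lut_eval(xs, ys, sample) - fn(sample)).max()
    print(max_err)   # accuracy cost of the table; shrinks as the table grows
    ```

    For smooth functions, linear interpolation error falls quadratically with node spacing, so doubling the table size roughly quarters the worst-case error; that is the performance-accuracy dial the paper describes.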

  9. Identification of vehicle suspension parameters by design optimization

    NASA Astrophysics Data System (ADS)

    Tey, J. Y.; Ramli, R.; Kheng, C. W.; Chong, S. Y.; Abidin, M. A. Z.

    2014-05-01

    The design of a vehicle suspension system through simulation requires accurate representation of the design parameters. These parameters are usually difficult to measure or sometimes unavailable. This article proposes an efficient approach to identify the unknown parameters through optimization based on experimental results, where the covariance matrix adaptation evolution strategy (CMA-ES) is utilized to improve the agreement between simulation and experimental results in kinematic and compliance tests. This speeds up the design and development cycle by recovering all the unknown data with respect to a set of kinematic measurements through a single optimization process. A case study employing a McPherson strut suspension system is modelled in a multi-body dynamic system. Three kinematic and compliance tests are examined, namely, vertical parallel wheel travel, opposite wheel travel and single wheel travel. The problem is formulated as a multi-objective optimization problem with 40 objectives and 49 design parameters. A hierarchical clustering method based on global sensitivity analysis is used to reduce the number of objectives to 30 by grouping correlated objectives together. Then, a dynamic summation of rank value is used as a pseudo-objective function to reformulate the multi-objective optimization as a single-objective optimization problem. The optimization results show a significant improvement in the correlation between the simulated model and the experimental model. Once an accurate representation of the vehicle suspension model is achieved, further analysis, such as ride and handling performance, can be implemented for further optimization.
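    The identification idea, tuning unknown parameters until simulation matches experiment, can be sketched with a much smaller stand-in than the paper's CMA-ES on a multi-body model: a (1+1) evolution strategy recovering a spring rate and damping coefficient from synthetic noisy force measurements. All data and parameter values below are fabricated for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic "experiment": suspension force F = k*x + c*v with known truth.
    k_true, c_true = 25000.0, 1500.0               # N/m and N*s/m, fabricated
    x = rng.uniform(-0.05, 0.05, 50)               # displacement samples (m)
    v = rng.uniform(-1.0, 1.0, 50)                 # velocity samples (m/s)
    f_meas = k_true * x + c_true * v + rng.normal(0.0, 5.0, 50)

    def mismatch(p):
        """Sum-of-squares gap between simulated and 'measured' forces."""
        k, c = p
        return float(np.sum((k * x + c * v - f_meas) ** 2))

    # (1+1) evolution strategy with a simple step-size adaptation rule.
    p = np.array([10000.0, 500.0])                 # poor initial guess
    sigma = np.array([5000.0, 500.0])              # per-parameter step sizes
    best = mismatch(p)
    for _ in range(2000):
        cand = p + sigma * rng.normal(size=2)
        f = mismatch(cand)
        if f < best:                               # keep improvements only
            p, best = cand, f
            sigma = sigma * 1.1                    # expand after success
        else:
            sigma = sigma * 0.98                   # contract after failure
    print(p, best)
    ```

    CMA-ES additionally adapts a full covariance matrix, which matters when parameters interact; the principle of minimizing simulation-experiment mismatch is the same.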

  10. Bayesian Optimization for Neuroimaging Pre-processing in Brain Age Classification and Prediction

    PubMed Central

    Lancaster, Jenessa; Lorenz, Romy; Leech, Rob; Cole, James H.

    2018-01-01

    Neuroimaging-based age prediction using machine learning is proposed as a biomarker of brain aging, relating to cognitive performance, health outcomes and progression of neurodegenerative disease. However, even leading age-prediction algorithms contain measurement error, motivating efforts to improve experimental pipelines. T1-weighted MRI is commonly used for age prediction, and the pre-processing of these scans involves normalization to a common template and resampling to a common voxel size, followed by spatial smoothing. Resampling parameters are often selected arbitrarily. Here, we sought to improve brain-age prediction accuracy by optimizing resampling parameters using Bayesian optimization. Using data on N = 2003 healthy individuals (aged 16–90 years) we trained support vector machines to (i) distinguish between young (<22 years) and old (>50 years) brains (classification) and (ii) predict chronological age (regression). We also evaluated the generalisability of the age-regression model to an independent dataset (CamCAN, N = 648, aged 18–88 years). Bayesian optimization was used to identify the optimal voxel size and smoothing kernel size for each task. This procedure adaptively samples the parameter space to evaluate accuracy across a range of possible parameters, using independent sub-samples to iteratively assess different parameter combinations to arrive at optimal values. When distinguishing between young and old brains, a classification accuracy of 88.1% was achieved (optimal voxel size = 11.5 mm3, smoothing kernel = 2.3 mm). For predicting chronological age, a mean absolute error (MAE) of 5.08 years was achieved (optimal voxel size = 3.73 mm3, smoothing kernel = 3.68 mm). This was compared to performance using default values of 1.5 mm3 and 4 mm respectively, resulting in MAE = 5.48 years, though this 7.3% improvement was not statistically significant. When assessing generalisability, best performance was achieved when applying the entire Bayesian
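    Bayesian optimization as used above, fitting a surrogate to evaluated parameter settings and picking the next setting by an acquisition function, can be sketched with a hand-rolled Gaussian process and expected improvement. The one-dimensional objective below is a hypothetical stand-in for accuracy as a function of a single pre-processing parameter; the kernel length-scale and search range are illustrative.

    ```python
    import math
    import numpy as np

    def objective(x):
        """Hypothetical 'accuracy vs. parameter' curve: smooth, one clear peak."""
        return -(x - 3.7) ** 2 + 0.05 * np.sin(5 * x)

    def rbf(a, b, ls=1.0):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

    def gp_posterior(X, y, Xq, jitter=1e-6):
        """Gaussian-process posterior mean and standard deviation on Xq."""
        L = np.linalg.cholesky(rbf(X, X) + jitter * np.eye(len(X)))
        Ks = rbf(X, Xq)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        v = np.linalg.solve(L, Ks)
        var = np.clip(1.0 - np.sum(v * v, axis=0), 1e-12, None)  # k(x,x) = 1
        return Ks.T @ alpha, np.sqrt(var)

    def expected_improvement(mu, sd, best):
        z = (mu - best) / sd
        pdf = np.exp(-0.5 * z**2) / math.sqrt(2 * math.pi)
        cdf = 0.5 * (1 + np.vectorize(math.erf)(z / math.sqrt(2)))
        return (mu - best) * cdf + sd * pdf

    # Adaptively sample the parameter range [0, 8]: 15 evaluations on top of 3.
    X = np.array([0.5, 4.0, 7.5])
    y = objective(X)
    grid = np.linspace(0.0, 8.0, 801)
    for _ in range(15):
        mu, sd = gp_posterior(X, y, grid)
        x_next = grid[np.argmax(expected_improvement(mu, sd, y.max()))]
        X = np.append(X, x_next)
        y = np.append(y, objective(x_next))
    print(X[np.argmax(y)], y.max())
    ```

    Each "evaluation" here stands in for training and scoring a full model at one pre-processing setting, which is why the sample-efficient acquisition loop pays off.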

  11. Optimal four-impulse rendezvous between coplanar elliptical orbits

    NASA Astrophysics Data System (ADS)

    Wang, JianXia; Baoyin, HeXi; Li, JunFeng; Sun, FuChun

    2011-04-01

    Rendezvous in circular or near circular orbits has been investigated in great detail, while rendezvous in elliptical orbits of arbitrary eccentricity is not sufficiently explored. Among the various optimization methods proposed for fuel-optimal orbital rendezvous, Lawden's primer vector theory is favored by many researchers for its clear physical concept and simplicity of solution. Prussing applied the primer vector optimization theory to minimum-fuel, multiple-impulse, time-fixed orbital rendezvous in a near circular orbit with great success. Extending Prussing's work, this paper employs the primer vector theory to study trajectory optimization problems of elliptical orbit rendezvous with arbitrary eccentricity. Based on linearized equations of relative motion on an elliptical reference orbit (the T-H equations), the primer vector theory is used to deal with time-fixed multiple-impulse optimal rendezvous between two coplanar, coaxial elliptical orbits with arbitrarily large eccentricity. A parameter adjustment method is developed for the primer vector to satisfy Lawden's necessary condition for the optimal solution. Finally, the optimal multiple-impulse rendezvous solution, including the times, directions and magnitudes of the impulses, is obtained by solving the two-point boundary value problem. The rendezvous error of the linearized equations is also analyzed. The simulation results confirm the analysis: the rendezvous error is small for small eccentricities and larger for higher eccentricities. For better rendezvous accuracy in high-eccentricity orbits, a combined method of a multiplier penalty function with the simplex search method is used for local optimization. The simplex search method is sensitive to the initial values of the optimization variables, but the simulation results show that initial values from the primer vector theory and the local optimization algorithm can improve the rendezvous accuracy effectively with fast

  12. Multidisciplinary Aerospace Systems Optimization: Computational AeroSciences (CAS) Project

    NASA Technical Reports Server (NTRS)

    Kodiyalam, S.; Sobieski, Jaroslaw S. (Technical Monitor)

    2001-01-01

    The report describes a method for performing optimization of a system whose analysis is so expensive that it is impractical to let the optimization code invoke it directly, because excessive computational cost and elapsed time might result. In such a situation it is imperative to let the user control the number of times the analysis is invoked. The reported method achieves that with techniques from the Design of Experiments category: a uniform dispersal of the trial design points over an n-dimensional hypersphere, response-surface fitting, and kriging. Analyses of all the trial designs, whose number may be set by the user, are performed before activation of the optimization code, and the results are stored as a database. That code is then executed and refers to the above database. Two applications, one to an airborne laser system and one to aircraft optimization, illustrate the method.
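    The workflow above can be sketched end to end: disperse trial designs over a hypersphere (here two radii, so a full quadratic is identifiable), run every "expensive" analysis once to build the database, fit a response surface by least squares, and let the optimizer query only the cheap surrogate. The test function and point counts below are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def hypersphere_points(m, n, radius):
        """Disperse m trial designs over an n-dimensional hypersphere surface."""
        p = rng.normal(size=(m, n))
        return radius * p / np.linalg.norm(p, axis=1, keepdims=True)

    def expensive_analysis(x):
        """Stand-in for the costly analysis (a hypothetical quadratic)."""
        return (x[:, 0] - 0.3) ** 2 + 2 * (x[:, 1] + 0.2) ** 2 + x[:, 0] * x[:, 1]

    # Step 1: run every trial analysis up front; store the results as a database.
    # Two radii are used so that the full quadratic surface is identifiable.
    X = np.vstack([hypersphere_points(30, 2, 1.0), hypersphere_points(30, 2, 0.5)])
    y = expensive_analysis(X)

    def quad_terms(x1, x2):
        return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

    # Step 2: fit a full quadratic response surface to the database.
    coef, *_ = np.linalg.lstsq(quad_terms(X[:, 0], X[:, 1]), y, rcond=None)

    # Step 3: the optimizer queries only the cheap surrogate from here on.
    g = np.linspace(-1.0, 1.0, 201)
    G1, G2 = np.meshgrid(g, g)
    surrogate = quad_terms(G1.ravel(), G2.ravel()) @ coef
    i = surrogate.argmin()
    print(G1.ravel()[i], G2.ravel()[i], surrogate[i])
    ```

    Kriging would replace the global polynomial fit with an interpolating spatial-correlation model, but the database-then-surrogate structure is identical.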

  13. The role of principal in optimizing school climate in primary schools

    NASA Astrophysics Data System (ADS)

    Murtedjo; Suharningsih

    2018-01-01

    This article was motivated by the transformation of an elementary school that, once disregarded because of its low quality, became the school of choice of the surrounding community and earned many national achievements. The article is based on research data collected in primary schools and focuses on the role of school principals in optimizing school climate. To describe the principal's role in optimizing school climate, a qualitative approach with a multi-site study design was used. Informants were selected by the snowball technique. Data were collected through in-depth interviews, participant observation, and documentation. Data credibility was checked using triangulation, member checks, and peer discussion; auditability was established by an auditor. The collected data were analyzed by within-site and cross-site analysis. The results show that the principal optimizes a conducive school climate by creating a pleasant physical environment and socio-emotional condition at the school, so that teachers implement the learning process with enthusiasm and learners are happy, which ultimately improves learning achievement and school quality.

  14. Hybrid Swarm Intelligence Optimization Approach for Optimal Data Storage Position Identification in Wireless Sensor Networks

    PubMed Central

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

    The current high-profile debate regarding data storage and its growth has made storage a strategic task in the world of networking. It mainly involves the sensor nodes called producers, the base stations, and the consumers (users and sensor nodes) that retrieve and use the data. The main concern addressed here is finding an optimal data storage position in wireless sensor networks. Earlier works did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm is used to find suitable positions for storage nodes while the total energy cost of data transmission is minimized. Clustering-based distributed data storage is utilized, with the clustering problem solved using the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator, and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than the earlier approaches. PMID:25734182
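    A global-best PSO run on a toy version of the storage-placement problem gives the flavor of the approach. The producer/consumer layout and the rate-weighted quadratic energy model below are invented for illustration; they are not the paper's network model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical producers/consumers: positions and data rates (not from the paper).
producers = rng.uniform(0, 100, size=(10, 2)); p_rate = rng.uniform(1, 5, 10)
consumers = rng.uniform(0, 100, size=(4, 2));  c_rate = rng.uniform(1, 5, 4)

def energy_cost(pos):
    # Energy ~ data rate * squared distance, summed over producers and consumers.
    d_p = np.sum((producers - pos) ** 2, axis=1)
    d_c = np.sum((consumers - pos) ** 2, axis=1)
    return p_rate @ d_p + c_rate @ d_c

# Global-best PSO over candidate storage positions in the 100 x 100 field.
n_part, iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
x = rng.uniform(0, 100, size=(n_part, 2))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([energy_cost(p) for p in x])
gbest = pbest[np.argmin(pbest_f)]
for _ in range(iters):
    r1, r2 = rng.random((n_part, 2)), rng.random((n_part, 2))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0, 100)
    f = np.array([energy_cost(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]
print("best storage position:", gbest, "cost:", energy_cost(gbest))
```

    For this quadratic cost the true optimum is the rate-weighted centroid, which makes the swarm's answer easy to check; the paper's hybrid adds fuzzy C-means clustering on top of this basic search.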

  16. Optimized multi-electrode stimulation increases focality and intensity at target

    NASA Astrophysics Data System (ADS)

    Dmochowski, Jacek P.; Datta, Abhishek; Bikson, Marom; Su, Yuzhuo; Parra, Lucas C.

    2011-08-01

    Transcranial direct current stimulation (tDCS) provides a non-invasive tool to elicit neuromodulation by delivering current through electrodes placed on the scalp. The present clinical paradigm uses two relatively large electrodes to inject current through the head, resulting in electric fields that are broadly distributed over large regions of the brain. In this paper, we present a method that uses multiple small electrodes (i.e. 1.2 cm diameter) and systematically optimizes the applied currents to achieve effective and targeted stimulation while ensuring safety of stimulation. We found a fundamental trade-off between achievable intensity (at the target) and focality, and algorithms to optimize both measures are presented. When compared with large pad electrodes (approximated here by a set of small electrodes covering 25 cm2), the proposed approach achieves electric fields which exhibit simultaneously greater focality (80% improvement) and higher target intensity (98% improvement) at cortical targets using the same total applied current. These improvements illustrate the previously unrecognized and non-trivial dependence of the optimal electrode configuration on the desired electric field orientation and the maximum total current (due to safety). Similarly, by exploiting idiosyncratic details of brain anatomy, the optimization approach significantly improves upon prior un-optimized approaches using small electrodes. The analysis also reveals the optimal use of conventional bipolar montages: maximally intense tangential fields are attained with the two electrodes placed at a considerable distance from the target along the direction of the desired field; when radial fields are desired, the maximum-intensity configuration consists of an electrode placed directly over the target with a distant return electrode. To summarize, if a target location and stimulation orientation can be defined by the clinician, then the proposed technique is superior in terms of both focality and intensity at the target.
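    The maximum-intensity side of the trade-off can be written as a small linear program: with a linear lead-field model, maximize the field at the target subject to a total-current safety bound and current conservation. The randomly generated matrix and the limits below are placeholders, not head-model data:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)

n_elec = 16
# Hypothetical lead-field matrix: each row maps electrode currents to the field
# component at one location; row 0 is taken as the target.
A = rng.normal(size=(8, n_elec))
a_target = A[0]

# Split currents I = Ip - Im with Ip, Im >= 0, so sum(Ip + Im) = total |current|.
# Maximize a_target @ I  s.t.  sum|I| <= I_max  and  sum(I) = 0 (charge balance).
I_max = 2.0  # mA, safety limit (illustrative)
c = np.concatenate([-a_target, a_target])        # linprog minimizes -a^T (Ip - Im)
A_ub = np.ones((1, 2 * n_elec))                  # sum(Ip) + sum(Im) <= I_max
A_eq = np.concatenate([np.ones(n_elec), -np.ones(n_elec)])[None, :]
res = linprog(c, A_ub=A_ub, b_ub=[I_max], A_eq=A_eq, b_eq=[0.0],
              bounds=[(0, None)] * (2 * n_elec))
I = res.x[:n_elec] - res.x[n_elec:]
print("target field:", a_target @ I, "total current:", np.abs(I).sum())
```

    Linear-programming theory puts the optimum at a vertex, which concentrates the current budget on very few electrodes — consistent with the abstract's observation that the maximum-intensity configuration is essentially a bipolar montage.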

  17. Integrating machine learning to achieve an automatic parameter prediction for practical continuous-variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Liu, Weiqi; Huang, Peng; Peng, Jinye; Fan, Jianping; Zeng, Guihua

    2018-02-01

    For practical quantum key distribution (QKD), it is critical to stabilize the physical parameters of signals, e.g., the intensity, phase, and polarization of the laser signals, so that QKD systems can achieve better performance and practical security. In this paper, an approach is developed that integrates a support vector regression (SVR) model to optimize the performance and practical security of the QKD system. First, an SVR model is learned to precisely predict the time evolution of the physical parameters of signals. Second, the predicted evolutions are employed as feedback to control the QKD system so as to achieve optimal performance and practical security. Finally, the proposed approach is exemplified using the intensity evolution of the laser light and local oscillator pulse in a Gaussian-modulated coherent state QKD system. Our experimental results demonstrate three significant benefits of the SVR-based approach: (1) it allows the QKD system to achieve optimal performance and practical security; (2) it does not require any additional resources or a real-time monitoring module to support automatic prediction of the time evolution of the physical parameters of signals; and (3) it is applicable to any measurable physical parameter of signals in a practical QKD system.
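    As a dependency-free sketch of the idea — learning to predict a drifting signal parameter from its recent history so the forecast can drive feedback control — the snippet below substitutes kernel ridge regression for the paper's SVR (a closely related RBF-kernel method). The drift signal is synthetic, not QKD data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic intensity drift of a laser pulse (illustrative, not QKD data).
t = np.linspace(0, 10, 200)
intensity = 1.0 + 0.05 * np.sin(0.8 * t) + 0.01 * rng.normal(size=t.size)

# Autoregressive features: predict the next sample from the last k samples.
k = 10
X = np.stack([intensity[i:i + k] for i in range(len(intensity) - k)])
y = intensity[k:]
Xtr, ytr, Xte, yte = X[:150], y[:150], X[150:], y[150:]

def rbf(A, B, gamma=5.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge regression: a stand-in for SVR using the same kernel machinery.
K = rbf(Xtr, Xtr)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(Xtr)), ytr)
pred = rbf(Xte, Xtr) @ alpha

# The forecast parameter evolution would feed the control loop of the modulator.
rmse = float(np.sqrt(np.mean((pred - yte) ** 2)))
print("one-step prediction RMSE:", round(rmse, 4))
```

    In the paper's setting the regression target would be a measured physical parameter, and the prediction replaces a dedicated real-time monitoring module.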

  18. Genetic Algorithm Application in Optimization of Wireless Sensor Networks

    PubMed Central

    Norouzi, Ali; Zaim, A. Halim

    2014-01-01

    There are several known applications for wireless sensor networks (WSNs), and such variety demands improvement of the currently available protocols and their specific parameters. Notable parameters are network lifetime and routing energy consumption, which play a key role in every application. The genetic algorithm is a nonlinear optimization method and a comparatively attractive option thanks to its efficiency for large-scale applications and the fact that the final formulation can be modified through its operators. The present survey attempts a comprehensive improvement across all operational stages of a WSN, including node placement, network coverage, clustering, and data aggregation, to achieve an ideal set of parameters for routing and application-based WSNs. Using a genetic algorithm, and based on the results of simulations in NS, a specific fitness function was derived, optimized, and customized for all the operational stages of WSNs. PMID:24693235
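    A toy genetic search for the clustering stage shows how a WSN fitness function can be encoded as selection, crossover, and mutation over cluster-head choices. The node field and the distance-based energy proxy are illustrative assumptions, not the survey's NS-derived fitness:

```python
import random

random.seed(4)

# Hypothetical 30-node field; energy proxy: each node transmits to its nearest
# cluster head, and heads relay to the sink. Squared distance stands in for cost.
N, K, SINK = 30, 4, (50.0, 50.0)
nodes = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(N)]

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def fitness(heads):  # lower is better: a toy energy model, not the paper's
    cost = sum(min(dist2(n, nodes[h]) for h in heads) for n in nodes)
    return cost + sum(dist2(nodes[h], SINK) for h in heads)

def random_ind():
    return random.sample(range(N), K)

def mutate(ind):
    ind = ind[:]
    ind[random.randrange(K)] = random.choice([i for i in range(N) if i not in ind])
    return ind

pop = [random_ind() for _ in range(40)]
for _ in range(80):                      # generational GA with elitist survival
    pop.sort(key=fitness)
    survivors = pop[:20]
    children = []
    while len(children) < 20:
        a, b = random.sample(survivors, 2)
        mix = list(dict.fromkeys(a + b))[:K]     # crossover: merge parent heads
        children.append(mutate(mix))
    pop = survivors + children
best = min(pop, key=fitness)
print("cluster heads:", sorted(best), "energy proxy:", round(fitness(best), 1))
```

    The same skeleton extends to the other stages the survey lists (placement, coverage, aggregation) by swapping in the corresponding chromosome encoding and fitness function.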

  19. Acute bacterial endocarditis. Optimizing surgical results.

    PubMed

    Larbalestier, R I; Kinchla, N M; Aranki, S F; Couper, G S; Collins, J J; Cohn, L H

    1992-11-01

    Acute bacterial endocarditis continues to be a condition with high morbidity. Although the majority of patients are treated by high-dose antibiotics, a high-risk patient group requires surgical intervention, which is the subject of this article. From 1972 to 1991, 3,820 patients underwent heart valve replacement at the Brigham and Women's Hospital, Boston. Of this group, 158 patients underwent surgery for acute bacterial endocarditis: 109 had native valve endocarditis (NVE), and 49 had prosthetic valve endocarditis (PVE). There were 108 men and 50 women with a mean age of 49 years (range, 16-79 years); 64% were New York Heart Association functional class IV before surgery, and 12% of the group had a history of intravenous drug abuse. In both NVE and PVE groups, Streptococcus was the predominant infecting agent. Uncontrolled sepsis, progressive congestive failure, peripheral emboli, and echocardiographically demonstrated vegetations were the most common indications for surgery. Eighty-five percent of patients had a single-valve procedure, 15% had a multivalve procedure, and 34 patients had other associated major cardiac procedures. The operative mortality was 6% in NVE and 22% in PVE. Long-term survival at 10 years was 66% for NVE and 29% for PVE. Freedom from recurrent endocarditis at 10 years was 85% for NVE and 82% for PVE. The main factors associated with decreased survival overall were PVE and nonstreptococcal infection. The morbidity and mortality after surgical treatment of acute endocarditis depend on the site, the severity, and the subject infected. Early aggressive surgical intervention is indicated to optimize surgical results, especially in patients with nonstreptococcal infection or PVE.

  20. Optimization of the cooling profile to achieve crack-free Yb:S-FAP crystals

    NASA Astrophysics Data System (ADS)

    Fang, H. S.; Qiu, S. R.; Zheng, L. L.; Schaffers, K. I.; Tassano, J. B.; Caird, J. A.; Zhang, H.

    2008-08-01

    Yb:S-FAP [Yb3+:Sr5(PO4)3F] crystals are an important gain medium for diode-pumped laser applications. Growth of 7.0 cm diameter Yb:S-FAP crystals utilizing the Czochralski (CZ) method from SrF2-rich melts often encounters cracks during the post-growth cool-down stage. To suppress cracking during cool-down, a numerical simulation of the growth system was used to understand the correlation between the furnace power during cool-down and the radial temperature differences within the crystal. The critical radial temperature difference, above which the crystal cracks, has been determined by benchmarking the simulation results against experimental observations. Based on this comparison, an optimal three-stage ramp-down profile was implemented, which produced high-quality, crack-free Yb:S-FAP crystals.

  1. Energy efficient LED layout optimization for near-uniform illumination

    NASA Astrophysics Data System (ADS)

    Ali, Ramy E.; Elgala, Hany

    2016-09-01

    In this paper, we consider the problem of designing an energy-efficient light-emitting diode (LED) layout while satisfying illumination constraints. Towards this objective, we present a simple approach to the illumination design problem based on the concept of the virtual LED. We formulate a constrained optimization problem for minimizing the power consumption while maintaining near-uniform illumination throughout the room. By solving the resulting constrained linear program, we obtain the number of required LEDs and the optimal output luminous intensities that achieve the desired illumination constraints.
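    The constrained-linear-program formulation can be reproduced at toy scale with `scipy.optimize.linprog`: intensities are the decision variables, and the illumination at each work-plane point is a linear function of them. The room geometry, the Lambertian-style gain model, and the (deliberately loose) lux band below are invented placeholders rather than the paper's setup:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical room: candidate LED positions on the ceiling (z = 3 m) and a
# grid of work-plane points (z = 0.8 m); a Lambertian-style gain model.
leds = np.array([(x, y, 3.0) for x in (1, 2.5, 4) for y in (1, 2.5, 4)])
grid = np.array([(x, y, 0.8) for x in np.linspace(0.5, 4.5, 6)
                             for y in np.linspace(0.5, 4.5, 6)])

diff = grid[:, None, :] - leds[None, :, :]
d2 = (diff ** 2).sum(-1)
cos_t = (3.0 - 0.8) / np.sqrt(d2)      # angle from the downward LED normal
G = cos_t / d2                         # illuminance per unit luminous intensity

E_min, E_max = 300.0, 600.0            # illustrative lux band (uniformity bound)
# Minimize total intensity (power proxy) s.t. E_min <= G @ I <= E_max, I >= 0.
A_ub = np.vstack([-G, G])
b_ub = np.concatenate([-E_min * np.ones(len(grid)), E_max * np.ones(len(grid))])
res = linprog(np.ones(len(leds)), A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * len(leds))
print("feasible:", res.success, "total intensity:", res.fun)
```

    LEDs whose optimal intensity comes back zero are simply not installed, which is how a single LP yields both the number of required LEDs and their drive levels.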

  2. Dithiothreitol-Regulated Coverage of Oligonucleotide-Modified Gold Nanoparticles To Achieve Optimized Biosensor Performance.

    PubMed

    Liang, Pingping; Canoura, Juan; Yu, Haixiang; Alkhamis, Obtin; Xiao, Yi

    2018-01-31

    DNA-modified gold nanoparticles (AuNPs) are useful signal-reporters for detecting diverse molecules through various hybridization- and enzyme-based assays. However, their performance is heavily dependent on the probe DNA surface coverage, which can influence both target binding and enzymatic processing of the bound probes. Current methods used to adjust the surface coverage of DNA-modified AuNPs require the production of multiple batches of AuNPs under different conditions, which is costly and laborious. We here develop a single-step assay utilizing dithiothreitol (DTT) to fine-tune the surface coverage of DNA-modified AuNPs. DTT is superior to the commonly used surface diluent, mercaptohexanol, as it is less volatile, allowing for the rapid and reproducible controlling of surface coverage on AuNPs with only micromolar concentrations of DTT. Upon adsorption, DTT forms a dense monolayer on gold surfaces, which provides antifouling capabilities. Furthermore, surface-bound DTT adopts a cyclic conformation, which reorients DNA probes into an upright position and provides ample space to promote DNA hybridization, aptamer assembly, and nuclease digestion. We demonstrate the effects of surface coverage on AuNP-based sensors using DTT-regulated DNA-modified AuNPs. We then use these AuNPs to visually detect DNA and cocaine in colorimetric assays based on enzyme-mediated AuNP aggregation. We determine that DTT-regulated AuNPs with lower surface coverage achieve shorter reaction times and lower detection limits relative to those for assays using untreated AuNPs or DTT-regulated AuNPs with high surface coverage. Additionally, we demonstrate that our DTT-regulated AuNPs can perform cocaine detection in 50% urine without any significant matrix effects. We believe that DTT regulation of surface coverage can be broadly employed for optimizing DNA-modified AuNP performance for use in biosensors as well as drug delivery and therapeutic applications.

  3. The Effects of Brain Based Learning Approach on Motivation and Students Achievement in Mathematics Learning

    NASA Astrophysics Data System (ADS)

    Mekarina, M.; Ningsih, Y. P.

    2017-09-01

    This classroom action research is motivated by the fact that students' motivation and achievement in mathematics learning are low. One of the contributing factors is instruction that does not give students the flexibility to empower the potential of the brain optimally. The aim of this research was to improve student motivation and achievement in mathematics learning by implementing a brain-based learning approach. The subjects were grade XI students in a senior high school. The research consisted of two cycles. Data on student achievement were collected through tests, and data on student motivation through a questionnaire. The analysis showed that implementing the brain-based learning approach can improve students' achievement and motivation in mathematics learning.

  4. Effective Teaching Results in Increased Science Achievement for All Students

    ERIC Educational Resources Information Center

    Johnson, Carla C.; Kahle, Jane Butler; Fargo, Jamison D.

    2007-01-01

    This study of teacher effectiveness and student achievement in science demonstrated that effective teachers positively impact student learning. A general linear mixed model was used to assess change in student scores on the Discovery Inquiry Test as a function of time, race, teacher effectiveness, gender, and impact of teacher effectiveness in…

  5. Optimal shutdown management

    NASA Astrophysics Data System (ADS)

    Bottasso, C. L.; Croce, A.; Riboldi, C. E. D.

    2014-06-01

    The paper presents a novel approach for the synthesis of the open-loop pitch profile during emergency shutdowns. The problem is of interest in the design of wind turbines, as such maneuvers often generate design driving loads on some of the machine components. The pitch profile synthesis is formulated as a constrained optimal control problem, solved numerically using a direct single shooting approach. A cost function expressing a compromise between load reduction and rotor overspeed is minimized with respect to the unknown blade pitch profile. Constraints may include a load reduction not-to-exceed the next dominating loads, a not-to-be-exceeded maximum rotor speed, and a maximum achievable blade pitch rate. Cost function and constraints are computed over a possibly large number of operating conditions, defined so as to cover as well as possible the operating situations encountered in the lifetime of the machine. All such conditions are simulated by using a high-fidelity aeroservoelastic model of the wind turbine, ensuring the accuracy of the evaluation of all relevant parameters. The paper demonstrates the capabilities of the novel proposed formulation, by optimizing the pitch profile of a multi-MW wind turbine. Results show that the procedure can reliably identify optimal pitch profiles that reduce design-driving loads, in a fully automated way.
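    A direct single-shooting version of this synthesis fits in a few lines once the aeroservoelastic model is replaced by a deliberately crude one-state rotor ODE. Every number below (inertia, torque law, speed limit, weights, rate bound) is an invented placeholder, not wind-turbine data:

```python
import numpy as np
from scipy.optimize import minimize

# Toy single-shooting setup: J*dW/dt = Q_aero(pitch) - losses, for illustration.
J, Q0, dt, T = 1.0e7, 1.5e6, 0.1, 10.0   # inertia, torque scale, step, horizon
steps = int(T / dt)
seg = steps // 5                          # 5-segment open-loop pitch-rate profile

def simulate(rates):
    rate = np.repeat(rates, seg)[:steps]  # pitch rate (deg/s) at each time step
    pitch, W = 0.0, 1.6                   # initial pitch (deg) and speed (rad/s)
    W_hist, load = [], 0.0
    for r in rate:
        pitch += r * dt
        Q = Q0 * max(0.0, 1.0 - pitch / 20.0)           # torque fades as blades feather
        W = max(0.0, W + (Q / J) * dt - 0.02 * W * dt)  # small resistive losses
        W_hist.append(W)
        load += r * r * dt                # load proxy: pitch-actuation activity
    return np.array(W_hist), load

def cost(rates):  # compromise between rotor overspeed and load reduction
    W_hist, load = simulate(rates)
    return 1e4 * max(0.0, W_hist.max() - 1.8) + load

res = minimize(cost, x0=np.full(5, 4.0), method="L-BFGS-B",
               bounds=[(0.0, 8.0)] * 5)   # maximum achievable pitch rate
W_hist, _ = simulate(res.x)
print("optimized pitch rates:", np.round(res.x, 2), "peak speed:", W_hist.max())
```

    The paper's formulation differs mainly in scale: the simulation inside `cost` is a high-fidelity aeroservoelastic run evaluated over many operating conditions, and the constraints (load bounds, overspeed, rate limits) are handled explicitly rather than as penalties.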

  6. Intensity modulated neutron radiotherapy optimization by photon proxy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snyder, Michael; Hammoud, Ahmad; Bossenberger, Todd

    2012-08-15

    Purpose: Introducing intensity modulation into neutron radiotherapy (IMNRT) planning has the potential to mitigate some normal tissue complications seen in past neutron trials. While the hardware to deliver IMNRT plans has been in use for several years, until recently the IMNRT planning process has been cumbersome and of lower fidelity than conventional photon plans. Our in-house planning system used to calculate neutron therapy plans allows beam weight optimization of forward planned segments, but does not provide inverse optimization capabilities. Commercial treatment planning systems provide inverse optimization capabilities, but currently cannot model our neutron beam. Methods: We have developed a methodology and software suite to make use of the robust optimization in our commercial planning system while still using our in-house planning system to calculate final neutron dose distributions. Optimized multileaf collimator (MLC) leaf positions for segments designed in the commercial system using a 4 MV photon proxy beam are translated into static neutron ports that can be represented within our in-house treatment planning system. The true neutron dose distribution is calculated in the in-house system and then exported back through the MATLAB software into the commercial treatment planning system for evaluation. Results: The planning process produces optimized IMNRT plans that reduce dose to normal tissue structures as compared to 3D conformal plans using static MLC apertures. The process involves standard planning techniques using a commercially available treatment planning system, and is not significantly more complex than conventional IMRT planning. Using a photon proxy in a commercial optimization algorithm produces IMNRT plans that are more conformal than those previously designed at our center and take much less time to create. Conclusions: The planning process presented here allows for the optimization of IMNRT plans by a commercial treatment planning system.

  7. Reactive power optimization strategy considering analytical impedance ratio

    NASA Astrophysics Data System (ADS)

    Wu, Zhongchao; Shen, Weibing; Liu, Jinming; Guo, Maoran; Zhang, Shoulin; Xu, Keqiang; Wang, Wanjun; Sui, Jinlong

    2017-05-01

    In this paper, considering that traditional reactive power optimization cannot realize continuous voltage adjustment and voltage stability, a dynamic reactive power optimization strategy is proposed to achieve both minimal network loss and high voltage stability with wind power. Because wind power generation is fluctuating and uncertain, electrical equipment such as transformers and shunt capacitors may be operated frequently in pursuit of minimal network loss, which shortens the lifetime of these devices. To solve this problem, this paper introduces the derivation of an analytical impedance ratio based on the Thevenin equivalent. A multi-objective function is then proposed to minimize both the network loss and the analytical impedance ratio. Finally, taking the improved IEEE 33-bus distribution system as an example, the results show that the movement of voltage-control equipment is reduced while the network-loss increment is controlled, which proves the practical value of this strategy.

  8. Optimized Orthovoltage Stereotactic Radiosurgery

    NASA Astrophysics Data System (ADS)

    Fagerstrom, Jessica M.

    Because of its ability to treat intracranial targets effectively and noninvasively, stereotactic radiosurgery (SRS) is a prevalent treatment modality in modern radiation therapy. This work focused on SRS delivering rectangular function dose distributions, which are desirable for some targets such as those with functional tissue included within the target volume. In order to achieve such distributions, this work used fluence modulation and energies lower than those utilized in conventional SRS. In this work, the relationship between prescription isodose and dose gradients was examined for standard, unmodulated orthovoltage SRS dose distributions. Monte Carlo-generated energy deposition kernels were used to calculate 4pi, isocentric dose distributions for a polyenergetic orthovoltage spectrum, as well as monoenergetic orthovoltage beams. The relationship between dose gradients and prescription isodose was found to be field size and energy dependent, and values were found for prescription isodose that optimize dose gradients. Next, a pencil-beam model was used with a Genetic Algorithm search heuristic to optimize the spatial distribution of added tungsten filtration within apertures of cone collimators in a moderately filtered 250 kVp beam. Four cone sizes at three depths were examined with a Monte Carlo model to determine the effects of the optimized modulation compared to open cones, and the simulations found that the optimized cones were able to achieve both improved penumbra and flatness statistics at depth compared to the open cones. Prototypes of the filter designs calculated using mathematical optimization techniques and Monte Carlo simulations were then manufactured and inserted into custom built orthovoltage SRS cone collimators. A positioning system built in-house was used to place the collimator and filter assemblies temporarily in the 250 kVp beam line. Measurements were performed in water using radiochromic film scanned with both a standard white light

  9. Design and multi-physics optimization of rotary MRF brakes

    NASA Astrophysics Data System (ADS)

    Topcu, Okan; Taşcıoğlu, Yiğit; Konukseven, Erhan İlhan

    2018-03-01

    Particle swarm optimization (PSO) is a popular method for solving optimization problems. However, the calculations for each particle become excessive as the number of particles and the complexity of the problem increase. As a result, the execution speed becomes too slow to reach the optimized solution. Thus, this paper proposes an automated design and optimization method for rotary MRF brakes and similar multi-physics problems. A modified PSO algorithm is developed for solving multi-physics engineering optimization problems. The difference between the proposed method and conventional PSO is that the original single population is split into several subpopulations according to a division of labor. The distribution of tasks and the transfer of information to the next party are inspired by the behavior of a hunting party. Simulation results show that the proposed modified PSO algorithm can overcome the heavy computational burden of multi-physics problems while improving accuracy. Wire type, MR fluid type, magnetic core material, and ideal current inputs have been determined by the optimization process. To the best of the authors' knowledge, this multi-physics approach is novel for optimizing rotary MRF brakes, and the developed PSO algorithm is capable of solving other multi-physics engineering optimization problems. The proposed method has shown better performance than conventional PSO and has provided small, lightweight, high-impedance rotary MRF brake designs.
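    The division-of-labor idea — several subswarms that each follow their own leader and periodically pass information between parties — can be sketched on a standard multimodal test function. The migration rule and all constants below are illustrative guesses, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(5)

def rastrigin(x):  # standard multimodal test function, minimum 0 at the origin
    return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

n_sub, sub_size, dim, iters = 4, 15, 4, 200
w, c1, c2 = 0.6, 1.6, 1.6
x = rng.uniform(-5.12, 5.12, (n_sub, sub_size, dim))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), rastrigin(x)
sbest = pbest[np.arange(n_sub), pbest_f.argmin(1)]   # one leader per subswarm

for it in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (sbest[:, None, :] - x)
    x = np.clip(x + v, -5.12, 5.12)
    f = rastrigin(x)
    imp = f < pbest_f
    pbest[imp], pbest_f[imp] = x[imp], f[imp]
    sbest = pbest[np.arange(n_sub), pbest_f.argmin(1)]
    if it % 25 == 24:                    # periodic information transfer: the best
        order = pbest_f.min(1).argsort() # subswarm hands its leader to the worst
        sbest[order[-1]] = sbest[order[0]]
gbest_f = float(pbest_f.min())
print("best value found:", gbest_f)
```

    In a multi-physics setting, each subswarm would additionally be assigned its share of the expensive coupled evaluations, which is where the computational savings the paper reports come from.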

  10. Control strategy optimization of HVAC plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Facci, Andrea Luigi; Zanfardino, Antonella; Martini, Fabrizio

    In this paper we present a methodology to optimize the operating conditions of heating, ventilation and air conditioning (HVAC) plants to achieve higher energy efficiency in use. Semi-empirical numerical models of the plant components are used to predict their performance as a function of their set-points and the environmental and occupied-space conditions. The optimization is performed through a graph-based algorithm that finds the set-points of the system components that minimize energy consumption and/or energy costs, while matching the user energy demands. The resulting model can be used with systems of almost any complexity, featuring both HVAC components and energy systems, and is sufficiently fast to make it applicable in real-time settings.
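    The graph-based idea — discrete set-point levels as nodes, hour-to-hour transitions as edges, and the energy needed to meet demand as the edge weight — reduces to a layered shortest-path problem. The demand profile, capacities, and part-load efficiency curve below are invented toy numbers, not the paper's semi-empirical models:

```python
import math

demand = [20, 35, 50, 45, 30]            # kW thermal demand per hour (toy data)
levels = [0.4, 0.6, 0.8, 1.0]            # discrete part-load set-points

def energy(level, load):
    cap = 60.0 * level                   # capacity delivered at this set-point
    if cap < load:
        return math.inf                  # demand not met: infeasible node
    cop = 2.0 + 1.5 * level - 0.8 * level**2   # toy part-load efficiency curve
    return load / cop                    # electric energy consumed

# Dynamic programming (shortest path through the layered graph); the set-point
# may move at most one level per hour.
best = [energy(lv, demand[0]) for lv in levels]
back = []
for load in demand[1:]:
    prev, best = best, [math.inf] * len(levels)
    choice = [0] * len(levels)
    for j, lv in enumerate(levels):
        for i in range(max(0, j - 1), min(len(levels), j + 2)):
            c = prev[i] + energy(lv, load)
            if c < best[j]:
                best[j], choice[j] = c, i
    back.append(choice)
j = min(range(len(levels)), key=lambda k: best[k])
plan = [j]
for choice in reversed(back):            # walk the back-pointers to recover plan
    j = choice[j]
    plan.append(j)
plan.reverse()
print("set-point plan:", [levels[j] for j in plan], "total energy:", min(best))
```

    Because each layer only connects to neighboring set-points, the graph stays sparse, which is what keeps this kind of search fast enough for real-time use.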

  11. Simultaneous optimization of loading pattern and burnable poison placement for PWRs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alim, F.; Ivanov, K.; Yilmaz, S.

    2006-07-01

    To solve the in-core fuel management optimization problem, GARCO-PSU (Genetic Algorithm Reactor Core Optimization - Pennsylvania State Univ.) was developed. This code is applicable to all types and geometries of PWR core structures with an unlimited number of fuel assembly (FA) types in the inventory. For this purpose an innovative genetic algorithm was developed by modifying the classical representation of the genotype. In-core fuel management heuristic rules are introduced into GARCO. Core re-load design optimization has two parts: loading pattern (LP) optimization and burnable poison (BP) placement optimization. These parts depend on each other, but it is difficult to solve the combined problem because of its large size. Separating the problem into two parts provides a practical way to solve it; however, the result of this method does not reflect the true optimal solution. GARCO-PSU solves LP optimization and BP placement optimization simultaneously in an efficient manner. (authors)

  12. An Analysis of the DER Adoption Climate in Japan UsingOptimization Results for Prototype Buildings with U.S. Comparisons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Nan; Marnay, Chris; Firestone, Ryan

    2006-06-16

    This research demonstrates economically optimal distributed energy resource (DER) system choice using the DER choice and operations optimization program, the Distributed Energy Resources Customer Adoption Model (DER-CAM). DER-CAM finds the optimal combination of installed equipment given prevailing utility tariffs and fuel prices, site electrical and thermal loads (including absorption cooling), and a menu of available equipment. It provides a global optimization, albeit idealized, that shows how site useful energy loads can be served at minimum cost. Five prototype Japanese commercial buildings are examined and DER-CAM is applied to select the economically optimal DER system for each. Based on the optimization results, energy and emission reductions are evaluated. Significant decreases in fuel consumption, carbon emissions, and energy costs were seen in the DER-CAM results. Savings were most noticeable in the prototype sports facility, followed by the hospital, hotel, and office building. Results show that DER with combined heat and power equipment is a promising efficiency and carbon mitigation strategy, but that precise system design is necessary. Furthermore, a Japan-U.S. comparison study of policy, technology, and utility tariffs relevant to DER installation is presented.

  13. Optimization of the cooling profile to achieve crack-free Yb:S-FAP crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, H; Qiu, S; Kheng, L

    Yb:S-FAP [Yb3+:Sr5(PO4)3F] crystals are an important gain medium for diode-pumped laser applications. Growth of 7.0 cm diameter Yb:S-FAP crystals utilizing the Czochralski (CZ) method from SrF2-rich melts often encounters cracks during the post-growth cool-down stage. To suppress cracking during cool-down, a numerical simulation of the growth system was used to understand the correlation between the furnace power during cool-down and the radial temperature differences within the crystal. The critical radial temperature difference, above which the crystal cracks, has been determined by benchmarking the simulation results against experimental observations. Based on this comparison, an optimal three-stage ramp-down profile was implemented and produced high-quality, crack-free Yb:S-FAP crystals.

  14. Axiomatic Design of a Framework for the Comprehensive Optimization of Patient Flows in Hospitals

    PubMed

    Arcidiacono, Gabriele; Matt, Dominik T.; Rauch, Erwin

    2017-01-01

    Lean Management and Six Sigma are nowadays applied not only in the manufacturing industry but also in the service industry and public administration. The manifold variables affecting the Health Care system minimize the effect of any narrow Lean intervention. Therefore, this paper discusses a comprehensive, system-based approach for achieving a factual holistic optimization of patient flows. The paper examines the efficacy of Lean principles applied to the optimization of patient flows and the related activities, structures, and resources, developing a theoretical framework based on the principles of Axiomatic Design. The demand for patient-oriented and efficient health services motivates the use of these methodologies to improve hospital processes. In the framework, patients with similar characteristics are clustered into families to achieve homogeneous flows through the value stream. An optimization checklist is outlined as the result of the mapping between Functional Requirements and Design Parameters, giving the right sequence of steps to optimize the patient flow according to the principles of Axiomatic Design. The Axiomatic Design-based top-down implementation of Health Care evidence, according to Lean principles, results in a holistic optimization of hospital patient flows by reducing the complexity of the system. © 2017 Gabriele Arcidiacono et al.

  15. Optimal Implementations for Reliable Circadian Clocks

    NASA Astrophysics Data System (ADS)

    Hasegawa, Yoshihiko; Arita, Masanori

    2014-09-01

    Circadian rhythms are acquired through evolution to increase the chances for survival through synchronizing with the daylight cycle. Reliable synchronization is realized through two trade-off properties: regularity to keep time precisely, and entrainability to synchronize the internal time with daylight. We find by using a phase model with multiple inputs that achieving the maximal limit of regularity and entrainability entails many inherent features of the circadian mechanism. At the molecular level, we demonstrate the role sharing of two light inputs, phase advance and delay, as is well observed in mammals. At the behavioral level, the optimal phase-response curve inevitably contains a dead zone, a time during which light pulses neither advance nor delay the clock. We reproduce the results of phase-controlling experiments entrained by two types of periodic light pulses. Our results indicate that circadian clocks are designed optimally for reliable clockwork through evolution.
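    A one-oscillator phase model makes the entrainment claim concrete: the clock free-runs slower than 24 h, and a phase-response curve with a dead zone pulls it onto the daylight cycle. All parameters below are illustrative choices, not the paper's model:

```python
import math

def Z(theta):
    # Phase-response curve (period 1) with a dead zone around subjective midday:
    # light advances the clock early in the day, delays it late, does nothing in between.
    t = theta % 1.0
    if 0.25 <= t < 0.5:
        return 0.0                       # dead zone: light has no effect
    return 0.05 * math.sin(2 * math.pi * t)

period_clock, period_day, dt = 24.6, 24.0, 0.05   # hours; clock runs slow
theta, checkpoints = 0.0, []
for day in range(40):
    for step in range(int(period_day / dt)):
        t_of_day = step * dt
        light = 1.0 if t_of_day < 12.0 else 0.0   # 12 h light / 12 h dark
        theta += dt / period_clock + light * Z(theta) * dt
    checkpoints.append(theta)

# If entrained, the clock completes exactly one cycle per day after transients,
# despite its 24.6 h free-running period.
cycles_last10 = checkpoints[-1] - checkpoints[-11]
print("cycles in final 10 days:", round(cycles_last10, 3))
```

    The locked phase settles where the light window overlaps just enough of the advance region to cancel the 0.6 h daily mismatch, which is the mechanism behind the regularity/entrainability trade-off discussed in the abstract.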

  16. Combined micromechanical and fabrication process optimization for metal-matrix composites

    NASA Technical Reports Server (NTRS)

    Morel, M.; Saravanos, D. A.; Chamis, C. C.

    1991-01-01

    A method is presented to minimize the residual matrix stresses in metal matrix composites. Fabrication parameters such as temperature and consolidation pressure are optimized concurrently with the characteristics (i.e., modulus, coefficient of thermal expansion, strength, and interphase thickness) of a fiber-matrix interphase. By including the interphase properties in the fabrication process, lower residual stresses are achievable. Results for an ultra-high modulus graphite (P100)/copper composite show a reduction of 21 percent for the maximum matrix microstress when optimizing the fabrication process alone. Concurrent optimization of the fabrication process and interphase properties show a 41 percent decrease in the maximum microstress. Therefore, this optimization method demonstrates the capability of reducing residual microstresses by altering the temperature and consolidation pressure histories and tailoring the interphase properties for an improved composite material. In addition, the results indicate that the consolidation pressures are the most important fabrication parameters, and the coefficient of thermal expansion is the most critical interphase property.

  17. Using string invariants for prediction searching for optimal parameters

    NASA Astrophysics Data System (ADS)

    Bundzel, Marek; Kasanický, Tomáš; Pinčák, Richard

    2016-02-01

    We have developed a novel prediction method based on string invariants. The method does not require learning but a small set of parameters must be set to achieve optimal performance. We have implemented an evolutionary algorithm for the parametric optimization. We have tested the performance of the method on artificial and real world data and compared the performance to statistical methods and to a number of artificial intelligence methods. We have used data and the results of a prediction competition as a benchmark. The results show that the method performs well in single step prediction but the method's performance for multiple step prediction needs to be improved. The method works well for a wide range of parameters.

  18. Robust and fast nonlinear optimization of diffusion MRI microstructure models.

    PubMed

    Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A

    2017-07-15

    Advances in biophysical multi-compartment modeling for diffusion MRI (dMRI) have gained popularity because of greater specificity than DTI in relating the dMRI signal to underlying cellular microstructure. A large range of these diffusion microstructure models have been developed, and each of the popular models comes with its own, often different, optimization algorithm, noise model and initialization strategy to estimate its parameter maps. Since data fit, accuracy and precision are hard to verify, this creates additional challenges to comparability and generalization of results from diffusion microstructure models. In addition, non-linear optimization is computationally expensive, leading to very long run times, which can be prohibitive in large group or population studies. In this technical note we investigate the performance of several optimization algorithms and initialization strategies over a few of the most popular diffusion microstructure models, including NODDI and CHARMED. We evaluate whether a single well performing optimization approach exists that could be applied to many models and would equate both run time and fit aspects. All models, algorithms and strategies were implemented on the Graphics Processing Unit (GPU) to remove run time constraints, with which we achieve whole brain dataset fits in seconds to minutes. We then evaluated fit, accuracy, precision and run time for different models of differing complexity against three common optimization algorithms and three parameter initialization strategies. Variability of the achieved quality of fit in actual data was evaluated on ten subjects of each of two population studies with a different acquisition protocol. We find that optimization algorithms and multi-step optimization approaches have a considerable influence on performance and stability over subjects and over acquisition protocols. 
The gradient-free Powell conjugate-direction algorithm was found to outperform other common algorithms in terms of

  19. Dynamic optimization of distributed biological systems using robust and efficient numerical techniques.

    PubMed

    Vilas, Carlos; Balsa-Canto, Eva; García, Maria-Sonia G; Banga, Julio R; Alonso, Antonio A

    2012-07-02

    Systems biology allows the analysis of biological systems behavior under different conditions through in silico experimentation. The possibility of perturbing biological systems in different manners calls for the design of perturbations to achieve particular goals. Examples include the design of a chemical stimulation to maximize the amplitude of a given cellular signal or to achieve a desired pattern in pattern formation systems, etc. Such design problems can be mathematically formulated as dynamic optimization problems, which are particularly challenging when the system is described by partial differential equations. This work addresses the numerical solution of such dynamic optimization problems for spatially distributed biological systems. The usual nonlinear and large scale nature of the mathematical models related to this class of systems and the presence of constraints on the optimization problems impose a number of difficulties, such as the presence of suboptimal solutions, which call for robust and efficient numerical techniques. Here, the use of a control vector parameterization approach combined with efficient and robust hybrid global optimization methods and a reduced order model methodology is proposed. The capabilities of this strategy are illustrated considering the solution of two challenging problems: bacterial chemotaxis and the FitzHugh-Nagumo model. In the process of chemotaxis the objective was to efficiently compute the time-varying optimal concentration of chemoattractant in one of the spatial boundaries in order to achieve predefined cell distribution profiles. Results are in agreement with those previously published in the literature. The FitzHugh-Nagumo problem is also efficiently solved and it illustrates very well how dynamic optimization may be used to force a system to evolve from an undesired to a desired pattern with a reduced number of actuators. 
The presented methodology can be used for the efficient dynamic optimization of

  20. Ant Lion Optimization algorithm for kidney exchanges.

    PubMed

    Hamouda, Eslam; El-Metwally, Sara; Tarek, Mayada

    2018-01-01

    Kidney exchange programs bring new insights into the field of organ transplantation. They make previously infeasible surgeries between incompatible patient-donor pairs possible on a large scale. Mathematically, the kidney exchange is an optimization problem over the number of possible exchanges among the incompatible pairs in a given pool. The optimization modeling should also consider the expected quality-adjusted life of transplant candidates and the shortage of computational and operational hospital resources. In this article, we introduce a bio-inspired stochastic-based Ant Lion Optimization, ALO, algorithm to the kidney exchange space to maximize the number of feasible cycles and chains among the pool pairs. The Ant Lion Optimizer-based program achieves kidney exchange results comparable to deterministic approaches like integer programming. ALO also outperforms other stochastic-based methods such as the Genetic Algorithm in terms of the efficient usage of computational resources and the quantity of resulting exchanges. The Ant Lion Optimization algorithm can be adopted easily for on-line exchanges and the integration of weights for hard-to-match patients, which will improve the future decisions of kidney exchange programs. A reference implementation for the ALO algorithm for kidney exchanges is written in MATLAB and is GPL licensed. It is available as free open-source software from: https://github.com/SaraEl-Metwally/ALO_algorithm_for_Kidney_Exchanges.
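
    The search space in such programs can be pictured as cycle enumeration over a directed compatibility graph. The sketch below enumerates the 2- and 3-cycles (the usual surgical limit) in a small toy pool; the edges and pool size are invented for illustration and are not from the article or its MATLAB implementation.

```python
from itertools import permutations

# Directed compatibility graph over incompatible patient-donor pairs:
# edge (i, j) means pair i's donor is compatible with pair j's patient.
# (Hypothetical toy pool.)
edges = {(0, 1), (1, 0), (1, 2), (2, 3), (3, 1), (2, 0)}
n = 4

def exchange_cycles(max_len=3):
    """Enumerate simple exchange cycles up to max_len pairs."""
    found = set()
    for k in range(2, max_len + 1):
        for combo in permutations(range(n), k):
            if combo[0] != min(combo):      # one canonical rotation only
                continue
            ring = combo + (combo[0],)
            if all((a, b) in edges for a, b in zip(ring, ring[1:])):
                found.add(combo)
    return found

cycles = exchange_cycles()
```

    An optimizer such as ALO then searches for a disjoint subset of these cycles (and chains) maximizing the number of transplants; brute-force enumeration as above only scales to small pools.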

  1. Aerodynamic design and optimization in one shot

    NASA Technical Reports Server (NTRS)

    Ta'asan, Shlomo; Kuruvila, G.; Salas, M. D.

    1992-01-01

    This paper describes an efficient numerical approach for the design and optimization of aerodynamic bodies. As in classical optimal control methods, the present approach introduces a cost function and a costate variable (Lagrange multiplier) in order to achieve a minimum. High efficiency is achieved by using a multigrid technique to solve for all the unknowns simultaneously, but restricting work on a design variable only to grids on which their changes produce nonsmooth perturbations. Thus, the effort required to evaluate design variables that have nonlocal effects on the solution is confined to the coarse grids. However, if a variable has a nonsmooth local effect on the solution in some neighborhood, it is relaxed in that neighborhood on finer grids. The cost of solving the optimal control problem is shown to be approximately two to three times the cost of the equivalent analysis problem. Examples are presented to illustrate the application of the method to aerodynamic design and constraint optimization.

  2. VMAT optimization with dynamic collimator rotation.

    PubMed

    Lyu, Qihui; O'Connor, Daniel; Ruan, Dan; Yu, Victoria; Nguyen, Dan; Sheng, Ke

    2018-04-16

    Although collimator rotation is an optimization variable that can be exploited for dosimetric advantages, existing Volumetric Modulated Arc Therapy (VMAT) optimization uses a fixed collimator angle in each arc and only rotates the collimator between arcs. In this study, we develop a novel integrated optimization method for VMAT, accounting for dynamic collimator angles during the arc motion. Direct Aperture Optimization (DAO) for Dynamic Collimator in VMAT (DC-VMAT) was achieved by adding to the existing dose fidelity objective an anisotropic total variation term for regulating the fluence smoothness, a binary variable for forming simple apertures, and a group sparsity term for controlling collimator rotation. The optimal collimator angle for each beam angle was selected using Dijkstra's algorithm, where the node costs depend on the estimated fluence map at the current iteration and the edge costs account for the mechanical constraints of the multi-leaf collimator (MLC). An alternating optimization strategy was implemented to solve the DAO and collimator angle selection (CAS). Feasibility of DC-VMAT using one full arc with dynamic collimator rotation was tested on a phantom with two small spherical targets, and on a brain, a lung and a prostate cancer patient. The plan was compared against a static collimator VMAT (SC-VMAT) plan using three full arcs with 60 degrees of collimator angle separation in patient studies. With the same target coverage, DC-VMAT achieved a 20.3% reduction of R50 in the phantom study, and reduced the average max and mean OAR dose by 4.49% and 2.53% of the prescription dose in patient studies, as compared with SC-VMAT. The collimator rotation was coordinated with the gantry rotation in DC-VMAT plans for deliverability. There were 13 beam angles in the single-arc DC-VMAT plan in patient studies that require slower gantry rotation to accommodate multiple collimator angles. The novel DC-VMAT approach utilizes the dynamic collimator rotation during arc
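
    The collimator angle selection step can be sketched as a shortest-path search on a layered graph: one layer per beam angle, node costs from the estimated fluence map, and edges allowed only when the rotation between consecutive beams respects the mechanical limit. The costs, angle grid, and rotation limit below are invented for illustration; in the paper they come from the fluence estimate and the MLC constraints.

```python
import heapq

# Layered graph: node_cost[b][c] is a (hypothetical) fluence-fit cost of
# using candidate collimator angle c at beam angle b.
node_cost = [
    [3.0, 1.0, 2.0],
    [2.0, 2.5, 0.5],
    [1.5, 1.0, 4.0],
]
angles = [0, 45, 90]          # candidate collimator angles (degrees)
MAX_ROT = 45                  # assumed rotation limit per gantry step

def best_collimator_path():
    """Uniform-cost (Dijkstra-style) search through the layers."""
    heap = [(node_cost[0][c], [c]) for c in range(len(angles))]
    heapq.heapify(heap)
    while heap:
        cost, path = heapq.heappop(heap)
        if len(path) == len(node_cost):
            return cost, path             # first complete path is optimal
        b = len(path)
        for c in range(len(angles)):
            if abs(angles[c] - angles[path[-1]]) <= MAX_ROT:
                heapq.heappush(heap, (cost + node_cost[b][c], path + [c]))
    return None

cost, path = best_collimator_path()
```

    With non-negative costs, the first complete path popped from the heap is guaranteed optimal, which is why this search returns immediately on completion.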

  3. Optimal control of complex atomic quantum systems

    PubMed Central

    van Frank, S.; Bonneau, M.; Schmiedmayer, J.; Hild, S.; Gross, C.; Cheneau, M.; Bloch, I.; Pichler, T.; Negretti, A.; Calarco, T.; Montangero, S.

    2016-01-01

    Quantum technologies will ultimately require manipulating many-body quantum systems with high precision. Cold atom experiments represent a stepping stone in that direction: a high degree of control has been achieved on systems of increasing complexity. However, this control is still sub-optimal. In many scenarios, achieving a fast transformation is crucial to fight against decoherence and imperfection effects. Optimal control theory is believed to be the ideal candidate to bridge the gap between early stage proof-of-principle demonstrations and experimental protocols suitable for practical applications. Indeed, it can engineer protocols at the quantum speed limit – the fastest achievable timescale of the transformation. Here, we demonstrate such potential by computing theoretically and verifying experimentally the optimal transformations in two very different interacting systems: the coherent manipulation of motional states of an atomic Bose-Einstein condensate and the crossing of a quantum phase transition in small systems of cold atoms in optical lattices. We also show that such processes are robust with respect to perturbations, including temperature and atom number fluctuations. PMID:27725688

  4. Optimal control of complex atomic quantum systems.

    PubMed

    van Frank, S; Bonneau, M; Schmiedmayer, J; Hild, S; Gross, C; Cheneau, M; Bloch, I; Pichler, T; Negretti, A; Calarco, T; Montangero, S

    2016-10-11

    Quantum technologies will ultimately require manipulating many-body quantum systems with high precision. Cold atom experiments represent a stepping stone in that direction: a high degree of control has been achieved on systems of increasing complexity. However, this control is still sub-optimal. In many scenarios, achieving a fast transformation is crucial to fight against decoherence and imperfection effects. Optimal control theory is believed to be the ideal candidate to bridge the gap between early stage proof-of-principle demonstrations and experimental protocols suitable for practical applications. Indeed, it can engineer protocols at the quantum speed limit - the fastest achievable timescale of the transformation. Here, we demonstrate such potential by computing theoretically and verifying experimentally the optimal transformations in two very different interacting systems: the coherent manipulation of motional states of an atomic Bose-Einstein condensate and the crossing of a quantum phase transition in small systems of cold atoms in optical lattices. We also show that such processes are robust with respect to perturbations, including temperature and atom number fluctuations.

  5. Taking It to the Top: A Lesson in Search Engine Optimization

    ERIC Educational Resources Information Center

    Frydenberg, Mark; Miko, John S.

    2011-01-01

    Search engine optimization (SEO), the promoting of a Web site so it achieves optimal position with a search engine's rankings, is an important strategy for organizations and individuals in order to promote their brands online. Techniques for achieving SEO are relevant to students of marketing, computing, media arts, and other disciplines, and many…

  6. Electrospinning fundamentals: optimizing solution and apparatus parameters.

    PubMed

    Leach, Michelle K; Feng, Zhang-Qi; Tuck, Samuel J; Corey, Joseph M

    2011-01-21

    Electrospun nanofiber scaffolds have been shown to accelerate the maturation, improve the growth, and direct the migration of cells in vitro. Electrospinning is a process in which a charged polymer jet is collected on a grounded collector; a rapidly rotating collector results in aligned nanofibers while stationary collectors result in randomly oriented fiber mats. The polymer jet is formed when an applied electrostatic charge overcomes the surface tension of the solution. There is a minimum concentration for a given polymer, termed the critical entanglement concentration, below which a stable jet cannot be achieved and no nanofibers will form - although nanoparticles may be achieved (electrospray). A stable jet has two domains, a streaming segment and a whipping segment. While the whipping jet is usually invisible to the naked eye, the streaming segment is often visible under appropriate lighting conditions. Observing the length, thickness, consistency and movement of the stream is useful to predict the alignment and morphology of the nanofibers being formed. A short, non-uniform, inconsistent, and/or oscillating stream is indicative of a variety of problems, including poor fiber alignment, beading, splattering, and curlicue or wavy patterns. The stream can be optimized by adjusting the composition of the solution and the configuration of the electrospinning apparatus, thus optimizing the alignment and morphology of the fibers being produced. In this protocol, we present a procedure for setting up a basic electrospinning apparatus, empirically approximating the critical entanglement concentration of a polymer solution and optimizing the electrospinning process. In addition, we discuss some common problems and troubleshooting techniques.

  7. Efficiency and optimal size of hospitals: Results of a systematic search

    PubMed Central

    Guglielmo, Annamaria

    2017-01-01

    Background National Health Systems managers have been subject in recent years to considerable pressure to increase concentration and allow mergers. This pressure has been justified by a belief that larger hospitals lead to lower average costs and better clinical outcomes through the exploitation of economies of scale. In this context, the opportunity to measure scale efficiency is crucial to address the question of optimal productive size and to manage a fair allocation of resources. Methods and findings This paper analyses the stance of existing research on scale efficiency and optimal size of the hospital sector. We performed a systematic search of 45 past years (1969–2014) of research published in peer-reviewed scientific journals recorded by the Social Sciences Citation Index concerning this topic. We classified articles by the journal’s category, research topic, hospital setting, method and primary data analysis technique. Results showed that most of the studies were focussed on the analysis of technical and scale efficiency or on input / output ratio using Data Envelopment Analysis. We also find increasing interest concerning the effect of possible changes in hospital size on quality of care. Conclusions Studies analysed in this review showed that economies of scale are present for merging hospitals. Results supported the current policy of expanding larger hospitals and restructuring/closing smaller hospitals. In terms of beds, studies reported consistent evidence of economies of scale for hospitals with 200–300 beds. Diseconomies of scale can be expected to occur below 200 beds and above 600 beds. PMID:28355255

  8. Quad-rotor flight path energy optimization

    NASA Astrophysics Data System (ADS)

    Kemper, Edward

    Quad-rotor unmanned aerial vehicles (UAVs) have been a popular area of research and development in the last decade, especially with the advent of affordable microcontrollers like the MSP 430 and the Raspberry Pi. Path-energy optimization is an area that is well developed for linear systems. In this thesis, the idea of path-energy optimization is extended to the nonlinear model of the quad-rotor UAV. The classical optimization technique is adapted to the nonlinear model derived for the problem at hand, yielding a set of partial differential equations and boundary value conditions to solve. Then, different techniques to implement energy optimization algorithms are tested using simulations in Python. First, a purely nonlinear approach is used. This method is shown to be computationally intensive, with no practical solution available in a reasonable amount of time. Second, heuristic techniques to minimize the energy of the flight path are tested, using Ziegler-Nichols' proportional integral derivative (PID) controller tuning technique. Finally, a brute force look-up table based PID controller is used. Simulation results of the heuristic method show that both reliable control of the system and path-energy optimization are achieved in a reasonable amount of time.
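
    The Ziegler-Nichols closed-loop rules referenced above compute PID gains from the ultimate gain Ku (at which the loop sustains oscillation) and the oscillation period Tu. A sketch of the classic tuning table follows; the Ku and Tu values are hypothetical, not measured from the thesis's quad-rotor.

```python
# Classic Ziegler-Nichols closed-loop tuning rules: given the ultimate
# gain Ku and the corresponding oscillation period Tu, the controller
# gains follow fixed ratios.
def ziegler_nichols(Ku, Tu, controller="PID"):
    rules = {                      # (Kp, Ti, Td)
        "P":   (0.50 * Ku, None,     None),
        "PI":  (0.45 * Ku, Tu / 1.2, None),
        "PID": (0.60 * Ku, Tu / 2.0, Tu / 8.0),
    }
    Kp, Ti, Td = rules[controller]
    Ki = None if Ti is None else Kp / Ti   # integral gain
    Kd = None if Td is None else Kp * Td   # derivative gain
    return Kp, Ki, Kd

# Hypothetical values for an attitude loop:
Kp, Ki, Kd = ziegler_nichols(Ku=10.0, Tu=0.8)
```

    These gains are a starting point; in practice (and in the thesis's simulations) they are refined further, since Ziegler-Nichols tuning tends to produce aggressive, oscillatory responses.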

  9. Matching achievement contexts with implicit theories to maximize motivation after failure: a congruence model.

    PubMed

    El-Alayli, Amani

    2006-12-01

    Previous research has shown that matching person variables with achievement contexts can produce the best motivational outcomes. The current study examines whether this is also true when matching entity and incremental beliefs with the appropriate motivational climate. Participants were led to believe that a personal attribute was fixed (entity belief) or malleable (incremental belief). After thinking that they failed a test that assessed the attribute, participants performed a second (related) task in a context that facilitated the pursuit of either performance or learning goals. Participants were expected to exhibit greater effort on the second task in the congruent conditions (entity belief plus performance goal climate and incremental belief plus learning goal climate) than in the incongruent conditions. These results were obtained, but only for participants who either valued competence on the attribute or had high achievement motivation. Results are discussed in terms of developing strategies for optimizing motivation in achievement settings.

  10. Chaotic Teaching-Learning-Based Optimization with Lévy Flight for Global Numerical Optimization.

    PubMed

    He, Xiangzhu; Huang, Jida; Rao, Yunqing; Gao, Liang

    2016-01-01

    Recently, teaching-learning-based optimization (TLBO), as one of the emerging nature-inspired heuristic algorithms, has attracted increasing attention. In order to enhance its convergence rate and prevent it from getting stuck in local optima, a novel metaheuristic has been developed in this paper, where particular characteristics of the chaos mechanism and Lévy flight are introduced to the basic framework of TLBO. The new algorithm is tested on several large-scale nonlinear benchmark functions with different characteristics and compared with other methods. Experimental results show that the proposed algorithm outperforms other algorithms and achieves a satisfactory improvement over TLBO.
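
    Lévy-flight steps such as those introduced into TLBO are commonly drawn with Mantegna's algorithm. The sketch below shows only that sampling step, not the full hybrid; the stability index beta and the sample count are illustrative choices.

```python
import math
import random

def levy_step(beta=1.5):
    """Draw one Levy-flight step via Mantegna's algorithm."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma_u)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

random.seed(0)
steps = [levy_step() for _ in range(1000)]
# Heavy-tailed: most steps are small, with occasional very large jumps
# that help the search escape local optima.
```

    In a metaheuristic, such a step typically scales a perturbation of a candidate solution, mixing many small local moves with rare long-range exploration.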

  11. Optimizing acoustical conditions for speech intelligibility in classrooms

    NASA Astrophysics Data System (ADS)

    Yang, Wonyoung

    High speech intelligibility is imperative in classrooms where verbal communication is critical. However, the optimal acoustical conditions to achieve a high degree of speech intelligibility have previously been investigated with inconsistent results, and practical room-acoustical solutions to optimize the acoustical conditions for speech intelligibility have not been developed. This experimental study validated auralization for speech-intelligibility testing, investigated the optimal reverberation for speech intelligibility for both normal and hearing-impaired listeners using more realistic room-acoustical models, and proposed an optimal sound-control design for speech intelligibility based on the findings. The auralization technique was used to perform subjective speech-intelligibility tests. The validation study, comparing auralization results with those of real classroom speech-intelligibility tests, found that if the room to be auralized is not very absorptive or noisy, speech-intelligibility tests using auralization are valid. The speech-intelligibility tests were done in two different auralized sound fields---approximately diffuse and non-diffuse---using the Modified Rhyme Test and both normal and hearing-impaired listeners. A hybrid room-acoustical prediction program was used throughout the work, and it and a 1/8 scale-model classroom were used to evaluate the effects of ceiling barriers and reflectors. For both subject groups, in approximately diffuse sound fields, when the speech source was closer to the listener than the noise source, the optimal reverberation time was zero. When the noise source was closer to the listener than the speech source, the optimal reverberation time was 0.4 s (with another peak at 0.0 s) with relative output power levels of the speech and noise sources SNS = 5 dB, and 0.8 s with SNS = 0 dB. 
In non-diffuse sound fields, when the noise source was between the speaker and the listener, the optimal reverberation time was 0.6 s with

  12. A programing system for research and applications in structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Rogers, J. L., Jr.

    1981-01-01

    The flexibility necessary for such diverse utilizations is achieved by combining, in a modular manner, a state-of-the-art optimization program, a production level structural analysis program, and user supplied and problem dependent interface programs. Standard utility capabilities in modern computer operating systems are used to integrate these programs. This approach results in flexibility of the optimization procedure organization and versatility in the formulation of constraints and design variables. Features shown in numerical examples include: variability of structural layout and overall shape geometry, static strength and stiffness constraints, local buckling failure, and vibration constraints.

  13. Numerical modeling and optimization of the Iguassu gas centrifuge

    NASA Astrophysics Data System (ADS)

    Bogovalov, S. V.; Borman, V. D.; Borisevich, V. D.; Tronin, V. N.; Tronin, I. V.

    2017-07-01

    The full procedure of the numerical calculation of the optimized parameters of the Iguassu gas centrifuge (GC) is under discussion. The procedure consists of a few steps. In the first step the problem of the hydrodynamical flow of the gas in the rotating rotor of the GC is solved numerically. In the second step the problem of diffusion of the binary mixture of isotopes is solved, after which the separation power of the gas centrifuge is calculated. In the last step the time-consuming procedure of optimization of the GC is performed, yielding the maximum separation power. The optimization is based on the BOBYQA method, exploring the results of numerical simulations of the hydrodynamics and diffusion of the mixture of isotopes. Fast convergence of the calculations is achieved due to the use of a direct solver for the hydrodynamical and diffusion parts of the problem. The optimized separative power and optimal internal parameters of the Iguassu GC with a 1 m rotor were calculated using the developed approach. The optimization procedure converges in 45 iterations, taking 811 minutes.

  14. Wind Turbine Optimization with WISDEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykes, Katherine L; Damiani, Rick R; Graf, Peter A

    This presentation for the Fourth Wind Energy Systems Engineering Workshop describes the analysis platform and research capability developed under the NREL wind energy systems engineering initiative to capture important system interactions, with the aim of better understanding how to improve system-level performance and achieve system-level cost reductions. Topics include the Wind-Plant Integrated System Design and Engineering Model (WISDEM) and multidisciplinary design analysis and optimization.

  15. Optimal growth trajectories with finite carrying capacity.

    PubMed

    Caravelli, F; Sindoni, L; Caccioli, F; Ududec, C

    2016-08-01

    We consider the problem of finding optimal strategies that maximize the average growth rate of multiplicative stochastic processes. For a geometric Brownian motion, the problem is solved through the so-called Kelly criterion, according to which the optimal growth rate is achieved by investing a constant given fraction of resources at any step of the dynamics. We generalize these findings to the case of dynamical equations with finite carrying capacity, which can find applications in biology, mathematical ecology, and finance. We formulate the problem in terms of a stochastic process with multiplicative noise and a nonlinear drift term that is determined by the specific functional form of carrying capacity. We solve the stochastic equation for two classes of carrying capacity functions (power laws and logarithmic), and in both cases we compute the optimal trajectories of the control parameter. We further test the validity of our analytical results using numerical simulations.
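
    The Kelly result cited above has a simple closed form for geometric Brownian motion: investing a constant fraction f gives the long-run log-growth rate g(f) = f·mu − f²·sigma²/2, maximized at f* = mu/sigma². A quick sketch verifying this by grid search; the parameter values are arbitrary, and the paper's carrying-capacity generalization is not reproduced here.

```python
# For geometric Brownian motion with drift mu and volatility sigma,
# the long-run log-growth rate of investing a constant fraction f is
#   g(f) = f * mu - 0.5 * f**2 * sigma**2,
# maximized at the Kelly fraction f* = mu / sigma**2.
mu, sigma = 0.06, 0.20

def growth_rate(f):
    return f * mu - 0.5 * f**2 * sigma**2

fs = [i / 100 for i in range(0, 301)]      # scan f in [0, 3]
f_best = max(fs, key=growth_rate)
f_kelly = mu / sigma**2
```

    Note that g(f) is a concave parabola in f, so over-betting beyond f* reduces, and eventually destroys, long-run growth even though expected wealth keeps rising.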

  16. Optimal growth trajectories with finite carrying capacity

    NASA Astrophysics Data System (ADS)

    Caravelli, F.; Sindoni, L.; Caccioli, F.; Ududec, C.

    2016-08-01

    We consider the problem of finding optimal strategies that maximize the average growth rate of multiplicative stochastic processes. For a geometric Brownian motion, the problem is solved through the so-called Kelly criterion, according to which the optimal growth rate is achieved by investing a constant given fraction of resources at any step of the dynamics. We generalize these findings to the case of dynamical equations with finite carrying capacity, which can find applications in biology, mathematical ecology, and finance. We formulate the problem in terms of a stochastic process with multiplicative noise and a nonlinear drift term that is determined by the specific functional form of carrying capacity. We solve the stochastic equation for two classes of carrying capacity functions (power laws and logarithmic), and in both cases we compute the optimal trajectories of the control parameter. We further test the validity of our analytical results using numerical simulations.

  17. Optimal Solution for an Engineering Applications Using Modified Artificial Immune System

    NASA Astrophysics Data System (ADS)

    Padmanabhan, S.; Chandrasekaran, M.; Ganesan, S.; patan, Mahamed Naveed Khan; Navakanth, Polina

    2017-03-01

    Engineering optimization plays an essential role in several application areas such as process design, product design, re-engineering, and new product development. In engineering, a very good solution is achieved by comparing several different candidate solutions using prior problem knowledge. Optimization algorithms provide systematic and efficient ways of constructing and comparing new design solutions in order to arrive at an optimal design, maximizing solution efficiency and design impact. In this paper, a new evolutionary Modified Artificial Immune System (MAIS) algorithm is used to optimize an engineering application: gear drive design. The results are compared with the existing design.

  18. An optimized implementation of a fault-tolerant clock synchronization circuit

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    1995-01-01

    A fault-tolerant clock synchronization circuit was designed and tested. A comparison to a previous design and the procedure followed to achieve the current optimization are included. The report also includes a description of the system and the results of tests performed to study the synchronization and fault-tolerant characteristics of the implementation.

  19. Integrated Arrival and Departure Schedule Optimization Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Xue, Min; Zelinski, Shannon

    2014-01-01

    In terminal airspace, integrating arrivals and departures with shared waypoints provides the potential of improving operational efficiency by allowing direct routes when possible. Incorporating stochastic evaluation as a post-analysis process of deterministic optimization, and imposing a safety buffer in deterministic optimization, are two ways to learn and alleviate the impact of uncertainty and to avoid unexpected outcomes. This work presents a third and direct way to take uncertainty into consideration during the optimization. The impact of uncertainty was incorporated into cost evaluations when searching for the optimal solutions. The controller intervention count was computed using a heuristic model and served as another stochastic cost besides total delay. Costs under uncertainty were evaluated using Monte Carlo simulations. The Pareto fronts that contain a set of solutions were identified and the trade-off between delays and controller intervention count was shown. Solutions that shared similar delays but had different intervention counts were investigated. The results showed that optimization under uncertainty could identify compromise solutions on Pareto fronts, which is better than deterministic optimization with extra safety buffers. It helps decision-makers reduce controller intervention while achieving low delays.
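
    Identifying a Pareto front from a set of evaluated solutions reduces to filtering out dominated points. A minimal sketch with hypothetical (total delay, intervention count) pairs, both objectives to be minimized:

```python
def pareto_front(points):
    """Return the non-dominated points when minimizing both objectives."""
    front = []
    for p in points:
        dominated = any(q != p and q[0] <= p[0] and q[1] <= p[1]
                        for q in points)
        if not dominated:
            front.append(p)
    return sorted(front)

# Hypothetical (total delay, controller intervention count) pairs:
solutions = [(10, 5), (12, 3), (9, 8), (11, 4), (14, 3), (9, 9)]
front = pareto_front(solutions)
```

    The surviving points are exactly the trade-off curve a decision-maker chooses from: improving one objective along the front necessarily worsens the other.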

  20. Optimized Periocular Template Selection for Human Recognition

    PubMed Central

    Sa, Pankaj K.; Majhi, Banshidhar

    2013-01-01

A novel approach is proposed for optimally selecting a rectangular template around the periocular region for human recognition. A template somewhat larger than the optimal one can be slightly more potent for recognition, but it heavily slows down the biometric system by making feature extraction computationally intensive and increasing the database size. A smaller template, on the contrary, cannot yield the desired recognition performance, although it is faster owing to the lower computation for feature extraction. These two contradictory objectives (namely, (a) minimizing the size of the periocular template and (b) maximizing recognition through the template) are optimized in the proposed research. This paper proposes four different approaches for dynamic optimal template selection from the periocular region. The proposed methods are tested on the publicly available unconstrained UBIRISv2 and FERET databases, and satisfactory results have been achieved. The template thus obtained can be used for recognition of individuals in an organization and can be generalized to recognize every citizen of a nation. PMID:23984370
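The size-versus-accuracy trade-off described above can be sketched with a toy model: accuracy saturates with template size while feature-extraction cost grows with area, so a sensible rule is to pick the smallest template within a small tolerance of the best achievable accuracy. The accuracy curve and the 1% tolerance are assumptions, not the paper's fitted values.

```python
import math

def recognition_accuracy(side):
    # Assumed saturating accuracy model: bigger templates help,
    # with diminishing returns (illustrative only).
    return 1.0 - math.exp(-side / 40.0)

def feature_cost(side):
    # Extraction time and storage grow with template area.
    return side * side

# Scan candidate square template sides and pick the knee: the smallest
# template within 1% of the best achievable accuracy.
sides = range(10, 201, 10)
best_acc = max(recognition_accuracy(s) for s in sides)
optimal = min(s for s in sides if recognition_accuracy(s) >= 0.99 * best_acc)
print(optimal, round(recognition_accuracy(optimal), 3), feature_cost(optimal))
```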

  1. A guided search genetic algorithm using mined rules for optimal affective product design

    NASA Astrophysics Data System (ADS)

    Fung, Chris K. Y.; Kwong, C. K.; Chan, Kit Yan; Jiang, H.

    2014-08-01

    Affective design is an important aspect of new product development, especially for consumer products, to achieve a competitive edge in the marketplace. It can help companies to develop new products that can better satisfy the emotional needs of customers. However, product designers usually encounter difficulties in determining the optimal settings of the design attributes for affective design. In this article, a novel guided search genetic algorithm (GA) approach is proposed to determine the optimal design attribute settings for affective design. The optimization model formulated based on the proposed approach applied constraints and guided search operators, which were formulated based on mined rules, to guide the GA search and to achieve desirable solutions. A case study on the affective design of mobile phones was conducted to illustrate the proposed approach and validate its effectiveness. Validation tests were conducted, and the results show that the guided search GA approach outperforms the GA approach without the guided search strategy in terms of GA convergence and computational time. In addition, the guided search optimization model is capable of improving GA to generate good solutions for affective design.
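The guided-search idea, mined rules steering a GA toward feasible attribute settings, can be sketched as follows. The affect-score weights and the single mined rule are invented for illustration; the paper's rules come from data mining over customer survey data.

```python
import random

random.seed(1)

N_ATTR = 8   # binary design attributes (illustrative)

def affect_score(x):
    # Toy stand-in for the affective-satisfaction model.
    weights = [3, 1, 4, 1, 5, 2, 2, 1]
    return sum(w * xi for w, xi in zip(weights, x))

def apply_mined_rules(x):
    # Guided-search operator: repair a design to satisfy a mined rule.
    # Example rule (assumed): "if attribute 0 is set, attribute 2 must be set".
    if x[0] == 1:
        x[2] = 1
    return x

def guided_ga(pop_size=30, gens=40):
    pop = [apply_mined_rules([random.randint(0, 1) for _ in range(N_ATTR)])
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=affect_score, reverse=True)
        survivors = pop[:pop_size // 2]            # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_ATTR)      # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(N_ATTR)           # point mutation
            child[i] ^= 1
            children.append(apply_mined_rules(child))  # guide via mined rule
        pop = survivors + children
    return max(pop, key=affect_score)

best = guided_ga()
print(best, affect_score(best))
```

Because every offspring passes through the rule-repair operator, the search never wastes evaluations on designs that violate the mined rule, which is the mechanism behind the faster convergence reported in the abstract.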

  2. Structural damage detection-oriented multi-type sensor placement with multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Lin, Jian-Fu; Xu, You-Lin; Law, Siu-Seong

    2018-05-01

A structural damage detection-oriented multi-type sensor placement method with multi-objective optimization is developed in this study. The multi-type response covariance sensitivity-based damage detection method is first introduced. Two objective functions for optimal sensor placement are then introduced in terms of the response covariance sensitivity and the response independence. The multi-objective optimization problem is formed by using the two objective functions, and the non-dominated sorting genetic algorithm (NSGA)-II is adopted to find the solution for the optimal multi-type sensor placement to achieve the best structural damage detection. The proposed method is finally applied to a nine-bay three-dimensional frame structure. Numerical results show that the optimal multi-type sensor placement determined by the proposed method can avoid redundant sensors and provide satisfactory results for structural damage detection. The restriction on the number of each type of sensor in the optimization reduces the searching space and makes the proposed method more effective. Moreover, how to select the most suitable sensor placement from the Pareto solutions via the utility function and the knee point method is demonstrated in the case study.
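The core step of NSGA-II, the non-dominated sort that ranks candidate sensor layouts into successive Pareto fronts, can be sketched as follows. The objective values are toy numbers standing in for (negated) covariance sensitivity and response independence, both treated as minimization objectives.

```python
def non_dominated_sort(costs):
    """Rank solutions into Pareto fronts (all objectives minimized),
    as in NSGA-II's first step. Returns a list of fronts of indices."""
    n = len(costs)
    dominated_by = [set() for _ in range(n)]   # indices that i dominates
    for i in range(n):
        for j in range(n):
            if i != j and all(a <= b for a, b in zip(costs[i], costs[j])) \
                    and costs[i] != costs[j]:
                dominated_by[i].add(j)
    dom_count = [sum(1 for j in range(n) if i in dominated_by[j])
                 for i in range(n)]            # how many dominate i
    fronts, current = [], [i for i in range(n) if dom_count[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts

# Toy layouts scored on two minimization objectives.
costs = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (4.0, 4.0)]
print(non_dominated_sort(costs))
```

The first front contains the mutually non-dominated layouts from which a final placement would then be chosen, e.g. by the knee-point method mentioned in the abstract.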

  3. Achieving Optimal Self-Adaptivity for Dynamic Tuning of Organic Semiconductors through Resonance Engineering.

    PubMed

    Tao, Ye; Xu, Lijia; Zhang, Zhen; Chen, Runfeng; Li, Huanhuan; Xu, Hui; Zheng, Chao; Huang, Wei

    2016-08-03

    Current static-state explorations of organic semiconductors for optimal material properties and device performance are hindered by limited insights into the dynamically changed molecular states and charge transport and energy transfer processes upon device operation. Here, we propose a simple yet successful strategy, resonance variation-based dynamic adaptation (RVDA), to realize optimized self-adaptive properties in donor-resonance-acceptor molecules by engineering the resonance variation for dynamic tuning of organic semiconductors. Organic light-emitting diodes hosted by these RVDA materials exhibit remarkably high performance, with external quantum efficiencies up to 21.7% and favorable device stability. Our approach, which supports simultaneous realization of dynamically adapted and selectively enhanced properties via resonance engineering, illustrates a feasible design map for the preparation of smart organic semiconductors capable of dynamic structure and property modulations, promoting the studies of organic electronics from static to dynamic.

  4. Sootblowing optimization for improved boiler performance

    DOEpatents

    James, John Robert; McDermott, John; Piche, Stephen; Pickard, Fred; Parikh, Neel J.

    2012-12-25

    A sootblowing control system that uses predictive models to bridge the gap between sootblower operation and boiler performance goals. The system uses predictive modeling and heuristics (rules) associated with different zones in a boiler to determine an optimal sequence of sootblower operations and achieve boiler performance targets. The system performs the sootblower optimization while observing any operational constraints placed on the sootblowers.

  5. Sootblowing optimization for improved boiler performance

    DOEpatents

    James, John Robert; McDermott, John; Piche, Stephen; Pickard, Fred; Parikh, Neel J

    2013-07-30

    A sootblowing control system that uses predictive models to bridge the gap between sootblower operation and boiler performance goals. The system uses predictive modeling and heuristics (rules) associated with different zones in a boiler to determine an optimal sequence of sootblower operations and achieve boiler performance targets. The system performs the sootblower optimization while observing any operational constraints placed on the sootblowers.

  6. Integrated structure/control law design by multilevel optimization

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.; Schmidt, David K.

    1989-01-01

    A new approach to integrated structure/control law design based on multilevel optimization is presented. This new approach is applicable to aircraft and spacecraft and allows for the independent design of the structure and control law. Integration of the designs is achieved through use of an upper level coordination problem formulation within the multilevel optimization framework. The method requires the use of structure and control law design sensitivity information. A general multilevel structure/control law design problem formulation is given, and the use of Linear Quadratic Gaussian (LQG) control law design and design sensitivity methods within the formulation is illustrated. Results of three simple integrated structure/control law design examples are presented. These results show the capability of structure and control law design tradeoffs to improve controlled system performance within the multilevel approach.

  7. Optimal Clinical Doses of Faropenem, Linezolid, and Moxifloxacin in Children With Disseminated Tuberculosis: Goldilocks

    PubMed Central

    Srivastava, Shashikant; Deshpande, Devyani; Pasipanodya, Jotam; Nuermberger, Eric; Swaminathan, Soumya; Gumbo, Tawanda

    2016-01-01

Background. When treated with the same antibiotic dose, children achieve different 0- to 24-hour areas under the concentration-time curve (AUC0–24) because of maturation and between-child physiological variability in drug clearance. Children are also infected by Mycobacterium tuberculosis isolates with different antibiotic minimum inhibitory concentrations (MICs). Thus, each child will achieve different AUC0–24/MIC ratios when treated with the same dose. Methods. We used 10 000-subject Monte Carlo experiments to identify the oral doses of linezolid, moxifloxacin, and faropenem that would achieve optimal target exposures associated with optimal efficacy in children with disseminated tuberculosis. The linezolid and moxifloxacin exposure targets were AUC0–24/MIC ratios of 62 and 122, and a faropenem percentage of time above MIC >60%, in combination therapy. A linezolid AUC0–24 of 93.4 mg × hour/L was the target threshold for toxicity. Population pharmacokinetic parameters of each drug and between-child variability, as well as MIC distribution, were used, and the cumulative fraction of response (CFR) was calculated. We also considered drug penetration indices into meninges, bone, and peritoneum. Results. The linezolid dose of 15 mg/kg in full-term neonates and infants aged up to 3 months and 10 mg/kg in toddlers, administered once daily, achieved CFR ≥ 90%, with <10% achieving the linezolid AUC0–24 associated with toxicity. The moxifloxacin dose of 25 mg/kg/day achieved a CFR > 90% in infants, but the optimal dose was 20 mg/kg/day in older children. The optimal faropenem medoxomil dosage was 30 mg/kg 3–4 times daily. Conclusions. The regimen and doses of linezolid, moxifloxacin, and faropenem identified are proposed to be adequate for all disseminated tuberculosis syndromes, whether drug-resistant or -susceptible. PMID:27742641
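The cumulative fraction of response (CFR) calculation described in the Methods can be sketched as a Monte Carlo loop: simulate between-child pharmacokinetic variability, draw an MIC from a distribution, and count the fraction of children whose AUC0–24/MIC meets the efficacy target. The toy one-compartment PK relation, the MIC distribution, and all variability parameters below are assumptions, not the paper's fitted population values; only the AUC/MIC target of 62 is from the text.

```python
import random

random.seed(2)

def cumulative_fraction_of_response(dose_mg_per_kg, n_children=10000):
    """Monte Carlo CFR estimate: fraction of simulated children whose
    AUC0-24/MIC ratio meets the efficacy target."""
    target_ratio = 62.0                       # linezolid AUC/MIC target (from text)
    mic_values = [0.25, 0.5, 1.0, 2.0]        # assumed MIC distribution (mg/L)
    mic_weights = [0.2, 0.4, 0.3, 0.1]
    hits = 0
    for _ in range(n_children):
        clearance = random.lognormvariate(0.0, 0.4)   # between-child variability
        auc = dose_mg_per_kg * 10.0 / clearance       # toy PK: AUC ~ dose / CL
        mic = random.choices(mic_values, weights=mic_weights)[0]
        if auc / mic >= target_ratio:
            hits += 1
    return hits / n_children

for dose in (5, 10, 15):
    print(f"dose {dose:2d} mg/kg  CFR = {cumulative_fraction_of_response(dose):.2f}")
```

The dose meeting CFR ≥ 90% would be the candidate optimal dose, mirroring how the paper selects doses from its (much richer) population PK models.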

  8. Optimal sampling and quantization of synthetic aperture radar signals

    NASA Technical Reports Server (NTRS)

    Wu, C.

    1978-01-01

    Some theoretical and experimental results on optimal sampling and quantization of synthetic aperture radar (SAR) signals are presented. It includes a description of a derived theoretical relationship between the pixel signal to noise ratio of processed SAR images and the number of quantization bits per sampled signal, assuming homogeneous extended targets. With this relationship known, a solution may be realized for the problem of optimal allocation of a fixed data bit-volume (for specified surface area and resolution criterion) between the number of samples and the number of bits per sample. The results indicate that to achieve the best possible image quality for a fixed bit rate and a given resolution criterion, one should quantize individual samples coarsely and thereby maximize the number of multiple looks. The theoretical results are then compared with simulation results obtained by processing aircraft SAR data.
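The sample/bit allocation trade-off can be illustrated numerically: under a fixed bit budget, speckle noise falls with the number of looks while quantization noise falls exponentially with bits per sample, so the total noise is minimized at a coarse quantization. The noise model and constants below are a toy assumption, not the paper's derived relationship.

```python
# Fixed bit budget B per unit area; choosing b bits/sample leaves
# N = B // b samples (looks) for multilook averaging.
B = 64

def image_noise(b):
    looks = B // b
    speckle = 1.0 / looks          # multilook averaging reduces speckle variance
    quantization = 4.0 ** (-b)     # quantization noise falls as 2^(-2b)
    return speckle + quantization

best_b = min(range(1, 9), key=image_noise)
print(best_b, round(image_noise(best_b), 4))
```

Even in this crude model the optimum lands at few bits per sample, matching the abstract's conclusion that coarse quantization with more looks gives the best image quality for a fixed bit rate.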

  9. Optimal Control Modification for Robust Adaptation of Singularly Perturbed Systems with Slow Actuators

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Ishihara, Abraham; Stepanyan, Vahram; Boskovic, Jovan

    2009-01-01

Recently a new optimal control modification has been introduced that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations seen with standard model-reference adaptive control. This modification is based on an optimal control formulation to minimize the L2 norm of the tracking error. The optimal control modification adaptive law results in a stable adaptation in the presence of a large adaptive gain. This study examines the optimal control modification adaptive law in the context of a system with a time scale separation resulting from a fast plant with a slow actuator. A singular perturbation analysis is performed to derive a modification to the adaptive law by transforming the original system into a reduced-order system in slow time. The model matching conditions in the transformed time coordinate result in an increase in the feedback gain and a modification of the adaptive law.

  10. Optimized quantum sensing with a single electron spin using real-time adaptive measurements.

    PubMed

    Bonato, C; Blok, M S; Dinani, H T; Berry, D W; Markham, M L; Twitchen, D J; Hanson, R

    2016-03-01

Quantum sensors based on single solid-state spins promise a unique combination of sensitivity and spatial resolution. The key challenge in sensing is to achieve minimum estimation uncertainty within a given time and with high dynamic range. Adaptive strategies have been proposed to achieve optimal performance, but their implementation in solid-state systems has been hindered by the demanding experimental requirements. Here, we realize adaptive d.c. sensing by combining single-shot readout of an electron spin in diamond with fast feedback. By adapting the spin readout basis in real time based on previous outcomes, we demonstrate a sensitivity in Ramsey interferometry surpassing the standard measurement limit. Furthermore, we find by simulations and experiments that adaptive protocols offer a distinctive advantage over the best known non-adaptive protocols when overhead and limited estimation time are taken into account. Using an optimized adaptive protocol we achieve a magnetic field sensitivity of 6.1 ± 1.7 nT Hz^(-1/2) over a wide range of 1.78 mT. These results open up a new class of experiments for solid-state sensors in which real-time knowledge of the measurement history is exploited to obtain optimal performance.

  11. Gear optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.; Chen, Xiang; Zhang, Ning-Tian

    1988-01-01

The use of formal numerical optimization methods for the design of gears is investigated. To achieve this, computer codes were developed for the analysis of spur gears and spiral bevel gears. These codes calculate the life, dynamic load, bending strength, surface durability, gear weight and size, and various geometric parameters. It is necessary to calculate all such important responses because they all represent competing requirements in the design process. The codes developed here were written in subroutine form and coupled to the COPES/ADS general purpose optimization program. This code allows the user to define the optimization problem at the time of program execution. Typical design variables include face width, number of teeth and diametral pitch. The user is free to choose any calculated response as the design objective to minimize or maximize and may impose lower and upper bounds on any calculated responses. Typical examples include life maximization with limits on dynamic load, stress, weight, etc. or minimization of weight subject to limits on life, dynamic load, etc. The research codes were written in modular form for easy expansion, so that they could be combined to create a multiple reduction optimization capability in the future.

  12. Design and Optimization Method of a Two-Disk Rotor System

    NASA Astrophysics Data System (ADS)

    Huang, Jingjing; Zheng, Longxi; Mei, Qing

    2016-04-01

An integrated analytical method based on the multidisciplinary optimization software Isight and the general finite element software ANSYS was proposed in this paper. Firstly, a two-disk rotor system was established, and its modes, harmonic response, and transient response under acceleration conditions were analyzed with ANSYS. The dynamic characteristics of the two-disk rotor system were obtained. On this basis, the two-disk rotor model was integrated into the multidisciplinary design optimization software Isight. According to the design of experiments (DOE) and the dynamic characteristics, the optimization variables, optimization objectives, and constraints were determined. After that, the multi-objective design optimization of the transient process was carried out with three different global optimization algorithms: Evolutionary Optimization Algorithm, Multi-Island Genetic Algorithm, and Pointer Automatic Optimizer. The optimum position of the two-disk rotor system was obtained under the specified constraints. Meanwhile, the accuracy and the number of evaluations required by the different optimization algorithms were compared. The optimization results indicated that the rotor vibration reached its minimum value and that design efficiency and quality were improved by multidisciplinary design optimization while meeting the design requirements, which provides a reference for improving the design efficiency and reliability of aero-engine rotors.

  13. Optimization of personalized therapies for anticancer treatment.

    PubMed

    Vazquez, Alexei

    2013-04-12

As of today, there are hundreds of targeted therapies for the treatment of cancer, many of which have companion biomarkers that are in use to inform treatment decisions. If we consider this whole arsenal of targeted therapies as a treatment option for every patient, very soon we will reach a scenario where each patient is positive for several markers, suggesting treatment with several targeted therapies. Given the documented side effects of anticancer drugs, it is clear that such a strategy is unfeasible. Here, we propose a strategy that optimizes the design of combinatorial therapies to achieve the best response rates with minimal toxicity. In this methodology markers are assigned to drugs such that we achieve a high overall response rate while using personalized combinations of minimal size. We tested this methodology in an in silico cancer patient cohort, constructed from in vitro data for 714 cell lines and 138 drugs reported by the Sanger Institute. Our analysis indicates that, even in the context of personalized medicine, combinations of three or more drugs are required to achieve high response rates. Furthermore, patient-to-patient variations in pharmacokinetics have a significant impact on the overall response rate. A 10-fold increase in the pharmacokinetic variations resulted in a significant drop in the overall response rate. The design of optimal combinatorial therapy for anticancer treatment requires a transition from the one-drug/one-biomarker approach to global strategies that simultaneously assign markers to a catalog of drugs. The methodology reported here provides a framework to achieve this transition.
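The marker-to-drug assignment problem has the flavor of set cover: each patient responds to the drugs matching their biomarkers, and the goal is a small drug catalog covering as many patients as possible. A greedy sketch of this idea follows; the greedy heuristic and the patient data are illustrative assumptions, not the paper's actual optimization method.

```python
def design_combination(patients, max_drugs=3):
    """Greedy sketch: each patient is the set of drugs they respond to
    (implied by their biomarkers); repeatedly add the drug that rescues
    the most still-uncovered patients, up to a combination-size cap."""
    chosen, uncovered = [], set(range(len(patients)))
    while uncovered and len(chosen) < max_drugs:
        candidates = sorted({d for i in uncovered for d in patients[i]})
        best = max(candidates,
                   key=lambda d: sum(1 for i in uncovered if d in patients[i]))
        chosen.append(best)
        uncovered -= {i for i in uncovered if best in patients[i]}
    response_rate = 1 - len(uncovered) / len(patients)
    return chosen, response_rate

# Invented toy cohort: each set lists the drugs a patient responds to.
patients = [{"A"}, {"A", "B"}, {"B"}, {"C"}, {"A", "C"}, {"B", "C"}, {"D"}]
combo, rate = design_combination(patients)
print(combo, round(rate, 2))
```

Even in this tiny cohort no single drug covers more than three of seven patients, while a three-drug catalog covers six, echoing the abstract's finding that combinations of three or more drugs are needed for high response rates.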

  14. Optimization of Straight Cylindrical Turning Using Artificial Bee Colony (ABC) Algorithm

    NASA Astrophysics Data System (ADS)

    Prasanth, Rajanampalli Seshasai Srinivasa; Hans Raj, Kandikonda

    2017-04-01

Artificial bee colony (ABC) algorithm, which mimics the intelligent foraging behavior of honey bees, is increasingly gaining acceptance in the field of process optimization, as it is capable of handling nonlinearity, complexity and uncertainty. Straight cylindrical turning is a complex and nonlinear machining process which involves the selection of appropriate cutting parameters that affect the quality of the workpiece. This paper presents the estimation of optimal cutting parameters of the straight cylindrical turning process using the ABC algorithm. The ABC algorithm is first tested on four benchmark problems of numerical optimization and its performance is compared with the genetic algorithm (GA) and the ant colony optimization (ACO) algorithm. Results indicate that the rate of convergence of the ABC algorithm is better than that of GA and ACO. Then, the ABC algorithm is used to predict optimal cutting parameters such as cutting speed, feed rate, depth of cut and tool nose radius to achieve a good surface finish. Results indicate that the ABC algorithm estimated a comparable surface finish when compared with a real-coded genetic algorithm and a differential evolution algorithm.
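The ABC algorithm's three phases (employed bees, onlookers, scouts) can be sketched compactly on a standard benchmark. This is a generic minimal implementation on the sphere function, not the paper's tuned setup for the turning-parameter problem; all parameters are illustrative.

```python
import random

random.seed(3)

def sphere(x):                      # benchmark objective to minimize
    return sum(v * v for v in x)

def abc_minimize(f, dim=4, lo=-5.0, hi=5.0, n_food=10, limit=20, iters=200):
    """Compact sketch of the Artificial Bee Colony algorithm."""
    foods = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    trials = [0] * n_food

    def neighbor(i):
        # Perturb one dimension toward/away from a random other food source.
        k = random.choice([j for j in range(n_food) if j != i])
        d = random.randrange(dim)
        x = foods[i][:]
        x[d] += random.uniform(-1, 1) * (foods[i][d] - foods[k][d])
        x[d] = min(hi, max(lo, x[d]))
        return x

    def try_improve(i):
        cand = neighbor(i)
        if f(cand) < f(foods[i]):
            foods[i], trials[i] = cand, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                  # employed-bee phase
            try_improve(i)
        fits = [1.0 / (1.0 + f(x)) for x in foods]
        total = sum(fits)
        for _ in range(n_food):                  # onlooker phase (roulette wheel)
            r, acc = random.uniform(0, total), 0.0
            for i, ft in enumerate(fits):
                acc += ft
                if acc >= r:
                    break
            try_improve(i)
        for i in range(n_food):                  # scout phase: abandon stale sources
            if trials[i] > limit:
                foods[i] = [random.uniform(lo, hi) for _ in range(dim)]
                trials[i] = 0
    return min(foods, key=f)

best = abc_minimize(sphere)
print(round(sphere(best), 4))
```

The neighbor step's magnitude shrinks as food sources converge, which is the mechanism behind the fast convergence the abstract reports relative to GA and ACO.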

  15. Engineering applications of heuristic multilevel optimization methods

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois M.

    1988-01-01

    Some engineering applications of heuristic multilevel optimization methods are presented and the discussion focuses on the dependency matrix that indicates the relationship between problem functions and variables. Coordination of the subproblem optimizations is shown to be typically achieved through the use of exact or approximate sensitivity analysis. Areas for further development are identified.

  16. Engineering applications of heuristic multilevel optimization methods

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois M.

    1989-01-01

    Some engineering applications of heuristic multilevel optimization methods are presented and the discussion focuses on the dependency matrix that indicates the relationship between problem functions and variables. Coordination of the subproblem optimizations is shown to be typically achieved through the use of exact or approximate sensitivity analysis. Areas for further development are identified.

  17. Component Prioritization Schema for Achieving Maximum Time and Cost Benefits from Software Testing

    NASA Astrophysics Data System (ADS)

    Srivastava, Praveen Ranjan; Pareek, Deepak

Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Deciding when to end software testing is a crucial aspect of any software development project. A premature release will involve risks like undetected bugs, the cost of fixing faults later, and discontented customers. Any software organization would want to achieve maximum possible benefits from software testing with minimum resources. Testing time and cost need to be optimized for achieving a competitive edge in the market. In this paper, we propose a schema, called the Component Prioritization Schema (CPS), to achieve an effective and uniform prioritization of the software components. This schema serves as an extension to the Non-Homogeneous Poisson Process based Cumulative Priority Model. We also introduce an approach for handling time-intensive versus cost-intensive projects.

  18. Concurrent micromechanical tailoring and fabrication process optimization for metal-matrix composites

    NASA Technical Reports Server (NTRS)

    Morel, M.; Saravanos, D. A.; Chamis, Christos C.

    1990-01-01

    A method is presented to minimize the residual matrix stresses in metal matrix composites. Fabrication parameters such as temperature and consolidation pressure are optimized concurrently with the characteristics (i.e., modulus, coefficient of thermal expansion, strength, and interphase thickness) of a fiber-matrix interphase. By including the interphase properties in the fabrication process, lower residual stresses are achievable. Results for an ultra-high modulus graphite (P100)/copper composite show a reduction of 21 percent for the maximum matrix microstress when optimizing the fabrication process alone. Concurrent optimization of the fabrication process and interphase properties show a 41 percent decrease in the maximum microstress. Therefore, this optimization method demonstrates the capability of reducing residual microstresses by altering the temperature and consolidation pressure histories and tailoring the interphase properties for an improved composite material. In addition, the results indicate that the consolidation pressures are the most important fabrication parameters, and the coefficient of thermal expansion is the most critical interphase property.

  19. Optimal leveling of flow over one-dimensional topography by Marangoni stresses

    NASA Astrophysics Data System (ADS)

    Gramlich, C. M.; Homsy, G. M.; Kalliadasis, Serafim

    2001-11-01

    A thin viscous film flowing over a step down in topography exhibits a capillary ridge near the step, which may be undesirable in applications. This paper investigates optimal leveling of the ridge by means of a Marangoni stress such as might be produced by a localized heater creating temperature variations at the film surface. Lubrication theory results in a differential equation for the free surface, which can be solved numerically for any given topography and temperature profile. Leveling the ridge is then formulated as an optimization problem to minimize the maximum free-surface height by varying the heater strength, position, and width. Optimized heaters with 'top-hat' or parabolic temperature profiles replace the original ridge with two smaller ridges of equal size, achieving leveling of better than 50%. An optimized asymmetric n-step temperature distribution results in (n+1) ridges and reduces the variation in surface height by a factor of better than 1/(n+1).
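The optimization formulated above, minimizing the peak free-surface deviation over heater strength, position, and width, can be sketched with a toy model: an assumed asymmetric ridge profile, a Gaussian stand-in for the heater-induced Marangoni correction, and brute-force search over the three heater parameters. None of the profiles or constants come from the paper's lubrication-theory solution.

```python
import math

def ridge(x):
    # Toy asymmetric capillary ridge over a topographic step (illustrative).
    return math.exp(-x * x) * (1.0 + 0.5 * x)

def residual_peak(strength, position, width, xs):
    # Peak deviation from flat after subtracting the heater's correction,
    # modeled here (as an assumption) as a Gaussian depression.
    return max(abs(ridge(x) - strength * math.exp(-((x - position) / width) ** 2))
               for x in xs)

xs = [i / 10.0 for i in range(-40, 41)]
baseline = max(abs(ridge(x)) for x in xs)

# Brute-force search over heater strength, position, and width.
best = min(
    ((s / 10.0, p / 10.0, w / 10.0)
     for s in range(2, 13) for p in range(-10, 11) for w in range(5, 21)),
    key=lambda params: residual_peak(*params, xs))

print("baseline peak:", round(baseline, 3),
      "optimized peak:", round(residual_peak(*best, xs), 3))
```

As in the paper, the optimized heater does not flatten the surface completely; it trades the single ridge for smaller residual bumps, here achieving well over the 50% leveling quoted in the abstract for this toy profile.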

  20. Electrochemical model based charge optimization for lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Pramanik, Sourav; Anwar, Sohel

    2016-05-01

In this paper, we propose the design of a novel optimal strategy for charging the lithium-ion battery based on an electrochemical battery model that is aimed at improved performance. A performance index that aims at minimizing the charging effort along with a minimum deviation from the rated maximum thresholds for cell temperature and charging current has been defined. The method proposed in this paper aims at achieving a faster charging rate while maintaining safe limits for various battery parameters. Safe operation of the battery is achieved by including the battery bulk temperature as a control component in the performance index, which is of critical importance for electric vehicles. Another important aspect of the performance objective proposed here is the efficiency of the algorithm, which would allow higher charging rates without compromising the internal electrochemical kinetics of the battery, thereby preventing abusive conditions and improving long-term durability. A more realistic model, based on battery electrochemistry, has been used for the design of the optimal algorithm as opposed to the conventional equivalent circuit models. To solve the optimization problem, Pontryagin's principle has been used, which is very effective for constrained optimization problems with both state and input constraints. Simulation results show that the proposed optimal charging algorithm is capable of shortening the charging time of a lithium-ion cell while maintaining the temperature constraint when compared with standard constant-current charging. The designed method also maintains the internal states within limits that can avoid abusive operating conditions.

  1. A Hybrid Interval–Robust Optimization Model for Water Quality Management

    PubMed Central

    Xu, Jieyu; Li, Yongping; Huang, Guohe

    2013-01-01

In water quality management problems, uncertainties may exist in many system components and pollution-related processes (i.e., random nature of hydrodynamic conditions, variability in physicochemical processes, dynamic interactions between pollutant loading and receiving water bodies, and indeterminacy of available water and treated wastewater). These complexities lead to difficulties in formulating and solving the resulting nonlinear optimization problems. In this study, a hybrid interval–robust optimization (HIRO) method was developed through coupling stochastic robust optimization and interval linear programming. HIRO can effectively reflect the complex system features under uncertainty, where implications of water quality/quantity restrictions for achieving regional economic development objectives are studied. By delimiting the uncertain decision space through dimensional enlargement of the original chemical oxygen demand (COD) discharge constraints, HIRO enhances the robustness of the optimization processes and resulting solutions. This method was applied to planning of industry development in association with river-water pollution concern in New Binhai District of Tianjin, China. Results demonstrated that the proposed optimization model can effectively communicate uncertainties into the optimization process and generate a spectrum of potential inexact solutions supporting local decision makers in managing benefit-effective water quality management schemes. HIRO is helpful for analysis of policy scenarios related to different levels of economic penalties, while also providing insight into the tradeoff between system benefits and environmental requirements. PMID:23922495

  2. A Hybrid Interval-Robust Optimization Model for Water Quality Management.

    PubMed

    Xu, Jieyu; Li, Yongping; Huang, Guohe

    2013-05-01

In water quality management problems, uncertainties may exist in many system components and pollution-related processes (i.e., random nature of hydrodynamic conditions, variability in physicochemical processes, dynamic interactions between pollutant loading and receiving water bodies, and indeterminacy of available water and treated wastewater). These complexities lead to difficulties in formulating and solving the resulting nonlinear optimization problems. In this study, a hybrid interval-robust optimization (HIRO) method was developed through coupling stochastic robust optimization and interval linear programming. HIRO can effectively reflect the complex system features under uncertainty, where implications of water quality/quantity restrictions for achieving regional economic development objectives are studied. By delimiting the uncertain decision space through dimensional enlargement of the original chemical oxygen demand (COD) discharge constraints, HIRO enhances the robustness of the optimization processes and resulting solutions. This method was applied to planning of industry development in association with river-water pollution concern in New Binhai District of Tianjin, China. Results demonstrated that the proposed optimization model can effectively communicate uncertainties into the optimization process and generate a spectrum of potential inexact solutions supporting local decision makers in managing benefit-effective water quality management schemes. HIRO is helpful for analysis of policy scenarios related to different levels of economic penalties, while also providing insight into the tradeoff between system benefits and environmental requirements.

  3. Optimal shapes of surface-slip driven self-propelled swimmers

    NASA Astrophysics Data System (ADS)

    Vilfan, Andrej; Osterman, Natan

    2012-11-01

    If one defines the swimming efficiency of a microorganism as the power needed to move it against viscous drag, divided by the total dissipated power, one usually finds values no better than 1%. In order to find out how close this is to the theoretically achievable optimum, we first introduced a new efficiency measure at the level of a single cilium or an infinite ciliated surface and numerically determined the optimal beating patterns according to this criterion. In the following we also determined the optimal shape of a swimmer such that the total power is minimal while maintaining the volume and the swimming speed. The resulting shape depends strongly on the allowed maximum curvature. When sufficient curvature is allowed the optimal swimmer exhibits two protrusions along the symmetry axis. The results show that prolate swimmers such as Paramecium have an efficiency that is ~ 20% higher than that of a spherical body, whereas some microorganisms have shapes that allow even higher efficiency.

  4. Achievement of optimal medical therapy goals for U.S. adults with coronary artery disease: results from the REGARDS Study (REasons for Geographic And Racial Differences in Stroke).

    PubMed

    Brown, Todd M; Voeks, Jenifer H; Bittner, Vera; Brenner, David A; Cushman, Mary; Goff, David C; Glasser, Stephen; Muntner, Paul; Tabereaux, Paul B; Safford, Monika M

    2014-04-29

    In a nonclinical trial setting, we sought to determine the proportion of individuals with coronary artery disease (CAD) with optimal risk factor levels based on the COURAGE (Clinical Outcomes Utilizing Revascularization and Aggressive DruG Evaluation) trial. In the COURAGE trial, the addition of percutaneous coronary intervention (PCI) to optimal medical therapy did not reduce the risk of death or myocardial infarction in stable CAD patients but resulted in more revascularization procedures. The REGARDS (REasons for Geographic And Racial Differences in Stroke) study is a national prospective cohort study of 30,239 African-American and white community-dwelling individuals older than 45 years of age who enrolled in 2003 through 2007. We calculated the proportion of 3,167 participants with self-reported CAD meeting 7 risk factor goals based on the COURAGE trial: 1) aspirin use; 2) systolic blood pressure <130 mm Hg and diastolic blood pressure <85 mm Hg (<80 mm Hg if diabetic); 3) low-density lipoprotein cholesterol <85 mg/dl, high-density lipoprotein cholesterol >40 mg/dl, and triglycerides <150 mg/dl; 4) fasting glucose <126 mg/dl; 5) nonsmoking status; 6) body mass index <25 kg/m²; and 7) exercise ≥4 days per week. The mean age of participants was 69 ± 9 years; 33% were African American and 35% were female. Overall, the median number of goals met was 4. Less than one-fourth met ≥5 of the 7 goals, and 16% met all 3 goals for aspirin, blood pressure, and low-density lipoprotein cholesterol. Older age, white race, higher income, more education, and higher physical functioning were independently associated with meeting more goals. There is substantial room for improvement in risk factor reduction among U.S. individuals with CAD. Copyright © 2014 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  5. Broadband All-angle Negative Refraction by Optimized Phononic Crystals.

    PubMed

    Li, Yang Fan; Meng, Fei; Zhou, Shiwei; Lu, Ming-Hui; Huang, Xiaodong

    2017-08-07

    All-angle negative refraction (AANR) of phononic crystals and its frequency range depend on the mechanical properties of the constituent materials and their spatial distribution. So far, no theoretical approach has been able to determine the maximum operating frequency range of AANR. In this paper, we present a numerical approach for designing a two-dimensional phononic crystal with broadband AANR without negative index. Through analyzing the mechanism of AANR, a topology optimization problem aiming at broadband AANR is established and solved by the bi-directional evolutionary structural optimization method. The optimal steel/air phononic crystal exhibits a record AANR range over 20%, and its refractive properties and focusing effects are further investigated. The results demonstrate the multifunctionality of a flat phononic slab, including a superlensing effect near the upper AANR frequencies and self-collimation at the lower AANR frequencies.

  6. Optimal design of structures for earthquake loads by a hybrid RBF-BPSO method

    NASA Astrophysics Data System (ADS)

    Salajegheh, Eysa; Gholizadeh, Saeed; Khatibinia, Mohsen

    2008-03-01

    The optimal seismic design of structures requires that time history analyses (THA) be carried out repeatedly. This makes the optimal design process inefficient, in particular, if an evolutionary algorithm is used. To reduce the overall time required for structural optimization, two artificial intelligence strategies are employed. In the first strategy, radial basis function (RBF) neural networks are used to predict the time history responses of structures in the optimization flow. In the second strategy, a binary particle swarm optimization (BPSO) is used to find the optimum design. Combining the RBF and BPSO, a hybrid RBF-BPSO optimization method is proposed in this paper, which achieves fast optimization with high computational performance. Two examples are presented and compared to determine the optimal weight of structures under earthquake loadings using both exact and approximate analyses. The numerical results demonstrate the computational advantages and effectiveness of the proposed hybrid RBF-BPSO optimization method for the seismic design of structures.
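
    The binary particle swarm step of such a hybrid can be sketched in a few lines. The snippet below is an illustrative toy, not the authors' implementation: a textbook BPSO (sigmoid mapping of velocity to bit-flip probability) minimizing a cheap stand-in fitness; in the paper's setting each fitness call would instead be the RBF-predicted structural response.

```python
import math
import random

def bpso(fitness, n_bits, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal binary PSO: velocities are real-valued, positions are bit
    vectors resampled through a sigmoid of the velocity."""
    rng = random.Random(seed)
    X = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    V = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_f = [fitness(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                v = (w * V[i][d]
                     + c1 * rng.random() * (pbest[i][d] - X[i][d])
                     + c2 * rng.random() * (gbest[d] - X[i][d]))
                V[i][d] = max(-6.0, min(6.0, v))  # clamp to avoid saturation
                # sigmoid maps velocity to a probability of the bit being 1
                X[i][d] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-V[i][d])) else 0
            f = fitness(X[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = X[i][:], f
    return gbest, gbest_f

# Toy stand-in for the expensive structural evaluation: match a target pattern.
target = [1, 0, 1, 1, 0, 0, 1, 0]
best, best_f = bpso(lambda x: sum(a != b for a, b in zip(x, target)), len(target))
```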

  7. Recent Results on "Approximations to Optimal Alarm Systems for Anomaly Detection"

    NASA Technical Reports Server (NTRS)

    Martin, Rodney Alexander

    2009-01-01

    An optimal alarm system and its approximations may use Kalman filtering for univariate linear dynamic systems driven by Gaussian noise to provide a layer of predictive capability. Predicted Kalman filter future process values and a fixed critical threshold can be used to construct a candidate level-crossing event over a predetermined prediction window. An optimal alarm system can be designed to elicit the fewest false alarms for a fixed detection probability in this particular scenario.
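
    The predictive layer described above can be illustrated with a scalar sketch. The code below is a hedged toy, not the described system: it propagates a Kalman-filtered Gaussian state forward through a linear model and evaluates per-step threshold-exceedance probabilities over the prediction window (a common approximation to the full level-crossing event probability); an alarm would then be raised whenever any of these exceeds a chosen design probability.

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def alarm_probability(x_hat, P, a, q, threshold, horizon):
    """For the scalar linear-Gaussian system x_{k+1} = a*x_k + w, w ~ N(0, q),
    and filtered state estimate (x_hat, P), return the per-step probabilities
    P(X_{k+d} > threshold) for d = 1..horizon."""
    probs = []
    m, v = x_hat, P
    for _ in range(horizon):
        m = a * m              # predicted mean
        v = a * a * v + q      # predicted variance (process noise accumulates)
        probs.append(1.0 - phi((threshold - m) / math.sqrt(v)))
    return probs

# Illustrative parameters (assumed, not from the report).
probs = alarm_probability(x_hat=0.0, P=1.0, a=0.9, q=0.1, threshold=2.0, horizon=3)
```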

  8. On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization where the objective function evaluations are computationally expensive is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. These approaches are implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.
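
    For reference, classic DE/rand/1/bin (Storn and Price) is compact enough to sketch in full. The stand-in objective below is a cheap sphere function; in the paper's setting each evaluation would instead be a Navier-Stokes solve, which is exactly why reducing the evaluation count matters.

```python
import random

def differential_evolution(f, bounds, np_=20, F=0.8, CR=0.9, gens=100, seed=0):
    """Textbook DE/rand/1/bin: mutation x_a + F*(x_b - x_c), binomial
    crossover with rate CR, greedy one-to-one replacement."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to the box constraints
                else:
                    v = pop[i][j]
                trial.append(v)
            fc = f(trial)
            if fc <= cost[i]:
                pop[i], cost[i] = trial, fc
    best = min(range(np_), key=lambda i: cost[i])
    return pop[best], cost[best]

# Cheap analytic stand-in for the expensive aerodynamic objective.
sphere = lambda x: sum(v * v for v in x)
x_best, f_best = differential_evolution(sphere, [(-5.0, 5.0)] * 3)
```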

  9. Time-optimal excitation of maximum quantum coherence: Physical limits and pulse sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Köcher, S. S.; Institute of Energy and Climate Research; Heydenreich, T.

    Here we study the optimum efficiency of the excitation of maximum quantum (MaxQ) coherence using analytical and numerical methods based on optimal control theory. The theoretical limit of the achievable MaxQ amplitude and the minimum time to achieve this limit are explored for a set of model systems consisting of up to five coupled spins. In addition to arbitrary pulse shapes, two simple pulse sequence families of practical interest are considered in the optimizations. Compared to conventional approaches, substantial gains were found both in terms of the achieved MaxQ amplitude and in pulse sequence durations. For a model system, theoretically predicted gains of a factor of three compared to the conventional pulse sequence were experimentally demonstrated. Motivated by the numerical results, two novel analytical transfer schemes were also found: compared to conventional approaches based on non-selective pulses and delays, double-quantum coherence in two-spin systems can be created twice as fast using isotropic mixing and hard spin-selective pulses. It is also proved that in a chain of three weakly coupled spins with the same coupling constants, triple-quantum coherence can be created in a time-optimal fashion using so-called geodesic pulses.

  10. High-risk population health management--achieving improved patient outcomes and near-term financial results.

    PubMed

    Lynch, J P; Forman, S A; Graff, S; Gunby, M C

    2000-07-01

    A managed care organization sought to achieve efficiencies in care delivery and cost savings by anticipating and better caring for its frail and least stable members. The design was a time-sequence case study of a program intervention across an entire managed care population in its first year compared with the prior baseline year. Key attributes of the intervention included predictive registries of at-risk members based on existing data, a relentless focus on the high-risk group, an integrated clinical and psychosocial approach to assessments and care planning, a reengineered care management process, secured Internet applications enabling rapid implementation and broad connectivity, and population-based outcome metrics derived from widely used measures of resource utilization and functional status. Concentrating on the highest-risk group, which averaged just 1.1% prevalence in the total membership, yielded bottom-line results. When the year before program implementation (July 1997 through June 1998) was compared with the subsequent year, the total population's annualized commercial admission rate was reduced by 5.3%, and seniors' by 3.0%. A claims-paid analysis exclusively of the highest-risk group revealed that their efficiencies and savings overwhelmingly contributed to the membership-wide effect. This subgroup's costs dropped 35.7% from preprogram levels of $2590 per member per month (excluding pharmaceuticals). During the same time, patient-derived cross-sectional functional status rose 12.5%. A sharply focused, Internet-deployed case management strategy achieved economic and functional-status results on a population basis and produced systemwide savings in its first year of implementation.

  11. Hybrid surrogate-model-based multi-fidelity efficient global optimization applied to helicopter blade design

    NASA Astrophysics Data System (ADS)

    Ariyarit, Atthaphon; Sugiura, Masahiko; Tanabe, Yasutada; Kanazaki, Masahiro

    2018-06-01

    A multi-fidelity optimization technique by an efficient global optimization process using a hybrid surrogate model is investigated for solving real-world design problems. The model constructs the local deviation using the kriging method and the global model using a radial basis function. The expected improvement is computed to decide additional samples that can improve the model. The approach was first investigated by solving mathematical test problems. The results were compared with optimization results from an ordinary kriging method and a co-kriging method, and the proposed method produced the best solution. The proposed method was also applied to aerodynamic design optimization of helicopter blades to obtain the maximum blade efficiency. The optimal shape obtained by the proposed method achieved performance almost equivalent to that obtained by single-fidelity optimization based on high-fidelity evaluations. Comparing all three methods, the proposed method required the lowest total number of high-fidelity evaluation runs to obtain a converged solution.
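
    The expected-improvement infill criterion at the heart of efficient global optimization has a closed form, regardless of which surrogate (kriging, RBF, or the paper's hybrid) supplies the predictive mean and standard deviation. A minimal sketch for minimization:

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Closed-form EI for minimization: E[max(f_best - Y, 0)] with
    Y ~ N(mu, sigma^2). Large EI means the surrogate predicts either a
    good mean or high uncertainty, so that point is worth sampling."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (f_best - mu) * cdf + sigma * pdf

# At the incumbent value with unit predictive uncertainty, EI equals the
# standard normal density at zero.
ei_uncertain = expected_improvement(mu=0.0, sigma=1.0, f_best=0.0)
# A point the surrogate is certain is worse has zero expected improvement.
ei_certain_worse = expected_improvement(mu=1.0, sigma=0.0, f_best=0.5)
```

In a full EGO loop, EI would be maximized over candidate designs to pick the next expensive high-fidelity evaluation.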

  12. Information theoretic methods for image processing algorithm optimization

    NASA Astrophysics Data System (ADS)

    Prokushkin, Sergey F.; Galil, Erez

    2015-01-01

    Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and optimal results are barely achievable through manual calibration; thus an automated approach is a must. We discuss an information-theory-based metric for evaluating an algorithm's adaptive characteristics (an "adaptivity criterion"), using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of physical "information restoration" rather than perceived image quality, it helps to reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune to achieve better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).

  13. Automatic Summarization as a Combinatorial Optimization Problem

    NASA Astrophysics Data System (ADS)

    Hirao, Tsutomu; Suzuki, Jun; Isozaki, Hideki

    We derived the oracle summary with the highest ROUGE score that can be achieved by integrating sentence extraction with sentence compression from the reference abstract. Analysis of the oracle summaries revealed that summarization systems have to assign an appropriate compression rate to each sentence in the document. In accordance with this observation, this paper proposes a summarization method cast as combinatorial optimization: selecting the set of sentences that maximizes the sum of sentence scores from a pool consisting of sentences at various compression rates, subject to length constraints. The score of a sentence is defined by its compression rate, content words, and positional information. The parameters for the compression rates and positional information are optimized by minimizing the loss between the scores of oracles and those of candidates. Results obtained on the TSC-2 corpus showed that our method outperformed previous systems with statistical significance.
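
    The selection step described above is a multiple-choice knapsack: each sentence contributes at most one of its compressed variants, subject to a total length budget. A small dynamic-programming sketch, with illustrative lengths and scores rather than the paper's scoring function:

```python
def select_summary(candidates, budget):
    """Multiple-choice knapsack by DP over total length. `candidates[i]` is a
    list of (length, score) variants of sentence i (its compression rates);
    pick at most one variant per sentence with total length <= budget."""
    # best[length_used] = (total score, list of (sentence, variant) picks)
    best = {0: (0.0, [])}
    for i, variants in enumerate(candidates):
        new = dict(best)  # copying allows "skip sentence i" transitions
        for used, (score, picks) in best.items():
            for v, (length, s) in enumerate(variants):
                t = used + length
                if t <= budget and (t not in new or new[t][0] < score + s):
                    new[t] = (score + s, picks + [(i, v)])
        best = new
    return max(best.values(), key=lambda p: p[0])

# Two sentences, each with a full and a compressed variant (toy numbers).
candidates = [[(10, 5.0), (6, 4.0)], [(8, 3.0), (5, 2.5)]]
score, picks = select_summary(candidates, budget=12)
```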

  14. Optimization of lattice surgery is NP-hard

    NASA Astrophysics Data System (ADS)

    Herr, Daniel; Nori, Franco; Devitt, Simon J.

    2017-09-01

    The traditional method for computation in either the surface code or in the Raussendorf model is the creation of holes or "defects" within the encoded lattice of qubits that are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work, we focus on the lattice surgery representation, which realizes transversal logic operations without destroying the intrinsic 2D nearest-neighbor properties of the braid-based surface code and achieves universality without defects and braid-based logic. For both techniques there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving to be difficult and the classical complexity associated with this problem has yet to be determined. In the context of lattice-surgery-based logic, we can introduce an optimality condition, which corresponds to a circuit with the lowest resource requirements in terms of physical qubits and computational time, and prove that the complexity of optimizing a quantum circuit in the lattice surgery model is NP-hard.

  15. Software for Optimizing Plans Involving Interdependent Goals

    NASA Technical Reports Server (NTRS)

    Estlin, Tara; Gaines, Daniel; Rabideau, Gregg

    2005-01-01

    A computer program enables construction and optimization of plans for activities that are directed toward achievement of goals that are interdependent. Goal interdependence is defined as the achievement of one or more goals affecting the desirability or priority of achieving one or more other goals. This program is overlaid on the Automated Scheduling and Planning Environment (ASPEN) software system, aspects of which have been described in a number of prior NASA Tech Briefs articles. Unlike other known or related planning programs, this program considers interdependences among goals that can change between problems and provides a language for easily specifying such dependences. Specifications of the interdependences can be formulated dynamically and provided to the associated planning software as part of the goal input. Then an optimization algorithm provided by this program enables the planning software to reason about the interdependences and incorporate them into an overall objective function that it uses to rate the quality of a plan under construction and to direct its optimization search. In tests on a series of problems of planning geological experiments by a team of instrumented robotic vehicles (rovers) on new terrain, this program was found to enhance plan quality.
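
    The notion of goal interdependence can be made concrete with a toy objective in which pairs of jointly achieved goals contribute a bonus or penalty on top of each goal's base value. This is an illustrative sketch only; ASPEN's actual dependence language and objective function are richer, and the goal names and numbers below are hypothetical:

```python
def plan_score(achieved, base_value, interdeps):
    """Score a plan: sum of base values of achieved goals, plus a delta for
    each interdependent pair in which both goals are achieved."""
    score = sum(base_value[g] for g in achieved)
    for (a, b), delta in interdeps.items():
        if a in achieved and b in achieved:
            score += delta
    return score

# Hypothetical rover goals: imaging and sampling the same rock are worth
# more together than the sum of their individual values.
base = {"image_rock": 3.0, "sample_rock": 4.0, "drive_north": 1.0}
deps = {("image_rock", "sample_rock"): 2.0}
joint = plan_score({"image_rock", "sample_rock"}, base, deps)
solo = plan_score({"image_rock"}, base, deps)
```

An optimization search would use such a score to rate candidate plans and prefer ones that exploit positive interdependences.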

  16. Using High Resolution Design Spaces for Aerodynamic Shape Optimization Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Li, Wu; Padula, Sharon

    2004-01-01

    This paper explains why high resolution design spaces encourage traditional airfoil optimization algorithms to generate noisy shape modifications, which lead to inaccurate linear predictions of aerodynamic coefficients and potential failure of descent methods. By using auxiliary drag constraints for a simultaneous drag reduction at all design points and the least shape distortion to achieve the targeted drag reduction, an improved algorithm generates relatively smooth optimal airfoils with no severe off-design performance degradation over a range of flight conditions, in high resolution design spaces parameterized by cubic B-spline functions. Simulation results using FUN2D in Euler flows are included to show the capability of the robust aerodynamic shape optimization method over a range of flight conditions.

  17. Optimal Control for Fast and Robust Generation of Entangled States in Anisotropic Heisenberg Chains

    NASA Astrophysics Data System (ADS)

    Zhang, Xiong-Peng; Shao, Bin; Zou, Jian

    2017-05-01

    Motivated by some recent results of optimal control (OC) theory, we study anisotropic XXZ Heisenberg spin-1/2 chains with control fields acting on a single spin, with the aim of exploring how maximally entangled states can be prepared. To achieve this goal, we use a numerical optimization algorithm (the Krotov algorithm, which has been shown to be capable of reaching the quantum speed limit) to search for an optimal set of control parameters, and then obtain OC pulses corresponding to the target fidelity. We find that the minimum time for preparing the target state depends on the anisotropy parameter Δ of the model. Finally, we analyze the robustness of the obtained optimal fidelities and the effectiveness of the Krotov method under realistic conditions.

  18. On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization, where the objective function evaluations are computationally expensive, is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. Several approaches that have proven effective for other evolutionary algorithms are modified and implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for standard test optimization problems and for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.

  19. Optimized plasma actuation on asymmetric vortex over a slender body

    NASA Astrophysics Data System (ADS)

    Long, Yuexiao; Li, Huaxing; Meng, Xuanshi; Hu, Haiyang

    2018-01-01

    Detailed particle-image-velocimetry and surface pressure measurements are conducted to study asymmetric vortex control over a slender body at high angles of attack by using a pair of optimized alternating current surface-dielectric-barrier discharge plasma actuators. The Reynolds number based on the base diameter of the model is ReD = 3.8 × 105. Steady and duty-cycle manipulations are employed. The results demonstrate the effectiveness of the optimized actuator with a thick Teflon barrier at a high free-stream speed. Perfect linear proportional control is also achieved under duty-cycle control with a reduced frequency of f+ = 0.17.

  20. Optimization of Aspergillus niger rock phosphate solubilization in solid-state fermentation and use of the resulting product as a P fertilizer

    PubMed Central

    Mendes, Gilberto de Oliveira; da Silva, Nina Morena Rêgo Muniz; Anastácio, Thalita Cardoso; Vassilev, Nikolay Bojkov; Ribeiro, José Ivo; da Silva, Ivo Ribeiro; Costa, Maurício Dutra

    2015-01-01

    A biotechnological strategy for the production of an alternative P fertilizer is described in this work. The fertilizer was produced through rock phosphate (RP) solubilization by Aspergillus niger in a solid-state fermentation (SSF) with sugarcane bagasse as substrate. SSF conditions were optimized by response surface methodology after an initial screening of factors with a significant effect on RP solubilization. The optimized levels of the factors were 865 mg of biochar, 250 mg of RP, 270 mg of sucrose and 6.2 ml of water per gram of bagasse. At this optimal setting, 8.6 mg of water-soluble P per gram of bagasse was achieved, representing an increase of 2.4 times over the non-optimized condition. The optimized SSF product was partially incinerated at 350°C (SB-350) and 500°C (SB-500) to reduce its volume and, consequently, increase P concentration. The post-processed formulations of the SSF product were evaluated in a soil–plant experiment. The formulations SB-350 and SB-500 increased the growth and P uptake of common bean plants (Phaseolus vulgaris L.) when compared with the non-treated RP. Furthermore, these two formulations had a yield relative to triple superphosphate of 60% (on a dry mass basis). Besides increasing P concentration, incineration improved the SSF product performance probably by decreasing microbial immobilization of nutrients during the decomposition of the remaining SSF substrate. The process proposed is a promising alternative for the management of P fertilization since it enables the utilization of low-solubility RPs and relies on the use of inexpensive materials. PMID:26112323
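
    The response-surface idea reduces, in one dimension, to fitting a quadratic through design points and reading off its stationary point. The sketch below fits a parabola exactly through three points; the study fits a multi-factor surface to many runs by least squares, so this is only the one-dimensional skeleton with made-up numbers:

```python
def quadratic_optimum(p0, p1, p2):
    """Fit y = a*x^2 + b*x + c exactly through three (x, y) design points
    using divided differences, and return the stationary point x* = -b/(2a)."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    d1 = (y1 - y0) / (x1 - x0)          # first divided difference
    d2 = (y2 - y1) / (x2 - x1)
    a = (d2 - d1) / (x2 - x0)           # quadratic coefficient
    b = d1 - a * (x0 + x1)              # linear coefficient (Newton form)
    if a == 0:
        raise ValueError("points are collinear; no interior optimum")
    return -b / (2.0 * a)

# Hypothetical yields at three factor levels, peaking between them.
x_star = quadratic_optimum((1, 6.0), (3, 10.0), (5, 6.0))
```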

  1. Optimal satisfaction degree in energy harvesting cognitive radio networks

    NASA Astrophysics Data System (ADS)

    Li, Zan; Liu, Bo-Yang; Si, Jiang-Bo; Zhou, Fu-Hui

    2015-12-01

    A cognitive radio (CR) network with energy harvesting (EH) is considered to improve both spectrum efficiency and energy efficiency. A hidden Markov model (HMM) is used to characterize the imperfect spectrum sensing process. In order to maximize the whole satisfaction degree (WSD) of the cognitive radio network, a tradeoff between the average throughput of the secondary user (SU) and the interference to the primary user (PU) is analyzed. We formulate the satisfaction degree optimization problem as a mixed integer nonlinear programming (MINLP) problem, which is solved using a differential evolution (DE) algorithm. The proposed formulation allows the network to adaptively achieve the optimal solution based on its required quality of service (QoS). Numerical results are given to verify our analysis. Project supported by the National Natural Science Foundation of China (Grant No. 61301179), the Doctorial Programs Foundation of the Ministry of Education of China (Grant No. 20110203110011), and the 111 Project (Grant No. B08038).
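
    The HMM view of imperfect spectrum sensing can be sketched with the standard forward (filtering) recursion for a two-state idle/busy channel observed through a noisy detector. The transition and emission values below are illustrative assumptions, not the paper's parameters:

```python
def hmm_filter(trans, emit, pi, obs):
    """Forward recursion for a 2-state occupancy HMM. Returns the posterior
    P(channel busy | observations so far) after each sensing outcome.
    States: 0 = idle, 1 = busy (PU active); obs: 0/1 detector decisions."""
    belief = pi[:]
    out = []
    for o in obs:
        # predict: propagate belief through the transition matrix
        pred = [sum(belief[i] * trans[i][j] for i in range(2)) for j in range(2)]
        # update: weight by emission likelihood of the observation, normalize
        post = [pred[j] * emit[j][o] for j in range(2)]
        z = sum(post)
        belief = [p / z for p in post]
        out.append(belief[1])
    return out

trans = [[0.9, 0.1], [0.2, 0.8]]   # P(next state | current state), assumed
emit = [[0.9, 0.1], [0.1, 0.9]]    # P(obs | state): a 90%-accurate sensor
busy = hmm_filter(trans, emit, [0.5, 0.5], obs=[1, 1])
```

Two consecutive "busy" detections push the posterior occupancy belief well above the 0.5 prior, which is the information a WSD-maximizing scheduler would act on.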

  2. Aerostructural optimization of a morphing wing for airborne wind energy applications

    NASA Astrophysics Data System (ADS)

    Fasel, U.; Keidel, D.; Molinari, G.; Ermanni, P.

    2017-09-01

    Airborne wind energy (AWE) vehicles maximize energy production by constantly operating at extreme wing loading, permitted by high flight speeds. Additionally, the wide range of wind speeds and the presence of flow inhomogeneities and gusts create a complex and demanding flight environment for AWE systems. Adaptation to different flow conditions is normally achieved by conventional wing control surfaces and, in case of ground generator-based systems, by varying the reel-out speed. These control degrees of freedom enable to remain within the operational envelope, but cause significant penalties in terms of energy output. A significantly greater adaptability is offered by shape-morphing wings, which have the potential to achieve optimal performance at different flight conditions by tailoring their airfoil shape and lift distribution at different levels along the wingspan. Hence, the application of compliant structures for AWE wings is very promising. Furthermore, active gust load alleviation can be achieved through morphing, which leads to a lower weight and an expanded flight envelope, thus increasing the power production of the AWE system. This work presents a procedure to concurrently optimize the aerodynamic shape, compliant structure, and composite layup of a morphing wing for AWE applications. The morphing concept is based on distributed compliance ribs, actuated by electromechanical linear actuators, guiding the deformation of the flexible—yet load-carrying—composite skin. The goal of the aerostructural optimization is formulated as a high-level requirement, namely to maximize the average annual power production per wing area of an AWE system by tailoring the shape of the wing, and to extend the flight envelope of the wing by actively alleviating gust loads. The results of the concurrent multidisciplinary optimization show a 50.7% increase of extracted power with respect to a sequentially optimized design, highlighting the benefits of morphing and the

  3. Derivative Trade Optimizing Model Utilizing GP Based on Behavioral Finance Theory

    NASA Astrophysics Data System (ADS)

    Matsumura, Koki; Kawamoto, Masaru

    This paper proposed a new technique that builds strategy trees for derivative (option) trading investment decisions based on behavioral finance theory and optimizes them using evolutionary computation, in order to achieve high profitability. The strategy tree uses technical analysis based on statistical, experience-based techniques for the investment decision. The trading model is represented by various technical indexes, and the strategy tree is optimized by genetic programming (GP), one of the evolutionary computation methods. Moreover, this paper proposed a method using the prospect theory of behavioral finance to set a psychological bias for profit and loss, and attempted to select the appropriate strike price of the option for higher investment efficiency. As a result, this technique produced good results and demonstrated the effectiveness of the trading model with the optimized dealing strategy.

  4. Improving Achievement in Low-Performing Schools: Key Results for School Leaders

    ERIC Educational Resources Information Center

    Ward, Randolph E.; Burke, Mary Ann

    2004-01-01

    As accountability in schools becomes more crucial, educators are looking for comprehensive and innovative management practices that respond to challenges and realities of student academic achievement. In order to improve academic performance and the quality of instruction, the entire school community needs to be involved. This book provides six…

  5. Cure Cycle Optimization of Rapidly Cured Out-Of-Autoclave Composites.

    PubMed

    Dong, Anqi; Zhao, Yan; Zhao, Xinqing; Yu, Qiyong

    2018-03-13

    Out-of-autoclave prepreg typically needs a long cure cycle to guarantee good properties as a result of the low processing pressure applied. It is essential to reduce the manufacturing time to achieve real cost reduction and take full advantage of the out-of-autoclave process. The focus of this paper is to reduce the cure cycle time and production cost while maintaining high laminate quality. A rapidly cured out-of-autoclave resin and the corresponding prepreg were independently developed. To determine a suitable rapid cure procedure for the developed prepreg, the effects of heating rate, initial cure temperature, dwelling time, and post-cure time on the final laminate quality were evaluated and the factors were then optimized. As a result, a rapid cure procedure was determined. The results showed that resin infiltration could be completed by the end of the initial cure stage, and no obvious voids could be seen in the laminate at this time. The laminate could achieve good internal quality using the optimized cure procedure. The mechanical test results showed that the laminates had a fiber volume fraction of 59-60% with a final glass transition temperature of 205 °C and excellent mechanical strength, especially the flexural properties.

  6. Cure Cycle Optimization of Rapidly Cured Out-Of-Autoclave Composites

    PubMed Central

    Dong, Anqi; Zhao, Yan; Zhao, Xinqing; Yu, Qiyong

    2018-01-01

    Out-of-autoclave prepreg typically needs a long cure cycle to guarantee good properties as a result of the low processing pressure applied. It is essential to reduce the manufacturing time to achieve real cost reduction and take full advantage of the out-of-autoclave process. The focus of this paper is to reduce the cure cycle time and production cost while maintaining high laminate quality. A rapidly cured out-of-autoclave resin and the corresponding prepreg were independently developed. To determine a suitable rapid cure procedure for the developed prepreg, the effects of heating rate, initial cure temperature, dwelling time, and post-cure time on the final laminate quality were evaluated and the factors were then optimized. As a result, a rapid cure procedure was determined. The results showed that resin infiltration could be completed by the end of the initial cure stage, and no obvious voids could be seen in the laminate at this time. The laminate could achieve good internal quality using the optimized cure procedure. The mechanical test results showed that the laminates had a fiber volume fraction of 59–60% with a final glass transition temperature of 205 °C and excellent mechanical strength, especially the flexural properties. PMID:29534048

  7. Optimization of the nitrification process of wastewater resulting from cassava starch production.

    PubMed

    Fleck, Leandro; Ferreira Tavares, Maria Hermínia; Eyng, Eduardo; Orssatto, Fabio

    2018-05-14

    The present study has the objective of optimizing the operational conditions of an aerated reactor applied to the removal of ammoniacal nitrogen from wastewater resulting from the production of cassava starch. An aerated reactor with a usable volume of 4 L and aeration control by rotameter was used. The airflow and cycle time parameters were controlled and their effects on the removal of ammoniacal nitrogen and the conversion to nitrate were evaluated. The highest ammoniacal nitrogen removal, of 96.62%, occurred under conditions of 24 h and 0.15 L min⁻¹ L reactor⁻¹. The highest nitrate conversion, of 24.81%, occurred under conditions of 40.92 h and 0.15 L min⁻¹ L reactor⁻¹. The remaining ammoniacal nitrogen was converted primarily into nitrite, energy, hydrogen and water. The optimal operational values of the aerated reactor are 29.25 h and 0.22 L min⁻¹ L reactor⁻¹. The mathematical models representative of the process satisfactorily describe ammoniacal nitrogen removal efficiency and nitrate conversion, presenting errors of 2.87% and 3.70%, respectively.

  8. Parental Warmth, Control, and Involvement in Schooling: Predicting Academic Achievement among Korean American Adolescents.

    ERIC Educational Resources Information Center

    Kim, Kyoungho; Rohner, Ronald P.

    2002-01-01

    Explored the relationship between parenting style and academic achievement of Korean American adolescents, investigating the influence of perceived parental warmth and control and improvement in schooling. Survey data indicated that authoritative paternal parenting related to optimal academic achievement. Differences in maternal parenting styles…

  9. Concurrently adjusting interrelated control parameters to achieve optimal engine performance

    DOEpatents

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2015-12-01

    Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.
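The concurrent-adjustment idea can be sketched in a few lines: nudge a first control parameter toward a performance target and apply a coupled correction to the second. The quadratic performance model, the linear coupling gain, and all numerical values below are illustrative assumptions, not the patented method:

```python
# Toy model of concurrently adjusting two interrelated control parameters.
# performance() and COUPLING are invented for illustration only.

def performance(p1, p2):
    # Hypothetical engine-performance variable (0 is the best achievable).
    return -((p1 - 10.0) ** 2 + (p2 - 4.0) ** 2)

COUPLING = -0.5  # assumed: a unit change in p1 requires a -0.5 change in p2

def adjust(p1, p2, target, step=0.05, iters=500):
    """Adjust p1 toward the target and apply the coupled change to p2."""
    for _ in range(iters):
        if performance(p1, p2) >= target:
            break
        # Finite-difference slope of the performance variable w.r.t. p1.
        slope = (performance(p1 + 1e-3, p2) - performance(p1, p2)) / 1e-3
        d1 = step * slope          # gradient-ascent step for parameter 1
        p1 += d1
        p2 += COUPLING * d1        # corresponding adjustment of parameter 2
    return p1, p2

p1, p2 = adjust(8.0, 5.0, target=-1e-4)
```

Because the second parameter tracks every change made to the first, the pair converges jointly rather than each being tuned in isolation.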

  10. Liner Optimization Studies Using the Ducted Fan Noise Prediction Code TBIEM3D

    NASA Technical Reports Server (NTRS)

    Dunn, M. H.; Farassat, F.

    1998-01-01

    In this paper we demonstrate the usefulness of the ducted fan noise prediction code TBIEM3D as a liner optimization design tool. Boundary conditions on the interior duct wall allow for hard walls or a locally reacting liner with axially segmented, circumferentially uniform impedance. Two liner optimization studies are considered in which farfield noise attenuation due to the presence of a liner is maximized by adjusting the liner impedance. In the first example, the dependence of optimal liner impedance on frequency and liner length is examined. Results show that both the optimal impedance and attenuation levels are significantly influenced by liner length and frequency. In the second example, TBIEM3D is used to compare radiated sound pressure levels between optimal and non-optimal liner cases at conditions designed to simulate take-off. It is shown that significant noise reduction is achieved for most of the sound field by selecting the optimal or near optimal liner impedance. Our results also indicate that there is a relatively large region of the impedance plane over which optimal or near optimal liner behavior is attainable. This is an important conclusion for the designer since there are variations in liner characteristics due to manufacturing imprecisions.

  11. A Simulation Optimization Approach to Epidemic Forecasting

    PubMed Central

    Nsoesie, Elaine O.; Beckman, Richard J.; Shashaani, Sara; Nagaraj, Kalyani S.; Marathe, Madhav V.

    2013-01-01

    Reliable forecasts of influenza can aid in the control of both seasonal and pandemic outbreaks. We introduce a simulation optimization (SIMOP) approach for forecasting the influenza epidemic curve. This study represents the final step of a project aimed at using a combination of simulation, classification, statistical and optimization techniques to forecast the epidemic curve and infer underlying model parameters during an influenza outbreak. The SIMOP procedure combines an individual-based model and the Nelder-Mead simplex optimization method. The method is used to forecast epidemics simulated over synthetic social networks representing Montgomery County in Virginia, Miami, Seattle and surrounding metropolitan regions. The results are presented for the first four weeks. Depending on the synthetic network, the peak time could be predicted within a 95% CI as early as seven weeks before the actual peak. The peak infected and total infected were also accurately forecasted for Montgomery County in Virginia within the forecasting period. Forecasting of the epidemic curve for both seasonal and pandemic influenza outbreaks is a complex problem; however, this is a preliminary step and the results suggest that more can be achieved in this area. PMID:23826222

  12. A Simulation Optimization Approach to Epidemic Forecasting.

    PubMed

    Nsoesie, Elaine O; Beckman, Richard J; Shashaani, Sara; Nagaraj, Kalyani S; Marathe, Madhav V

    2013-01-01

    Reliable forecasts of influenza can aid in the control of both seasonal and pandemic outbreaks. We introduce a simulation optimization (SIMOP) approach for forecasting the influenza epidemic curve. This study represents the final step of a project aimed at using a combination of simulation, classification, statistical and optimization techniques to forecast the epidemic curve and infer underlying model parameters during an influenza outbreak. The SIMOP procedure combines an individual-based model and the Nelder-Mead simplex optimization method. The method is used to forecast epidemics simulated over synthetic social networks representing Montgomery County in Virginia, Miami, Seattle and surrounding metropolitan regions. The results are presented for the first four weeks. Depending on the synthetic network, the peak time could be predicted within a 95% CI as early as seven weeks before the actual peak. The peak infected and total infected were also accurately forecasted for Montgomery County in Virginia within the forecasting period. Forecasting of the epidemic curve for both seasonal and pandemic influenza outbreaks is a complex problem; however, this is a preliminary step and the results suggest that more can be achieved in this area.
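The core SIMOP loop — fit simulated epidemic curves to observations by simplex search — can be sketched with SciPy's Nelder-Mead optimizer. The logistic curve below is a cheap stand-in for the paper's individual-based simulation, and all data are synthetic:

```python
import numpy as np
from scipy.optimize import minimize

def logistic_curve(t, k, r, t0):
    # Stand-in "simulator": cumulative cases following a logistic curve.
    return k / (1.0 + np.exp(-r * (t - t0)))

# Synthetic weekly surveillance data generated from known parameters.
rng = np.random.default_rng(0)
t = np.arange(20)
observed = logistic_curve(t, k=500, r=0.8, t0=8) + rng.normal(0, 5, t.size)

def sse(params):
    # Discrepancy between the simulated and observed epidemic curves.
    k, r, t0 = params
    return float(np.sum((logistic_curve(t, k, r, t0) - observed) ** 2))

# Nelder-Mead simplex search over the model parameters, as in SIMOP.
result = minimize(sse, x0=[300.0, 0.5, 5.0], method="Nelder-Mead")
k_hat, r_hat, t0_hat = result.x
```

With the fitted parameters in hand, the same curve can be extrapolated past the observation window to forecast peak time and size.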

  13. Optimizing a Query by Transformation and Expansion.

    PubMed

    Glocker, Katrin; Knurr, Alexander; Dieter, Julia; Dominick, Friederike; Forche, Melanie; Koch, Christian; Pascoe Pérez, Analie; Roth, Benjamin; Ückert, Frank

    2017-01-01

    In the biomedical sector not only the amount of information produced and uploaded into the web is enormous, but also the number of sources where these data can be found. Clinicians and researchers spend huge amounts of time trying to access this information and to filter the most important answers to a given question. As the formulation of these queries is crucial, automated query expansion is an effective tool to optimize a query and receive the best possible results. In this paper we introduce the concept of a workflow for the optimization of queries in the medical and biological sector by using a series of tools for expansion and transformation of the query. After the definition of attributes by the user, the query string is compared to previous queries in order to add semantically co-occurring terms to the query. Additionally, the query is enlarged by the inclusion of synonyms. The translation into database-specific ontologies ensures the optimal query formulation for the chosen database(s). As this process can be performed in various databases at once, the results are ranked and normalized in order to achieve a comparable list of answers for a question.
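A minimal sketch of the expansion steps (co-occurring terms from previous queries, then synonyms) might look as follows; the tiny dictionaries stand in for the query log and ontology services described above, and all entries are invented:

```python
# Hypothetical stand-ins for the query history and the synonym ontology.
SYNONYMS = {"tumor": ["neoplasm", "tumour"], "liver": ["hepatic"]}
CO_OCCURRING = {("liver", "tumor"): ["hepatocellular carcinoma"]}

def expand(query):
    """Expand a query with co-occurring terms and synonyms."""
    terms = query.lower().split()
    expanded = list(terms)
    # Add semantically co-occurring terms observed in previous queries.
    for pair, extras in CO_OCCURRING.items():
        if all(t in terms for t in pair):
            expanded.extend(extras)
    # Enlarge the query by including synonyms of each term.
    for t in terms:
        expanded.extend(SYNONYMS.get(t, []))
    return expanded

print(expand("liver tumor"))
# → ['liver', 'tumor', 'hepatocellular carcinoma', 'hepatic', 'neoplasm', 'tumour']
```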

  14. MO-FG-CAMPUS-TeP2-04: Optimizing for a Specified Target Coverage Probability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fredriksson, A

    2016-06-15

    Purpose: The purpose of this work is to develop a method for inverse planning of radiation therapy margins. When using this method the user specifies a desired target coverage probability and the system optimizes to meet the demand without any explicit specification of margins to handle setup uncertainty. Methods: The method determines which voxels to include in an optimization function promoting target coverage in order to achieve a specified target coverage probability. Voxels are selected in a way that retains the correlation between them: the target is displaced according to the setup errors and the voxels to include are selected as the union of the displaced target regions under the x% best scenarios according to some quality measure. The quality measure could depend on the dose to the considered structure alone or could depend on the dose to multiple structures in order to take into account correlation between structures. Results: A target coverage function was applied to the CTV of a prostate case with prescription 78 Gy and compared to conventional planning using a DVH function on the PTV. Planning was performed to achieve 90% probability of CTV coverage. The plan optimized using the coverage probability function had P(D98 > 77.95 Gy) = 0.97 for the CTV. The PTV plan using a constraint on minimum DVH 78 Gy at 90% had P(D98 > 77.95 Gy) = 0.44 for the CTV. To match the coverage probability optimization, the DVH volume parameter had to be increased to 97%, which resulted in 0.5 Gy higher average dose to the rectum. Conclusion: Optimizing a target coverage probability is an easily used method to find a margin that achieves the desired coverage probability. It can lead to reduced OAR doses at the same coverage probability compared to planning with margins and DVH functions.
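The voxel-selection rule — take the union of displaced target regions over the best x% of setup-error scenarios — can be illustrated in one dimension. The geometry, the error distribution, and the quality measure below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
voxels = np.arange(100)
target = (voxels >= 40) & (voxels < 60)      # nominal target voxels

# Sampled setup errors (whole-voxel shifts) defining the scenarios.
shifts = rng.normal(0, 3, size=200).round().astype(int)

# Stand-in quality measure: smaller displacement = better scenario.
order = np.argsort(np.abs(shifts))
best = shifts[order[: int(0.9 * len(shifts))]]   # keep the best 90%

# Union of the displaced target regions over the retained scenarios.
union = np.zeros_like(target)
for s in best:
    union |= np.roll(target, s)
selected = voxels[union]      # voxels entering the coverage objective
```

The selected set is wider than the nominal target — an implicit margin derived from the scenarios rather than specified by the planner.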

  15. Optimization methods for activities selection problems

    NASA Astrophysics Data System (ADS)

    Mahad, Nor Faradilah; Alias, Suriana; Yaakop, Siti Zulaika; Arshad, Norul Amanina Mohd; Mazni, Elis Sofia

    2017-08-01

    Co-curriculum activities must be joined by every student in Malaysia and these activities bring a lot of benefits to the students. By joining these activities, the students can learn about time management and can develop many useful skills. This project focuses on the selection of co-curriculum activities in a secondary school using two optimization methods: the Analytic Hierarchy Process (AHP) and Zero-One Goal Programming (ZOGP). A secondary school in Negeri Sembilan, Malaysia was chosen as a case study. A set of questionnaires was distributed randomly to calculate the weight of each activity based on the 3 chosen criteria, which are soft skills, interesting activities and performances. The weights were calculated using AHP and the results showed that the most important criterion is soft skills. The ZOGP model was then analyzed using LINGO Software version 15.0. There are two priorities to be considered. The first priority, to minimize the budget for the activities, is achieved since the total budget can be reduced by RM233.00. Therefore, the total budget to implement the selected activities is RM11,195.00. The second priority, to select the co-curriculum activities, is also achieved. The results showed that 9 out of 15 activities were selected. Thus, it can be concluded that the AHP and ZOGP approach can be used as an optimization method for activities selection problems.
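The AHP weighting step can be sketched as follows: build a pairwise-comparison matrix for the three criteria and take its normalised principal eigenvector as the weight vector. The comparison values below are hypothetical, chosen only so that soft skills dominates, as in the study's result:

```python
import numpy as np

# Hypothetical pairwise comparisons (Saaty's 1-9 scale) for the criteria
# soft skills, interesting activities, performances. A[i, j] states how
# much more important criterion i is than criterion j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# AHP weights: normalised principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()

# Consistency ratio (random index RI = 0.58 for a 3x3 matrix).
lam_max = eigvals.real[principal]
cr = ((lam_max - 3) / (3 - 1)) / 0.58
```

For this matrix the soft-skills weight comes out largest, and the consistency ratio stays below the conventional 0.1 threshold, so the judgments are acceptably consistent.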

  16. A Multiobjective Optimization Framework for Online Stochastic Optimal Control in Hybrid Electric Vehicles

    DOE PAGES

    Malikopoulos, Andreas

    2015-01-01

    The increasing urgency to extract additional efficiency from hybrid propulsion systems has led to the development of advanced power management control algorithms. In this paper we address the problem of online optimization of the supervisory power management control in parallel hybrid electric vehicles (HEVs). We model HEV operation as a controlled Markov chain and we show that the control policy yielding the Pareto optimal solution minimizes online the long-run expected average cost per unit time criterion. The effectiveness of the proposed solution is validated through simulation and compared to the solution derived with dynamic programming using the average cost criterion. Both solutions achieved the same cumulative fuel consumption, demonstrating that the online Pareto control policy is an optimal control policy.

  17. Optimizing Requirements Decisions with KEYS

    NASA Technical Reports Server (NTRS)

    Jalali, Omid; Menzies, Tim; Feather, Martin

    2008-01-01

    Recent work with NASA's Jet Propulsion Laboratory has allowed for external access to five of JPL's real-world requirements models, anonymized to conceal proprietary information but retaining their computational nature. Experimentation with these models, reported herein, demonstrates a dramatic speedup in the computations performed on them. These models have a well-defined goal: select mitigations that retire risks and thereby increase the number of attainable requirements. Such a non-linear optimization is a well-studied problem. However, identification of not only (a) the optimal solution(s) but also (b) the key factors leading to them is less well studied. Our technique, called KEYS, shows a rapid way of simultaneously identifying the solutions and their key factors. KEYS improves on prior work by several orders of magnitude. Prior experiments with simulated annealing or treatment learning took tens of minutes to hours to terminate. KEYS runs much faster than that; e.g., for one model, KEYS ran 13,000 times faster than treatment learning (40 minutes versus 0.18 seconds). Processing these JPL models is a non-linear optimization problem: the fewest mitigations must be selected while achieving the most requirements. With this paper, we challenge other members of the PROMISE community to improve on our results with other techniques.

  18. A practical globalization of one-shot optimization for optimal design of tokamak divertors

    NASA Astrophysics Data System (ADS)

    Blommaert, Maarten; Dekeyser, Wouter; Baelmans, Martine; Gauger, Nicolas R.; Reiter, Detlev

    2017-01-01

    In past studies, nested optimization methods were successfully applied to design of the magnetic divertor configuration in nuclear fusion reactors. In this paper, so-called one-shot optimization methods are pursued. Due to convergence issues, a globalization strategy for the one-shot solver is sought. Whereas Griewank introduced a globalization strategy using a doubly augmented Lagrangian function that includes primal and adjoint residuals, its practical usability is limited by the necessity of second order derivatives and expensive line search iterations. In this paper, a practical alternative is offered that avoids these drawbacks by using a regular augmented Lagrangian merit function that penalizes only state residuals. Additionally, robust rank-two Hessian estimation is achieved by adaptation of Powell's damped BFGS update rule. The application of the novel one-shot approach to magnetic divertor design is considered in detail. For this purpose, the approach is adapted to be complementary with practical in parts adjoint sensitivities. Using the globalization strategy, stable convergence of the one-shot approach is achieved.

  19. Deterministic methods for multi-control fuel loading optimization

    NASA Astrophysics Data System (ADS)

    Rahman, Fariz B. Abdul

    We have developed a multi-control fuel loading optimization code for pressurized water reactors based on deterministic methods. The objective is to flatten the fuel burnup profile, which maximizes overall energy production. The optimal control problem is formulated using the method of Lagrange multipliers and the direct adjoining approach for treatment of the inequality power peaking constraint. The optimality conditions are derived for a multi-dimensional multi-group optimal control problem via calculus of variations. Because the Hamiltonian is linear in the control, our optimal control problem is solved using the gradient method to minimize the Hamiltonian and a Newton step formulation to obtain the optimal control. We are able to satisfy the power peaking constraint during depletion with the control at beginning of cycle (BOC) by building the proper burnup path forward in time and utilizing the adjoint burnup to propagate the information back to the BOC. Our test results show that we are able to achieve our objective and satisfy the power peaking constraint during depletion using either the fissile enrichment or burnable poison as the control. Our fuel loading designs show an increase of 7.8 equivalent full power days (EFPDs) in cycle length compared with 517.4 EFPDs for the AP600 first cycle.

  20. Experimental optimization of directed field ionization

    NASA Astrophysics Data System (ADS)

    Liu, Zhimin Cheryl; Gregoric, Vincent C.; Carroll, Thomas J.; Noel, Michael W.

    2017-04-01

    The state distribution of an ensemble of Rydberg atoms is commonly measured using selective field ionization. The resulting time resolved ionization signal from a single energy eigenstate tends to spread out due to the multiple avoided Stark level crossings atoms must traverse on the way to ionization. The shape of the ionization signal can be modified by adding a perturbation field to the main field ramp. Here, we present experimental results of the manipulation of the ionization signal using a genetic algorithm. We address how both the genetic algorithm and the experimental parameters were adjusted to achieve an optimized result. This work was supported by the National Science Foundation under Grants No. 1607335 and No. 1607377.
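In the same spirit, a bare-bones genetic algorithm can be sketched in a few lines: evolve a small perturbation waveform toward a fitness optimum. Here the fitness is a made-up stand-in (distance to a target waveform) rather than the measured ionization signal:

```python
import random

TARGET = [0.2, -0.1, 0.4, 0.0, -0.3]   # invented "ideal" waveform samples

def fitness(genome):
    # Stand-in fitness: negative squared error to the target waveform.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.3, scale=0.1):
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

random.seed(1)
pop = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(40)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                     # keep the 10 fittest (elitism)
    pop = elite + [mutate(crossover(random.choice(elite),
                                    random.choice(elite)))
                   for _ in range(30)]
best = max(pop, key=fitness)
```

In the experiment, each fitness evaluation is a measurement, so the population size and generation count trade optimization quality against data-taking time.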

  1. Why Evaluations Fail: To Achieve Meaningful Results, Address These Common Challenges

    ERIC Educational Resources Information Center

    Killion, Joellen

    2017-01-01

    Evaluation of professional learning illuminates the interactions that occur in the implementation of planned learning experiences and the necessary supports designed to improve professional practice and its effects on students. It investigates how a set of actions designed to achieve defined short- and long-term outcomes occur over time and how…

  2. Robust Optimal Adaptive Control Method with Large Adaptive Gain

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2009-01-01

    In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in a stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time delay margin.

  3. Optimal type 2 diabetes mellitus management: the randomised controlled OPTIMISE benchmarking study: baseline results from six European countries.

    PubMed

    Hermans, Michel P; Brotons, Carlos; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank

    2013-12-01

    Micro- and macrovascular complications of type 2 diabetes have an adverse impact on survival, quality of life and healthcare costs. The OPTIMISE (OPtimal Type 2 dIabetes Management Including benchmarking and Standard trEatment) trial comparing physicians' individual performances with a peer group evaluates the hypothesis that benchmarking, using assessments of change in three critical quality indicators of vascular risk: glycated haemoglobin (HbA1c), low-density lipoprotein-cholesterol (LDL-C) and systolic blood pressure (SBP), may improve quality of care in type 2 diabetes in the primary care setting. This was a randomised, controlled study of 3980 patients with type 2 diabetes. Six European countries participated in the OPTIMISE study (NCT00681850). Quality of care was assessed by the percentage of patients achieving pre-set targets for the three critical quality indicators over 12 months. Physicians were randomly assigned to receive either benchmarked or non-benchmarked feedback. All physicians received feedback on six of their patients' modifiable outcome indicators (HbA1c, fasting glycaemia, total cholesterol, high-density lipoprotein-cholesterol (HDL-C), LDL-C and triglycerides). Physicians in the benchmarking group additionally received information on levels of control achieved for the three critical quality indicators compared with colleagues. At baseline, the percentage of evaluable patients (N = 3980) achieving pre-set targets was 51.2% (HbA1c; n = 2028/3964); 34.9% (LDL-C; n = 1350/3865); 27.3% (systolic blood pressure; n = 911/3337). OPTIMISE confirms that target achievement in the primary care setting is suboptimal for all three critical quality indicators. This represents an unmet but modifiable need to revisit the mechanisms and management of improving care in type 2 diabetes. OPTIMISE will help to assess whether benchmarking is a useful clinical tool for improving outcomes in type 2 diabetes.

  4. Lattice Boltzmann Simulation Optimization on Leading Multicore Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Samuel; Carter, Jonathan; Oliker, Leonid

    2008-02-01

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to a lattice Boltzmann application (LBMHD) that historically has made poor use of scalar microprocessors due to its complex data structures and memory access patterns. We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Clovertown, AMD Opteron X2, Sun Niagara2, STI Cell, as well as the single-core Intel Itanium2. Rather than hand-tuning LBMHD for each system, we develop a code generator that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned LBMHD application achieves up to a 14x improvement compared with the original code. Additionally, we present detailed analysis of each optimization, which reveals surprising hardware bottlenecks and software challenges for future multicore systems and applications.

  5. Lattice Boltzmann simulation optimization on leading multicore platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, S.; Carter, J.; Oliker, L.

    2008-01-01

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to a lattice Boltzmann application (LBMHD) that historically has made poor use of scalar microprocessors due to its complex data structures and memory access patterns. We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Clovertown, AMD Opteron X2, Sun Niagara2, STI Cell, as well as the single-core Intel Itanium2. Rather than hand-tuning LBMHD for each system, we develop a code generator that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned LBMHD application achieves up to a 14x improvement compared with the original code. Additionally, we present detailed analysis of each optimization, which reveals surprising hardware bottlenecks and software challenges for future multicore systems and applications.
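The selection loop at the heart of such an auto-tuner is simple to sketch: time every candidate variant of a kernel and keep the fastest. The blocked summation below is only a stand-in for the generated LBMHD kernels:

```python
import timeit

N = 1 << 14
data = list(range(N))

def blocked_sum(block):
    # Candidate kernel variant: sum the array in chunks of `block`.
    total = 0
    for start in range(0, N, block):
        total += sum(data[start:start + block])
    return total

# Auto-tuning: benchmark each candidate block size, keep the fastest.
candidates = [64, 256, 1024, 4096]
timings = {b: timeit.timeit(lambda: blocked_sum(b), number=20)
           for b in candidates}
best = min(timings, key=timings.get)   # tuned parameter for this machine
```

Because the winner depends on the machine the search runs on, the tuned parameter is discovered per platform rather than fixed in the source, which is the point of the approach.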

  6. Exploring the complexity of quantum control optimization trajectories.

    PubMed

    Nanduri, Arun; Shir, Ofer M; Donovan, Ashley; Ho, Tak-San; Rabitz, Herschel

    2015-01-07

    The control of quantum system dynamics is generally performed by seeking a suitable applied field. The physical objective as a functional of the field forms the quantum control landscape, whose topology, under certain conditions, has been shown to contain no critical point suboptimal traps, thereby enabling effective searches for fields that give the global maximum of the objective. This paper addresses the structure of the landscape as a complement to topological critical point features. Recent work showed that landscape structure is highly favorable for optimization of state-to-state transition probabilities, in that gradient-based control trajectories to the global maximum value are nearly straight paths. The landscape structure is codified in the metric R ≥ 1.0, defined as the ratio of the length of the control trajectory to the Euclidean distance between the initial and optimal controls. A value of R = 1 would indicate an exactly straight trajectory to the optimal observable value. This paper extends the state-to-state transition probability results to the quantum ensemble and unitary transformation control landscapes. Again, nearly straight trajectories predominate, and we demonstrate that R can take values approaching 1.0 with high precision. However, the interplay of optimization trajectories with critical saddle submanifolds is found to influence landscape structure. A fundamental relationship necessary for perfectly straight gradient-based control trajectories is derived, wherein the gradient on the quantum control landscape must be an eigenfunction of the Hessian. This relation is an indicator of landscape structure and may provide a means to identify physical conditions when control trajectories can achieve perfect linearity. The collective favorable landscape topology and structure provide a foundation to understand why optimal quantum control can be readily achieved.
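The metric R is straightforward to compute for any recorded optimization trajectory: divide the accumulated path length by the straight-line distance between its endpoints. The toy quadratic landscapes below are illustrative, not quantum control landscapes; the isotropic one yields a perfectly straight gradient path (R = 1), while the anisotropic one bends the trajectory (R > 1):

```python
import numpy as np

def ascent_trajectory(grad, x0, lr, steps=200):
    # Record the iterates of plain gradient ascent.
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        xs.append(xs[-1] + lr * grad(xs[-1]))
    return np.array(xs)

def straightness_ratio(traj):
    # R = (trajectory path length) / (Euclidean start-to-end distance).
    path_len = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))
    return path_len / np.linalg.norm(traj[-1] - traj[0])

# Isotropic objective J(x) = -||x||^2: the ascent path is a straight ray.
R_iso = straightness_ratio(ascent_trajectory(lambda x: -2.0 * x,
                                             x0=[1.0, 2.0], lr=0.1))

# Anisotropic objective J(x) = -(x1^2 + 10*x2^2): the path bends.
grad_aniso = lambda x: np.array([-2.0 * x[0], -20.0 * x[1]])
R_aniso = straightness_ratio(ascent_trajectory(grad_aniso,
                                               x0=[1.0, 2.0], lr=0.02))
```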

  7. A rotor optimization using regression analysis

    NASA Technical Reports Server (NTRS)

    Giansante, N.

    1984-01-01

    The design and development of helicopter rotors is subject to the many design variables and their interactions that affect rotor operation. Until recently, selection of rotor design variables to achieve specified rotor operational qualities has been a costly, time-consuming, repetitive task. For the past several years, Kaman Aerospace Corporation has successfully applied multiple linear regression analysis, coupled with optimization and sensitivity procedures, in the analytical design of rotor systems. It is concluded that approximating equations can be developed rapidly for a multiplicity of objective and constraint functions, and optimizations can be performed in a rapid and cost-effective manner; the number and/or range of design variables can be increased by expanding the data base and developing approximating functions to reflect the expanded design space; the order of the approximating equations can be expanded easily to improve correlation between analyzer results and the approximating equations; gradients of the approximating equations can be calculated easily, and these gradients are smooth functions, reducing the risk of numerical problems in the optimization; the use of approximating functions allows the problem to be started easily and rapidly from various initial designs to enhance the probability of finding a global optimum; and the approximating equations are independent of the analysis or optimization codes used.

  8. Modelling and Optimization of Polycaprolactone Ultrafine-Fibres Electrospinning Process Using Response Surface Methodology

    PubMed Central

    Ruys, Andrew J.

    2018-01-01

    Electrospun fibres have gained broad interest in biomedical applications, including tissue engineering scaffolds, due to their potential in mimicking extracellular matrix and producing structures favourable for cell and tissue growth. The development of scaffolds often involves multivariate production parameters and multiple output characteristics to define product quality. In this study on electrospinning of polycaprolactone (PCL), response surface methodology (RSM) was applied to investigate the determining parameters and find optimal settings to achieve the desired properties of fibrous scaffold for acetabular labrum implant. The results showed that solution concentration influenced fibre diameter, while elastic modulus was determined by solution concentration, flow rate, temperature, collector rotation speed, and interaction between concentration and temperature. Relationships between these variables and outputs were modelled, followed by an optimization procedure. Using the optimized setting (solution concentration of 10% w/v, flow rate of 4.5 mL/h, temperature of 45 °C, and collector rotation speed of 1500 RPM), a target elastic modulus of 25 MPa could be achieved at a minimum possible fibre diameter (1.39 ± 0.20 µm). This work demonstrated that multivariate factors of production parameters and multiple responses can be investigated, modelled, and optimized using RSM. PMID:29562614
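The fit-then-optimize step of RSM reduces, in the single-factor case, to fitting a second-order model by least squares and solving for the stationary point. The data below are synthetic, not the electrospinning measurements:

```python
import numpy as np

# Synthetic single-factor response with a maximum near x = 2.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.8, 3.9, 3.2, 1.0])

# Second-order response-surface model y = b2*x^2 + b1*x + b0 (least squares).
b2, b1, b0 = np.polyfit(x, y, 2)

# Stationary point of the fitted model (a maximum, since b2 < 0).
x_opt = -b1 / (2.0 * b2)
```

With several factors, the same idea applies: fit a quadratic in all factors plus interaction terms, then optimize the fitted surface instead of the expensive experiment.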

  9. Online adaptive optimal control for continuous-time nonlinear systems with completely unknown dynamics

    NASA Astrophysics Data System (ADS)

    Lv, Yongfeng; Na, Jing; Yang, Qinmin; Wu, Xing; Guo, Yu

    2016-01-01

    An online adaptive optimal control is proposed for continuous-time nonlinear systems with completely unknown dynamics, which is achieved by developing a novel identifier-critic-based approximate dynamic programming algorithm with a dual neural network (NN) approximation structure. First, an adaptive NN identifier is designed to obviate the requirement of complete knowledge of system dynamics, and a critic NN is employed to approximate the optimal value function. Then, the optimal control law is computed based on the information from the identifier NN and the critic NN, so that the actor NN is not needed. In particular, a novel adaptive law design method with the parameter estimation error is proposed to online update the weights of both identifier NN and critic NN simultaneously, which converge to small neighbourhoods around their ideal values. The closed-loop system stability and the convergence to small vicinity around the optimal solution are all proved by means of the Lyapunov theory. The proposed adaptation algorithm is also improved to achieve finite-time convergence of the NN weights. Finally, simulation results are provided to exemplify the efficacy of the proposed methods.

  10. Joint optimization of a partially coherent Gaussian beam for free-space optical communication over turbulent channels with pointing errors.

    PubMed

    Lee, It Ee; Ghassemlooy, Zabih; Ng, Wai Pang; Khalighi, Mohammad-Ali

    2013-02-01

    Joint beam width and spatial coherence length optimization is proposed to maximize the average capacity in partially coherent free-space optical links, under the combined effects of atmospheric turbulence and pointing errors. An optimization metric is introduced to enable feasible translation of the joint optimal transmitter beam parameters into an analogous level of divergence of the received optical beam. Results show that near-ideal average capacity is best achieved through the introduction of a larger receiver aperture and the joint optimization technique.

  11. Finite grade pheromone ant colony optimization for image segmentation

    NASA Astrophysics Data System (ADS)

    Yuanjing, F.; Li, Y.; Liangjun, K.

    2008-06-01

    By combining the decision process of ant colony optimization (ACO) with the multistage decision process of image segmentation based on an active contour model (ACM), an algorithm called finite grade ACO (FACO) for image segmentation is proposed. This algorithm classifies pheromone into finite grades; updating of the pheromone is achieved by changing the grades, and the updated quantity of pheromone is independent of the objective function. The algorithm, which provides a new approach to obtain precise contours, is proved to converge to the global optimal solutions linearly by means of finite Markov chains. Segmentation experiments with ultrasound heart images show the effectiveness of the algorithm. Comparing the results for segmentation of left ventricle images shows that the ACO approach to image segmentation is more effective than the GA approach, and the new pheromone updating strategy shows good time performance in the optimization process.

  12. Generic Community System Specification: A Proposed Format for Reporting the Results of Microgrid Optimization Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jimenez, Antonio

    This document provides a proposed format for reporting the results of microgrid optimization analysis. While the proposed format assumes that the modeling is conducted as part of a renewable energy retrofit of an existing diesel micro-grid, the format can certainly be adopted for other situations.

  13. Reliability based design optimization: Formulations and methodologies

    NASA Astrophysics Data System (ADS)

    Agarwal, Harish

Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed.

  14. Pathway optimization by re-design of untranslated regions for L-tyrosine production in Escherichia coli

    PubMed Central

    Cheol Kim, Seong; Eun Min, Byung; Gyu Hwang, Hyun; Woo Seo, Sang; Yeol Jung, Gyoo

    2015-01-01

    L-tyrosine is a commercially important compound in the food, pharmaceutical, chemical, and cosmetic industries. Although several attempts have been made to improve L-tyrosine production, translation-level expression control and carbon flux rebalancing around phosphoenolpyruvate (PEP) node still remain to be achieved for optimizing the pathway. Here, we demonstrate pathway optimization by altering gene expression levels for L-tyrosine production in Escherichia coli. To optimize the L-tyrosine biosynthetic pathway, a synthetic constitutive promoter and a synthetic 5′-untranslated region (5′-UTR) were introduced for each gene of interest to allow for control at both transcription and translation levels. Carbon flux rebalancing was achieved by controlling the expression level of PEP synthetase using UTR Designer. The L-tyrosine productivity of the engineered E. coli strain was increased through pathway optimization resulting in 3.0 g/L of L-tyrosine titer, 0.0354 g L-tyrosine/h/g DCW of productivity, and 0.102 g L-tyrosine/g glucose yield. Thus, this work demonstrates that pathway optimization by 5′-UTR redesign is an effective strategy for the development of efficient L-tyrosine-producing bacteria. PMID:26346938

  15. Development of a method of robust rain gauge network optimization based on intensity-duration-frequency results

    NASA Astrophysics Data System (ADS)

    Chebbi, A.; Bargaoui, Z. K.; da Conceição Cunha, M.

    2013-10-01

Based on rainfall intensity-duration-frequency (IDF) curves fitted at several locations of a given area, a robust optimization approach is proposed to identify the best locations to install new rain gauges. The advantage of robust optimization is that the resulting design solutions yield networks which behave acceptably under hydrological variability. Robust optimization can overcome the problem of selecting representative rainfall events when building the optimization process. This paper reports an original approach based on the Montana IDF model parameters. The latter are assumed to be geostatistical variables, and their spatial interdependence is taken into account through the adoption of cross-variograms in the kriging process. The problem of optimally locating a fixed number of new monitoring stations based on an existing rain gauge network is addressed. The objective function is based on the mean spatial kriging variance and the rainfall variogram structure, using a variance-reduction method. Hydrological variability was taken into account by considering and implementing several return periods to define the robust objective function. Variance minimization is performed using a simulated annealing algorithm. In addition, knowledge of the time horizon is needed for the computation of the robust objective function. A short-term and a long-term horizon were studied, and optimal networks were identified for each. The method developed is applied to northern Tunisia (area = 21,000 km²). Data inputs for the variogram analysis were IDF curves provided by the hydrological bureau and available for 14 tipping-bucket rain gauges. The recording period ran from 1962 to 2001, depending on the station. The study concerns a hypothetical network augmentation based on the network configuration in 1973, a very significant year in Tunisia because of an exceptional regional flood event in March 1973. This network consisted of 13 stations and did not meet World Meteorological
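    The variance-minimization step above relies on simulated annealing. Below is a generic annealing minimizer sketch in which the mean-kriging-variance objective is replaced by an arbitrary callable; the function names, cooling schedule, and parameter values are assumptions for illustration, not the authors' settings.

```python
import math
import random

def simulated_annealing(objective, neighbor, x0, t0=1.0, cooling=0.95,
                        steps=2000, seed=0):
    """Generic simulated annealing minimizer. In the network-design setting,
    `objective` would be the mean spatial kriging variance of a candidate
    gauge network and `neighbor` would move one prospective gauge site."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = objective(y)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if fy <= fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling schedule (assumed)
    return best, fbest
```

    For example, minimizing a one-dimensional quadratic with `neighbor=lambda x, rng: x + rng.uniform(-0.5, 0.5)` converges to the minimizer; in the paper's setting the candidate move would instead relocate one of the proposed gauge positions.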

  16. The Value of Methodical Management: Optimizing Science Results

    NASA Astrophysics Data System (ADS)

    Saby, Linnea

    2016-01-01

    As science progresses, making new discoveries in radio astronomy becomes increasingly complex. Instrumentation must be incredibly fine-tuned and well-understood, scientists must consider the skills and schedules of large research teams, and inter-organizational projects sometimes require coordination between observatories around the globe. Structured and methodical management allows scientists to work more effectively in this environment and leads to optimal science output. This report outlines the principles of methodical project management in general, and describes how those principles are applied at the National Radio Astronomy Observatory (NRAO) in Charlottesville, Virginia.

  17. Optimal fault-tolerant control strategy of a solid oxide fuel cell system

    NASA Astrophysics Data System (ADS)

    Wu, Xiaojuan; Gao, Danhui

    2017-10-01

For solid oxide fuel cell (SOFC) development, load tracking, heat management, the air excess ratio constraint, high efficiency, low cost and fault diagnosis are six key issues. However, no literature studies control techniques that combine optimization and fault diagnosis for the SOFC system. An optimal fault-tolerant control strategy is presented in this paper, which involves four parts: a fault diagnosis module, a switching module, two backup optimizers and a control loop. The fault diagnosis part identifies the current SOFC fault type, and the switching module selects the appropriate backup optimizer based on the diagnosis result. NSGA-II and TOPSIS are employed to design the two backup optimizers for the normal and air compressor fault states. The PID algorithm is used to design the control loop, which includes a power tracking controller, an anode inlet temperature controller, a cathode inlet temperature controller and an air excess ratio controller. The simulation results show that the proposed optimal fault-tolerant control method can track the power, temperature and air excess ratio at the desired values while achieving the maximum efficiency and the minimum unit cost, both under normal SOFC operation and even under an air compressor fault.

  18. Design optimization of a high specific speed Francis turbine runner

    NASA Astrophysics Data System (ADS)

    Enomoto, Y.; Kurosawa, S.; Kawajiri, H.

    2012-11-01

The Francis turbine is used in many hydroelectric power stations. This paper presents the development of hydraulic performance in a high specific speed Francis turbine runner. In order to achieve improvements in turbine efficiency throughout a wide operating range, a new runner design method, which combines the latest Computational Fluid Dynamics (CFD) and a multi-objective optimization method with an existing design system, was applied in this study. The validity of the new design system was evaluated by model performance tests. As a result, it was confirmed that the optimized runner presented higher efficiency than the originally designed runner. Besides runner optimization, the instability vibration which occurred at the high part load operating condition was investigated by model tests and gas-liquid two-phase flow analysis. It was confirmed that the instability vibration was caused by an oval cross-section whirl arising from recirculation flow near the runner cone wall.

  19. Aerodynamic optimization of supersonic compressor cascade using differential evolution on GPU

    NASA Astrophysics Data System (ADS)

    Aissa, Mohamed Hasanine; Verstraete, Tom; Vuik, Cornelis

    2016-06-01

Differential Evolution (DE) is a powerful stochastic optimization method. Compared to gradient-based algorithms, DE is able to avoid local minima but at the same time requires more function evaluations. In turbomachinery applications, function evaluations are performed with time-consuming CFD simulations, which results in a long, unaffordable design cycle. Modern High Performance Computing systems, especially Graphics Processing Units (GPUs), are able to alleviate this inconvenience by accelerating the design evaluation itself. In this work we present a validated CFD solver running on GPUs, able to accelerate the design evaluation and thus the entire design process. An achieved speedup of 20x to 30x enabled the DE algorithm to run on a high-end computer instead of a costly large cluster. The GPU-enhanced DE was used to optimize the aerodynamics of a supersonic compressor cascade, achieving an aerodynamic loss reduction of 20%.
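    The abstract does not detail the DE variant used; the following minimal DE/rand/1/bin sketch, with a cheap stand-in objective in place of the expensive CFD evaluation, shows the mutation, crossover and greedy selection steps. All names and parameter values (population size, F, CR) are illustrative assumptions.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           gens=200, seed=1):
    """Minimal DE/rand/1/bin minimizer. In the paper's setting, f would be
    a GPU-accelerated CFD evaluation of a candidate cascade geometry."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # Mutation: difference of two random members added to a third.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = [
                min(max(pop[a][j] + F * (pop[b][j] - pop[c][j]),
                        bounds[j][0]), bounds[j][1])
                if (rng.random() < CR or j == jrand) else pop[i][j]
                for j in range(dim)
            ]
            fc = f(trial)
            if fc <= cost[i]:  # greedy selection: keep the better of the two
                pop[i], cost[i] = trial, fc
    ibest = min(range(pop_size), key=cost.__getitem__)
    return pop[ibest], cost[ibest]
```

    Since every generation evaluates the whole population independently, the per-design evaluations parallelize naturally, which is why accelerating the CFD solver on a GPU shortens the entire DE design cycle.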

  1. Using genetic algorithms to achieve an automatic and global optimization of analogue methods for statistical downscaling of precipitation

    NASA Astrophysics Data System (ADS)

    Horton, Pascal; Weingartner, Rolf; Obled, Charles; Jaboyedoff, Michel

    2017-04-01

Analogue methods (AMs) rely on the hypothesis that similar situations, in terms of atmospheric circulation, are likely to result in similar local or regional weather conditions. These methods consist of sampling a certain number of past situations, based on different synoptic-scale meteorological variables (predictors), in order to construct a probabilistic prediction for a local weather variable of interest (predictand). They are often used for daily precipitation prediction, whether in the context of real-time forecasting, reconstruction of past weather conditions, or future climate impact studies. The relationship between predictors and predictands is defined by several parameters (predictor variable, spatial and temporal windows used for the comparison, analogy criteria, and number of analogues), which are often calibrated by means of a semi-automatic sequential procedure that has strong limitations. AMs may include several subsampling levels (e.g. first sorting a set of analogues in terms of circulation, then restricting to those with a similar moisture status). The parameter space of the AMs can be very complex, with substantial co-dependencies between the parameters. Thus, global optimization techniques are likely to be necessary for calibrating most AM variants, as they can optimize all parameters of all analogy levels simultaneously. Genetic algorithms (GAs) were found to be successful in finding optimal values of the AM parameters. They allow parameter interdependencies to be taken into account and enable the objective selection of parameters that were previously chosen manually (such as the pressure levels and the temporal windows of the predictor variables), thus obviating the need to assess a large number of combinations. The performance scores of the optimized methods increased compared to reference methods, even more so for days with high precipitation totals. The resulting parameters were found to be relevant and spatially coherent.

  2. Living donor liver transplantation for hepatocellular carcinoma achieves better outcomes.

    PubMed

    Lin, Chih-Che; Chen, Chao-Long

    2016-10-01

Liver transplantation (LT) for hepatocellular carcinoma (HCC) at Kaohsiung Chang Gung Memorial Hospital relies mainly on living donor LT (LDLT). Because living donors assume surgical risk, we are obligated to adopt strict selection criteria for HCC patients and to optimize pre-transplant conditions to ensure a disease-free survival as high as in patients without HCC, and even better than with deceased donor LT (DDLT). Better outcomes are attributed to excellent surgical results and optimal patient selection. The hospital mortality of primary and salvage LDLT is lower than 2% in our center. Although the Taiwan Health Insurance Policy extended the Milan criteria to the University of California, San Francisco (UCSF) criteria in 2006, selection criteria should take into account not only the morphologic size/number of tumors but also their biology. The criteria are divided into modifiable factors, namely image morphology, alpha fetoprotein (AFP), and positron emission tomography (PET) scan with standard uptake value (SUV), and unmodifiable unfavorable pathology, such as HCC combined with cholangiocarcinoma (CC), sarcomatoid type, and poor differentiation. Downstaging therapy is necessary for HCC patients beyond the criteria to meet all modifiable standards. The upper limit of downstaging treatment seems to be extendable by more effective drug-eluting transarterial chemoembolization in cases without absolute contraindications. In contrast, the pitfall of unmodifiable tumor pathology should be excluded by the findings of pretransplant core biopsy/resection if possible. More recently, achieving complete tumor necrosis in the explanted liver could almost predict no recurrence after transplant. Necrotizing therapy is advised if possible before transplant, even when the tumor status is within criteria, to minimize the possibility of tumor recurrence. LDLT with low surgical mortality in experienced centers provides the opportunity of optimizing the pre-transplant tumor conditions and the timing of transplant to achieve better

  3. Optimized quantum sensing with a single electron spin using real-time adaptive measurements

    NASA Astrophysics Data System (ADS)

    Bonato, C.; Blok, M. S.; Dinani, H. T.; Berry, D. W.; Markham, M. L.; Twitchen, D. J.; Hanson, R.

    2016-03-01

Quantum sensors based on single solid-state spins promise a unique combination of sensitivity and spatial resolution. The key challenge in sensing is to achieve minimum estimation uncertainty within a given time and with a high dynamic range. Adaptive strategies have been proposed to achieve optimal performance, but their implementation in solid-state systems has been hindered by the demanding experimental requirements. Here, we realize adaptive d.c. sensing by combining single-shot readout of an electron spin in diamond with fast feedback. By adapting the spin readout basis in real time based on previous outcomes, we demonstrate a sensitivity in Ramsey interferometry surpassing the standard measurement limit. Furthermore, we find by simulations and experiments that adaptive protocols offer a distinctive advantage over the best known non-adaptive protocols when overhead and limited estimation time are taken into account. Using an optimized adaptive protocol we achieve a magnetic field sensitivity of 6.1 ± 1.7 nT Hz^(-1/2) over a wide range of 1.78 mT. These results open up a new class of experiments for solid-state sensors in which real-time knowledge of the measurement history is exploited to obtain optimal performance.

  4. Novel multireceiver communication systems configurations based on optimal estimation theory

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra

    1992-01-01

A novel multireceiver configuration for carrier arraying and/or signal arraying is presented. The proposed configuration is obtained by formulating the carrier and/or signal arraying problem as an optimal estimation problem, and it consists of two stages. The first stage optimally estimates the various phase processes received at the different receivers with coupled phase-locked loops, wherein the individual loops acquire and track their respective receivers' phase processes but are aided by each other in an optimal manner via LF error signals. The proposed configuration results in the minimization of the effective radio loss at the combiner output, and thus maximization of the energy per bit to noise power spectral density ratio is achieved. A novel adaptive algorithm for estimating the signal model parameters when these are not known a priori is also presented.

  5. Optimal second order sliding mode control for linear uncertain systems.

    PubMed

    Das, Madhulika; Mahanta, Chitralekha

    2014-11-01

In this paper an optimal second order sliding mode controller (OSOSMC) is proposed to track a linear uncertain system. The optimal controller, based on the linear quadratic regulator method, is designed for the nominal system. An integral sliding mode controller is combined with the optimal controller to ensure robustness of the linear system, which is affected by parametric uncertainties and external disturbances. To achieve finite time convergence of the sliding mode, a nonsingular terminal sliding surface is added to the integral sliding surface, giving rise to a second order sliding mode controller. The main advantage of the proposed OSOSMC is that the control input is substantially reduced and becomes chattering free. Simulation results confirm the superiority of the proposed OSOSMC over some existing methods. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Can paying for results help to achieve the Millennium Development Goals? A critical review of selected evaluations of results-based financing.

    PubMed

    Oxman, Andrew D; Fretheim, Atle

    2009-08-01

    Results-based financing (RBF) refers to the transfer of money or material goods conditional on taking a measurable action or achieving a predetermined performance target. RBF is being promoted for helping to achieve the Millennium Development Goals (MDGs). We undertook a critical appraisal of selected evaluations of RBF schemes in the health sector in low and middle-income countries (LMIC). In addition, key informants were interviewed to identify literature relevant to the use of RBF in the health sector in LMIC, key examples, evaluations, and other key informants. The use of RBF in LMIC has commonly been a part of a package that may include increased funding, technical support, training, changes in management, and new information systems. It is not possible to disentangle the effects of financial incentives as one element of RBF schemes, and there is very limited evidence of RBF per se having an effect. RBF schemes can have unintended effects. When RBF schemes are used, they should be designed carefully, including the level at which they are targeted, the choice of targets and indicators, the type and magnitude of incentives, the proportion of financing that is paid based on results, and the ancillary components of the scheme. For RBF to be effective, it must be part of an appropriate package of interventions, and technical capacity or support must be available. RBF schemes should be monitored for possible unintended effects and evaluated using rigorous study designs. © 2009 Blackwell Publishing Asia Pty Ltd and Chinese Cochrane Center, West China Hospital of Sichuan University.

  7. Nilotinib dose-optimization in newly diagnosed chronic myeloid leukaemia in chronic phase: final results from ENESTxtnd.

    PubMed

    Hughes, Timothy P; Munhoz, Eduardo; Aurelio Salvino, Marco; Ong, Tee Chuan; Elhaddad, Alaa; Shortt, Jake; Quach, Hang; Pavlovsky, Carolina; Louw, Vernon J; Shih, Lee-Yung; Turkina, Anna G; Meillon, Luis; Jin, Yu; Acharya, Sandip; Dalal, Darshan; Lipton, Jeffrey H

    2017-10-01

    The Evaluating Nilotinib Efficacy and Safety in Clinical Trials-Extending Molecular Responses (ENESTxtnd) study was conducted to evaluate the kinetics of molecular response to nilotinib in patients with newly diagnosed chronic myeloid leukaemia in chronic phase and the impact of novel dose-optimization strategies on patient outcomes. The ENESTxtnd protocol allowed nilotinib dose escalation (from 300 to 400 mg twice daily) in the case of suboptimal response or treatment failure as well as dose re-escalation for patients with nilotinib dose reductions due to adverse events. Among 421 patients enrolled in ENESTxtnd, 70·8% (95% confidence interval, 66·2-75·1%) achieved major molecular response (BCR-ABL1 ≤ 0·1% on the International Scale) by 12 months (primary endpoint). By 24 months, 81·0% of patients achieved major molecular response, including 63·6% (56 of 88) of those with dose escalations for lack of efficacy and 74·3% (55 of 74) of those with dose reductions due to adverse events (including 43 of 54 patients with successful re-escalation). The safety profile of nilotinib was consistent with prior studies. The most common non-haematological adverse events were headache, rash, and nausea; cardiovascular events were reported in 4·5% of patients (grade 3/4, 3·1%). The study was registered at clinicaltrials.gov (NCT01254188). © 2017 The Authors. British Journal of Haematology published by John Wiley & Sons Ltd.

  8. Towards Robust Designs Via Multiple-Objective Optimization Methods

    NASA Technical Reports Server (NTRS)

    Man Mohan, Rai

    2006-01-01

Fabricating and operating complex systems involves dealing with uncertainty in the relevant variables. In the case of aircraft, flow conditions are subject to change during operation. Efficiency and engine noise may differ from the expected values because of manufacturing tolerances and normal wear and tear. Engine components may have a shorter life than expected because of manufacturing tolerances. In spite of the important effect of operating and manufacturing uncertainty on the performance and expected life of the component or system, traditional aerodynamic shape optimization has focused on obtaining the best design given a set of deterministic flow conditions. Clearly it is important both to maintain near-optimal performance levels at off-design operating conditions and to ensure that performance does not degrade appreciably when the component shape differs from the optimal shape due to manufacturing tolerances and normal wear and tear. These requirements naturally lead to the idea of robust optimal design, wherein the concept of robustness to various perturbations is built into the design optimization procedure. The basic ideas involved in robust optimal design are included in this lecture. The imposition of the additional requirement of robustness results in a multiple-objective optimization problem requiring appropriate solution procedures. Typically the costs associated with multiple-objective optimization are substantial. Therefore efficient multiple-objective optimization procedures are crucial to the rapid deployment of the principles of robust design in industry. Hence the companion set of lecture notes (Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks) deals with methodology for solving multiple-objective optimization problems efficiently, reliably and with little user intervention. Applications of the methodologies presented in the companion lecture to robust design are included here.
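    The robustness requirement turns the design task into a multiple-objective problem whose solutions form a Pareto front of non-dominated trade-offs (e.g. nominal performance versus sensitivity to perturbations). A minimal sketch of extracting that front, assuming minimization in every objective (the function name and convention are illustrative, not from the lecture notes):

```python
def pareto_front(points):
    """Return the non-dominated subset of objective vectors, assuming every
    objective is to be minimized. A point p is dominated if some other point
    q is no worse in every objective and differs from p."""
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front
```

    For instance, with objective pairs (drag at design point, performance loss off-design), a design like (3, 3) is dominated by (2, 2) and would be discarded, while the mutually incomparable designs remain on the front for the designer to choose among.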

  9. Utilization of Optimization for Design of Morphing Wing Structures for Enhanced Flight

    NASA Astrophysics Data System (ADS)

    Detrick, Matthew Scott

    Conventional aircraft control surfaces constrain maneuverability. This work is a comprehensive study that looks at both smart material and conventional actuation methods to achieve wing twist to potentially improve flight capability using minimal actuation energy while allowing minimal wing deformation under aerodynamic loading. A continuous wing is used in order to reduce drag while allowing the aircraft to more closely approximate the wing deformation used by birds while loitering. The morphing wing for this work consists of a skin supported by an underlying truss structure whose goal is to achieve a given roll moment using less actuation energy than conventional control surfaces. A structural optimization code has been written in order to achieve minimal wing deformation under aerodynamic loading while allowing wing twist under actuation. The multi-objective cost function for the optimization consists of terms that ensure small deformation under aerodynamic loading, small change in airfoil shape during wing twist, a linear variation of wing twist along the length of the wing, small deviation from the desired wing twist, minimal number of truss members, minimal wing weight, and minimal actuation energy. Hydraulic cylinders and a two member linkage driven by a DC motor are tested separately to provide actuation. Since the goal of the current work is simply to provide a roll moment, only one actuator is implemented along the wing span. Optimization is also used to find the best location within the truss structure for the actuator. The active structure produced by optimization is then compared to simulated and experimental results from other researchers as well as characteristics of conventional aircraft.

  10. Optimization of Aspergillus niger rock phosphate solubilization in solid-state fermentation and use of the resulting product as a P fertilizer.

    PubMed

    Mendes, Gilberto de Oliveira; da Silva, Nina Morena Rêgo Muniz; Anastácio, Thalita Cardoso; Vassilev, Nikolay Bojkov; Ribeiro, José Ivo; da Silva, Ivo Ribeiro; Costa, Maurício Dutra

    2015-11-01

    A biotechnological strategy for the production of an alternative P fertilizer is described in this work. The fertilizer was produced through rock phosphate (RP) solubilization by Aspergillus niger in a solid-state fermentation (SSF) with sugarcane bagasse as substrate. SSF conditions were optimized by the surface response methodology after an initial screening of factors with significant effect on RP solubilization. The optimized levels of the factors were 865 mg of biochar, 250 mg of RP, 270 mg of sucrose and 6.2 ml of water per gram of bagasse. At this optimal setting, 8.6 mg of water-soluble P per gram of bagasse was achieved, representing an increase of 2.4 times over the non-optimized condition. The optimized SSF product was partially incinerated at 350°C (SB-350) and 500°C (SB-500) to reduce its volume and, consequently, increase P concentration. The post-processed formulations of the SSF product were evaluated in a soil-plant experiment. The formulations SB-350 and SB-500 increased the growth and P uptake of common bean plants (Phaseolus vulgaris L.) when compared with the non-treated RP. Furthermore, these two formulations had a yield relative to triple superphosphate of 60% (on a dry mass basis). Besides increasing P concentration, incineration improved the SSF product performance probably by decreasing microbial immobilization of nutrients during the decomposition of the remaining SSF substrate. The process proposed is a promising alternative for the management of P fertilization since it enables the utilization of low-solubility RPs and relies on the use of inexpensive materials. © 2015 The Authors. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.

  11. Optimizing phase to enhance optical trap stiffness.

    PubMed

    Taylor, Michael A

    2017-04-03

Phase optimization offers promising capabilities in optical tweezers, allowing huge increases in the applied forces, trap stiffness, or measurement sensitivity. One key obstacle to potential applications is the lack of an efficient algorithm to compute an optimized phase profile, with enhanced trapping experiments relying on slow programs that could take up to a week to converge. Here we introduce an algorithm that reduces the wait from days to minutes. We characterize the achievable increase in trap stiffness and its dependence on particle size, refractive index, and optical polarization. We further show that phase-only control can achieve almost all of the enhancement possible with full wavefront shaping; for instance, phase control allows 62 times higher trap stiffness for 10 μm silica spheres in water, while amplitude control and non-trivial polarization further increase this by factors of 1.26 and 1.01, respectively. This algorithm will facilitate future applications in optical trapping, and more generally in wavefront optimization.

  12. Coordinated Optimization of Distributed Energy Resources and Smart Loads in Distribution Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Rui; Zhang, Yingchen

    2016-11-14

Distributed energy resources (DERs) and smart loads have the potential to provide flexibility to distribution system operation. A coordinated optimization approach is proposed in this paper to actively manage DERs and smart loads in distribution systems to achieve the optimal operation status. A three-phase unbalanced Optimal Power Flow (OPF) problem is developed to determine the output from DERs and smart loads with respect to the system operator's control objective. This paper focuses on coordinating PV systems and smart loads to improve the overall voltage profile in distribution systems. Simulations have been carried out in a 12-bus distribution feeder, and the results illustrate the superior control performance of the proposed approach.

  13. New reflective symmetry design capability in the JPL-IDEAS Structure Optimization Program

    NASA Technical Reports Server (NTRS)

    Strain, D.; Levy, R.

    1986-01-01

    The JPL-IDEAS antenna structure analysis and design optimization computer program was modified to process half structure models of symmetric structures subjected to arbitrary external static loads, synthesize the performance, and optimize the design of the full structure. Significant savings in computation time and cost (more than 50%) were achieved compared to the cost of full model computer runs. The addition of the new reflective symmetry analysis design capabilities to the IDEAS program allows processing of structure models whose size would otherwise prevent automated design optimization. The new program produced synthesized full model iterative design results identical to those of actual full model program executions at substantially reduced cost, time, and computer storage.

  14. An optimized time varying filtering based empirical mode decomposition method with grey wolf optimizer for machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei

    2018-03-01

    A time varying filtering based empirical mode decomposition (EMD) (TVF-EMD) method was proposed recently to solve the mode mixing problem of the EMD method. Compared with classical EMD, TVF-EMD was proven to improve the frequency separation performance and to be robust to noise interference. However, the decomposition parameters (i.e., bandwidth threshold and B-spline order) significantly affect the decomposition results of this method. In the original TVF-EMD method, the parameter values are assigned in advance, which makes it difficult to achieve satisfactory analysis results. To solve this problem, this paper develops an optimized TVF-EMD method based on the grey wolf optimizer (GWO) algorithm for fault diagnosis of rotating machinery. Firstly, a measurement index termed the weighted kurtosis index is constructed from the kurtosis index and the correlation coefficient. Subsequently, the optimal TVF-EMD parameters that match the input signal can be obtained by the GWO algorithm using the maximum weighted kurtosis index as the objective function. Finally, fault features can be extracted by analyzing the sensitive intrinsic mode function (IMF) having the maximum weighted kurtosis index. Simulations and comparisons highlight the performance of the TVF-EMD method for signal decomposition, and verify that the bandwidth threshold and B-spline order are critical to the decomposition results. Two case studies on rotating machinery fault diagnosis demonstrate the effectiveness and advantages of the proposed method.
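A weighted kurtosis objective of the kind described above can be sketched in a few lines. This is an illustrative formulation combining the kurtosis index with the correlation coefficient; the paper's exact weighting may differ.

```python
import numpy as np

def weighted_kurtosis_index(imf, signal):
    # Plain (non-excess) kurtosis of the IMF, scaled by the absolute
    # correlation between the IMF and the raw signal. One plausible
    # formulation of the index; the authors' exact weighting may differ.
    x = imf - imf.mean()
    kurt = np.mean(x ** 4) / np.mean(x ** 2) ** 2
    rho = np.corrcoef(imf, signal)[0, 1]
    return kurt * abs(rho)

def select_sensitive_imf(imfs, signal):
    # The sensitive IMF is the one maximizing the weighted kurtosis index.
    scores = [weighted_kurtosis_index(imf, signal) for imf in imfs]
    return int(np.argmax(scores))
```

In the GWO loop, the negative of this index for the best IMF would serve as the cost to minimize over candidate (bandwidth threshold, B-spline order) pairs.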

  15. Multidisciplinary design optimization using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1994-01-01

    Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient-based optimizers is their need for gradient information; consequently, design problems that include discrete variables cannot be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GA) uses a search procedure which is fundamentally different from gradient-based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than finding the best solution. GA are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GA are attractive since they use only objective function values in the search process, so gradient calculations are avoided. Hence, GA are able to deal with discrete variables. Studies report success in the use of GA for aircraft design optimization studies, trajectory analysis, space structure design and control systems design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared
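The mechanics described in the abstract (fitness-based selection, one-point crossover, mutation over discrete variables) can be sketched as follows. The fitness function, variable domains, and parameter values are illustrative assumptions, not those of the launch-vehicle study.

```python
import random

def genetic_search(fitness, domains, pop_size=30, generations=60,
                   p_mut=0.1, seed=0):
    # Minimal genetic algorithm over discrete design variables.
    # `domains` lists the allowed values for each variable, so integer
    # choices (e.g. number of engines, material index) pose no problem.
    rng = random.Random(seed)
    pop = [[rng.choice(d) for d in domains] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]          # fitness-based selection
        pop = parents[:]
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(domains))  # one-point crossover
            child = a[:cut] + b[cut:]
            for i, d in enumerate(domains):       # mutation
                if rng.random() < p_mut:
                    child[i] = rng.choice(d)
            pop.append(child)
    return max(pop, key=fitness)

# Toy problem: pick an engine count and a material index that
# maximize a mock objective peaked at (5, 2).
best = genetic_search(lambda x: -(x[0] - 5) ** 2 - (x[1] - 2) ** 2,
                      domains=[list(range(1, 10)), [0, 1, 2, 3]])
```

Note that only `fitness` values drive the search; no gradients are ever computed, which is exactly why GA can handle discrete and discontinuous design spaces.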

  16. Optimization of Contrast Detection Power with Probabilistic Behavioral Information

    PubMed Central

    Cordes, Dietmar; Herzmann, Grit; Nandy, Rajesh; Curran, Tim

    2012-01-01

    Recent progress in the experimental design for event-related fMRI experiments made it possible to find the optimal stimulus sequence for maximum contrast detection power using a genetic algorithm. In this study, a novel algorithm is proposed for optimization of contrast detection power by including probabilistic behavioral information, based on pilot data, in the genetic algorithm. As a particular application, a recognition memory task is studied and the design matrix optimized for contrasts involving the familiarity of individual items (pictures of objects) and the recollection of qualitative information associated with the items (left/right orientation). Optimization of contrast efficiency is a complicated issue whenever subjects’ responses are not deterministic but probabilistic. Contrast efficiencies are not predictable unless behavioral responses are included in the design optimization. However, available software for design optimization does not include options for probabilistic behavioral constraints. If the anticipated behavioral responses are included in the optimization algorithm, the design is optimal for the assumed behavioral responses, and the resulting contrast efficiency is greater than what either a block design or a random design can achieve. Furthermore, improvements of contrast detection power depend strongly on the behavioral probabilities, the perceived randomness, and the contrast of interest. The present genetic algorithm can be applied to any case in which fMRI contrasts are dependent on probabilistic responses that can be estimated from pilot data. PMID:22326984

  17. [Optimization of radiological scoliosis assessment].

    PubMed

    Enríquez, Goya; Piqueras, Joaquim; Catalá, Ana; Oliva, Glòria; Ruiz, Agustí; Ribas, Montserrat; Duran, Carmina; Rodrigo, Carlos; Rodríguez, Eugenia; Garriga, Victoria; Maristany, Teresa; García-Fontecha, César; Baños, Joan; Muchart, Jordi; Alava, Fernando

    2014-07-01

    Most cases of scoliosis are idiopathic (80%) and occur more frequently in adolescent girls. Plain radiography is the imaging method of choice, both for the initial study and follow-up studies, but has the disadvantage of using ionizing radiation. The breasts are exposed to x-rays during these repeated examinations. The authors present a range of recommendations to optimize the radiographic examination technique, for both conventional and digital x-ray settings, to prevent unnecessary radiation exposure to patients and to reduce the risk of breast cancer in patients with scoliosis. With analogue systems, leaded breast protectors should always be used, and with any radiographic equipment, analogue or digital, the examination should be performed in the postero-anterior projection with optimized low-dose techniques. The ALARA (as low as reasonably achievable) rule should always be followed to achieve diagnostic quality images with the lowest feasible dose. Copyright © 2014. Published by Elsevier Espana.

  18. Shape optimization of electrostatically driven microcantilevers using simulated annealing to enhance static travel range

    NASA Astrophysics Data System (ADS)

    Trivedi, R. R.; Joglekar, M. M.; Shimpi, R. P.; Pawaskar, D. N.

    2013-12-01

    The objective of this paper is to present a systematic development of the generic shape optimization of electrostatically actuated microcantilever beams for extending their static travel range. Electrostatic actuators are widely used in micro electro mechanical system (MEMS) devices because of low power density and ease of fabrication. However, their useful travel range is often restricted by a phenomenon known as pull-in instability. The Rayleigh-Ritz energy method is used for computation of pull-in parameters, which includes electrostatic potential and fringing field effects. Appropriate width and linear thickness functions are employed along the length of the non-prismatic beam to achieve enhanced travel range. Parameters used for varying the thickness and width functions are optimized using simulated annealing, with the pattern search method applied towards the end to refine the results. Appropriate penalties are imposed on the violation of volume, width, thickness and area constraints. Nine test cases are considered for demonstration of the said optimization method. Our results indicate that around a 26% increase in the travel range of a non-prismatic beam can be achieved after optimization compared to that of a prismatic beam having the same volume. Our results also show an improvement in the pull-in displacement of around 5% compared to that of a variable width constant thickness actuator. We show that simulated annealing is an effective and flexible method to carry out design optimization of structural elements under electrostatic loading.
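A generic simulated-annealing loop of the kind described, with constraint penalties folded into the cost function, might look like the sketch below. The toy cost, step size, and cooling schedule are illustrative assumptions, not the beam-shape formulation of the paper.

```python
import math
import random

def simulated_annealing(cost, x0, step, t0=1.0, cooling=0.995,
                        iters=5000, seed=0):
    # Generic simulated annealing: accept downhill moves always,
    # uphill moves with Boltzmann probability exp(-delta / T).
    # Constraint penalties would be folded into `cost`.
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = cost(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling          # geometric cooling schedule
    return best, fbest

# Toy problem: recover the minimum of a smooth bowl at (1, -2).
best, fbest = simulated_annealing(
    lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2,
    x0=[0.0, 0.0], step=0.5)
```

A pattern-search refinement, as in the paper, would then polish `best` with shrinking deterministic steps once the annealing temperature is low.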

  19. Optimal thresholds for the estimation of area rain-rate moments by the threshold method

    NASA Technical Reports Server (NTRS)

    Short, David A.; Shimizu, Kunio; Kedem, Benjamin

    1993-01-01

    Optimization of the threshold method, achieved by determination of the threshold that maximizes the correlation between an area-average rain-rate moment and the area coverage of rain rates exceeding the threshold, is demonstrated empirically and theoretically. Empirical results for a sequence of GATE radar snapshots show optimal thresholds of 5 and 27 mm/h for the first and second moments, respectively. Theoretical optimization of the threshold method by the maximum-likelihood approach of Kedem and Pavlopoulos (1991) predicts optimal thresholds near 5 and 26 mm/h for lognormally distributed rain rates with GATE-like parameters. The agreement between theory and observations suggests that the optimal threshold can be understood as arising due to sampling variations, from snapshot to snapshot, of a parent rain-rate distribution. Optimal thresholds for gamma and inverse Gaussian distributions are also derived and compared.
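The empirical optimization can be mimicked directly: scan candidate thresholds and keep the one that maximizes the correlation between the area-average rain-rate moment and the fractional coverage above the threshold. The lognormal toy snapshots below are an assumption standing in for the GATE radar data.

```python
import numpy as np

def optimal_threshold(snapshots, moment=1,
                      thresholds=np.arange(1.0, 40.0, 1.0)):
    # For each candidate threshold, correlate the area-average moment
    # with the fractional area exceeding the threshold across snapshots,
    # and return the threshold giving the highest correlation.
    moments = np.array([np.mean(s ** moment) for s in snapshots])
    best_t, best_r = None, -np.inf
    for t in thresholds:
        coverage = np.array([np.mean(s > t) for s in snapshots])
        if coverage.std() == 0:
            continue  # degenerate: no variation in coverage
        r = np.corrcoef(moments, coverage)[0, 1]
        if r > best_r:
            best_t, best_r = t, r
    return best_t, best_r

# Toy ensemble: lognormal snapshots loosely mimicking GATE-like
# rain-rate fields with snapshot-to-snapshot parameter variation.
rng = np.random.default_rng(1)
snaps = [rng.lognormal(mean=rng.normal(0.5, 0.4), sigma=1.2, size=2000)
         for _ in range(60)]
t_opt, r_opt = optimal_threshold(snaps, moment=1)
```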

  20. Optimal swimming of a sheet.

    PubMed

    Montenegro-Johnson, Thomas D; Lauga, Eric

    2014-06-01

    Propulsion at microscopic scales is often achieved through propagating traveling waves along hairlike organelles called flagella. Taylor's two-dimensional swimming sheet model is frequently used to provide insight into problems of flagellar propulsion. We derive numerically the large-amplitude wave form of the two-dimensional swimming sheet that yields optimum hydrodynamic efficiency: the ratio of the squared swimming speed to the rate-of-working of the sheet against the fluid. Using the boundary element method, we show that the optimal wave form is a front-back symmetric regularized cusp that is 25% more efficient than the optimal sine wave. This optimal two-dimensional shape is smooth, qualitatively different from the kinked form of Lighthill's optimal three-dimensional flagellum, not predicted by small-amplitude theory, and different from the smooth circular-arc-like shape of active elastic filaments.

  1. Women are less likely than men to achieve optimal glycemic control after 1 year of treatment: A multi-level analysis of a Korean primary care cohort.

    PubMed

    Choe, Seung-Ah; Kim, Joo Yeong; Ro, Young Sun; Cho, Sung-Il

    2018-01-01

    We investigated differences in the achievement of glycemic control among newly diagnosed type-2 diabetes patients according to gender, using a multi-clinic retrospective cohort study. Optimal glycemic control was defined as hemoglobin A1c (HbA1c) of less than 6.5% after 1 year of diabetes management. A generalized linear mixed model, which controlled for the fixed effects of baseline characteristics and prescribed oral hypoglycemic agent (OHA), was used to calculate the probability of achieving the target HbA1c. The study included 2,253 newly diagnosed type-2 diabetes patients who completed 1 year of diabetes management, including OHA, in the 36 participating primary clinics. Within the study population, the women had an older average age, were less likely to smoke or drink alcohol, and showed lower levels of fasting blood glucose and HbA1c at the time of diagnosis. There were no significant differences by sex in prescribed OHA or median number of visits. After 1 year of diabetes management, 38.9% of women and 40.6% of men achieved the target HbA1c, a small but significant difference. This suggests that type-2 diabetes is managed less well in women than in men.

  2. Reading Achievement State by State, 1999. Goal 3: Student Achievement and Citizenship.

    ERIC Educational Resources Information Center

    National Education Goals Panel (ED), Washington, DC.

    Noting that performance at the highest levels of achievement on the National Assessment of Educational Progress (NAEP) is evidence that students have demonstrated competency over challenging subject matter and achieved the third National Educational Goal, this report presents the most up-to-date results in reading achievement for the states and…

  3. New algorithms for optimal reduction of technical risks

    NASA Astrophysics Data System (ADS)

    Todinov, M. T.

    2013-06-01

    The article features exact algorithms for reduction of technical risk by (1) optimal allocation of resources in the case where the total potential loss from several sources of risk is a sum of the potential losses from the individual sources; (2) optimal allocation of resources to achieve a maximum reduction of system failure; and (3) making an optimal choice among competing risky prospects. The article demonstrates that the number of activities in a risky prospect is a key consideration in selecting the risky prospect. As a result, the maximum expected profit criterion, widely used for making risk decisions, is fundamentally flawed, because it does not consider the impact of the number of risk-reward activities in the risky prospects. A popular view, that if a single risk-reward bet with positive expected profit is unacceptable then a sequence of such identical risk-reward bets is also unacceptable, has been analysed and proved incorrect.

  4. Multi-objective optimization of chromatographic rare earth element separation.

    PubMed

    Knutson, Hans-Kristian; Holmqvist, Anders; Nilsson, Bernt

    2015-10-16

    The importance of rare earth elements in modern technological industry is growing, and as a result interest in developing separation processes is increasing. This work is part of developing chromatography as a rare earth element processing method. Process optimization is an important step in process development, and there are several competing objectives that need to be considered in a chromatographic separation process. Most studies are limited to evaluating the two competing objectives of productivity and yield, and studies of tri-objective optimizations are scarce. Tri-objective optimizations are much needed when evaluating the chromatographic separation of rare earth elements because of the importance of product pool concentration along with productivity and yield as process objectives. In this work, a multi-objective optimization strategy considering productivity, yield and pool concentration is proposed. This was carried out in the frame of a model-based optimization study on a batch chromatography separation of the rare earth elements samarium, europium and gadolinium. The findings from the multi-objective optimization were used to provide a general strategy for achieving desirable operating points, resulting in a productivity ranging between 0.61 and 0.75 kg Eu per m³ of column per hour and a pool concentration between 0.52 and 0.79 kg Eu/m³, while maintaining a purity above 99% and never falling below an 80% yield for the main target component, europium. Copyright © 2015 Elsevier B.V. All rights reserved.
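When three objectives compete, candidate operating points are usually screened for Pareto optimality before a desirable one is picked. A minimal non-dominated filter, with hypothetical (productivity, yield, pool concentration) triples all to be maximized, could look like this:

```python
def pareto_front(points):
    # Return the non-dominated subset when all objectives are maximized:
    # a point is dropped if some other point is at least as good in
    # every objective and strictly better in at least one.
    front = []
    for p in points:
        dominated = any(all(q[i] >= p[i] for i in range(len(p))) and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical (productivity, yield, pool concentration) operating points,
# loosely in the ranges reported above; not the study's actual data.
ops = [(0.75, 0.80, 0.52), (0.61, 0.95, 0.79), (0.60, 0.80, 0.50)]
front = pareto_front(ops)
```

The first two points trade productivity against yield and pool concentration, so both survive; the third is dominated by the first and is filtered out.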

  5. Noise Figure Optimization of Fully Integrated Inductively Degenerated Silicon Germanium HBT LNAs

    NASA Astrophysics Data System (ADS)

    Ibrahim, Mohamed Farhat

    Silicon germanium (SiGe) heterojunction bipolar transistors (HBTs) have the properties of producing very low noise and high gain over a wide bandwidth. Because of these properties, SiGe HBTs have continually improved and now compete with InP and GaAs HEMTs for low-noise amplification. This thesis investigates the theoretical characterization and optimization of SiGe HBT low noise amplifiers (LNAs) for low-noise low-power applications, using SiGe BiCMOS (bipolar complementary metal-oxide-semiconductor) technology. The theoretical characterization of SiGe HBT transistors is investigated through a comprehensive study of DC and small-signal transistor modeling. Based on a selected small-signal model, a noise model for the SiGe HBT transistor is produced. This noise model is used to build a cascode inductively degenerated SiGe HBT LNA circuit. The noise figure (NF) equation for this LNA is derived; it shows better than 94.4% agreement with the simulation results. With the small-signal model verified, a new analytical method for optimizing the noise figure of SiGe HBT LNA circuits is presented. The novel feature of this optimization is the inclusion of the noise contributions of the base inductor parasitic resistance, the emitter inductor parasitic resistance and the bond-wire inductor parasitic resistances. The optimization is performed by reducing the number of design variables as far as possible. This improved theoretical optimization results in LNA designs that achieve better noise figure performance compared to previously published results in bipolar and BiCMOS technologies. Different design constraints are discussed for the LNA optimization techniques. Three different LNAs are designed; all three are fully integrated and fabricated in a single chip to achieve a fully monolithic realization, and are experimentally verified. The low noise design produced a NF of 1.5 dB, S21 of 15 dB, and power consumption of 15 mW. The three LNA

  6. Functional and Structural Optimality in Plant Growth: A Crop Modelling Case Study

    NASA Astrophysics Data System (ADS)

    Caldararu, S.; Purves, D. W.; Smith, M. J.

    2014-12-01

    Simple mechanistic models of vegetation processes are essential both to our understanding of plant behaviour and to our ability to predict future changes in vegetation. One concept that can take us closer to such models is that of plant optimality, the hypothesis that plants aim to achieve an optimal state. Conceptually, plant optimality can be either structural or functional optimality. A structural constraint would mean that plants aim to achieve a certain structural characteristic such as an allometric relationship or nutrient content that allows optimal function. A functional condition refers to plants achieving optimal functionality, in most cases by maximising carbon gain. Functional optimality conditions are applied on shorter time scales and lead to higher plasticity, making plants more adaptable to changes in their environment. In contrast, structural constraints are optimal given the specific environmental conditions that plants are adapted to and offer less flexibility. We exemplify these concepts using a simple model of crop growth. The model represents annual cycles of growth from sowing date to harvest, including both vegetative and reproductive growth and phenology. Structural constraints to growth are represented as an optimal C:N ratio in all plant organs, which drives allocation throughout the vegetative growing stage. Reproductive phenology - i.e. the onset of flowering and grain filling - is determined by a functional optimality condition in the form of maximising final seed mass, so that vegetative growth stops when the plant reaches maximum nitrogen or carbon uptake. We investigate the plants' response to variations in environmental conditions within these two optimality constraints and show that final yield is most affected by changes during vegetative growth which affect the structural constraint.

  7. The Effect of Primary School Size on Academic Achievement

    ERIC Educational Resources Information Center

    Gershenson, Seth; Langbein, Laura

    2015-01-01

    Evidence on optimal school size is mixed. We estimate the effect of transitory changes in school size on the academic achievement of fourth-and fifth-grade students in North Carolina using student-level longitudinal administrative data. Estimates of value-added models that condition on school-specific linear time trends and a variety of…

  8. Development of a method of robust rain gauge network optimization based on intensity-duration-frequency results

    NASA Astrophysics Data System (ADS)

    Chebbi, A.; Bargaoui, Z. K.; da Conceição Cunha, M.

    2012-12-01

    Based on rainfall intensity-duration-frequency (IDF) curves, a robust optimization approach is proposed to identify the best locations to install new rain gauges. The advantage of robust optimization is that the resulting design solutions yield networks which behave acceptably under hydrological variability. Robust optimization can overcome the problem of selecting representative rainfall events when building the optimization process. This paper reports an original approach based on Montana IDF model parameters. The latter are assumed to be geostatistical variables and their spatial interdependence is taken into account through the adoption of cross-variograms in the kriging process. The problem of optimally locating a fixed number of new monitoring stations based on an existing rain gauge network is addressed. The objective function is based on the mean spatial kriging variance and rainfall variogram structure, using a variance-reduction method. Hydrological variability was taken into account by considering and implementing several return periods to define the robust objective function. Variance minimization is performed using a simulated annealing algorithm. In addition, knowledge of the time horizon is needed for the computation of the robust objective function. A short and a long term horizon were studied, and optimal networks are identified for each. The method developed is applied to northern Tunisia (area = 21,000 km²). Data inputs for the variogram analysis were IDF curves provided by the hydrological bureau and available for 14 tipping bucket type rain gauges. The recording period was from 1962 to 2001, depending on the station. The study concerns a hypothetical network augmentation based on the network configuration in 1973, which is a very significant year in Tunisia because there was an exceptional regional flood event in March 1973. This network consisted of 13 stations and did not meet World Meteorological Organization (WMO) recommendations for the minimum

  9. Stochastic simulation and robust design optimization of integrated photonic filters

    NASA Astrophysics Data System (ADS)

    Weng, Tsui-Wei; Melati, Daniele; Melloni, Andrea; Daniel, Luca

    2017-01-01

    Manufacturing variations are becoming an unavoidable issue in modern fabrication processes; therefore, it is crucial to be able to include stochastic uncertainties in the design phase. In this paper, integrated photonic coupled ring resonator filters are considered as an example of significant interest. The sparsity structure in photonic circuits is exploited to construct a sparse combined generalized polynomial chaos model, which is then used to analyze related statistics and perform robust design optimization. Simulation results show that the optimized circuits are more robust to fabrication process variations and achieve a reduction of 11%-35% in the mean square errors of the 3 dB bandwidth compared to unoptimized nominal designs.

  10. Optimization of radioactive sources to achieve the highest precision in three-phase flow meters using Jaya algorithm.

    PubMed

    Roshani, G H; Karami, A; Khazaei, A; Olfateh, A; Nazemi, E; Omidi, M

    2018-05-17

    The gamma ray source plays a very important role in the precision of multi-phase flow metering. In this study, different combinations of gamma ray sources (133Ba-137Cs, 133Ba-60Co, 241Am-137Cs, 241Am-60Co, 133Ba-241Am and 60Co-137Cs) were investigated in order to optimize the three-phase flow meter. The three phases were water, oil and gas, and the flow regime was considered annular. The required data were numerically generated using the MCNP-X code, a Monte Carlo code. Specifically, the present study aims to forecast the volume fractions in the annular three-phase flow, based on a multi-energy metering system including various radiation sources and one NaI detector, using a hybrid model of an artificial neural network and the Jaya optimization algorithm. Since the summation of the volume fractions is constant, a constrained modeling problem exists, meaning that the hybrid model must forecast only two volume fractions. Six hybrid models, associated with the number of radiation sources used, are designed. The models are employed to forecast the gas and water volume fractions. The next step is to train the hybrid models on the numerically obtained data. The results show that the best forecast results are obtained for the gas and water volume fractions of the system including 241Am-137Cs as the radiation source. Copyright © 2018 Elsevier Ltd. All rights reserved.
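The sum-to-one constraint mentioned above means a model need only forecast two of the three phase fractions; the third follows by closure. A tiny hypothetical helper (not from the paper) makes this explicit:

```python
def volume_fractions(gas_frac, water_frac):
    # Given forecast gas and water fractions, recover the oil fraction
    # by closure, since the three fractions must sum to one.
    # Hypothetical helper illustrating the constraint described above.
    if not (0.0 <= gas_frac <= 1.0 and 0.0 <= water_frac <= 1.0
            and gas_frac + water_frac <= 1.0):
        raise ValueError("inconsistent fractions")
    oil_frac = 1.0 - gas_frac - water_frac
    return {"gas": gas_frac, "water": water_frac, "oil": oil_frac}

fracs = volume_fractions(0.30, 0.45)   # oil = 0.25 by closure
```

This is why the hybrid models in the study output only two fractions: any valid pair determines the third exactly.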

  11. A concept analysis of optimality in perinatal health.

    PubMed

    Kennedy, Holly Powell

    2006-01-01

    This analysis was conducted to describe the concept of optimality and its appropriateness for perinatal health care. The concept was identified in 24 scientific disciplines. Across all disciplines, the universal definition of optimality is the robust, efficient, and cost-effective achievement of best possible outcomes within a rule-governed framework. Optimality, specifically defined for perinatal health care, is the maximal perinatal outcome with minimal intervention placed against the context of the woman's social, medical, and obstetric history.

  12. Optimizing single-nanoparticle two-photon microscopy by in situ adaptive control of femtosecond pulses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Donghai; Deng, Yongkai; Chu, Saisai

    2016-07-11

    Single-nanoparticle two-photon microscopy shows great application potential in super-resolution cell imaging. Here, we report in situ adaptive optimization of single-nanoparticle two-photon luminescence signals by phase and polarization modulations of broadband laser pulses. For polarization-independent quantum dots, phase-only optimization was carried out to compensate the phase dispersion at the focus of the objective. Enhancement of the two-photon excitation fluorescence intensity under dispersion-compensated femtosecond pulses was achieved. For a polarization-dependent single gold nanorod, in situ polarization optimization resulted in greater enhancement of two-photon photoluminescence intensity than phase-only optimization. The application of in situ adaptive control of femtosecond pulses provides a way for object-oriented optimization of single-nanoparticle two-photon microscopy for its future applications.

  13. A Technical Survey on Optimization of Processing Geo Distributed Data

    NASA Astrophysics Data System (ADS)

    Naga Malleswari, T. Y. J.; Ushasukhanya, S.; Nithyakalyani, A.; Girija, S.

    2018-04-01

    With growing cloud services and technology, there is growth in geographically distributed data centers that store large amounts of data. Analysis of geo-distributed data is required by various services for data processing, storage of essential information, etc.; processing this geo-distributed data and performing analytics on it is a challenging task. The distributed data processing is accompanied by issues in storage, computation and communication, the key ones being time efficiency, cost minimization and utility maximization. This paper describes various optimization methods, such as end-to-end multiphase and G-MR, using techniques like MapReduce, CDS (Community Detection based Scheduling), ROUT, Workload-Aware Scheduling, SAGE and AMP (Ant Colony Optimization) to handle these issues. In this paper the various optimization methods and techniques used are analyzed. It has been observed that end-to-end multiphase achieves time efficiency; cost minimization concentrates on achieving Quality of Service and reducing computation and communication cost. SAGE achieves performance improvement in processing geo-distributed data sets.

  14. Optimizing outcome after cardiac arrest.

    PubMed

    Nolan, Jerry P

    2011-10-01

    To discuss recent data relating to survival rates after cardiac arrest and interventions that can be used to optimize outcome. A recent analysis of 70 studies indicates that following out-of-hospital cardiac arrest (OHCA), 7.6% of patients will survive to hospital discharge (95% confidence interval 6.7-8.4). Following in-hospital cardiac arrest, 18% of patients will survive to hospital discharge. Survival may be optimized by increasing the rate of bystander cardiopulmonary resuscitation (CPR), which can be achieved by improving recognition of cardiac arrest, simplifying CPR and training more of the community. Feedback systems improve the quality of CPR but this has yet to be translated into improved outcome. One study has shown improved survival following OHCA with active compression-decompression CPR combined with an impedance-threshold device. In those who have no obvious extracardiac cause of OHCA, 70% have at least one significant coronary lesion demonstrable by coronary angiography. Although generally reserved for those with ST-elevation myocardial infarction, primary percutaneous coronary intervention may also benefit OHCA survivors with ECG patterns other than ST elevation. The term 'mild therapeutic hypothermia' has been replaced by the term 'targeted temperature management'; its role in optimizing outcome after cardiac arrest continues to be defined. In several centres, survival rates following OHCA are increasing. All links in the chain of survival must be optimized if a good-quality neurological outcome is to be achieved.

  15. Optimal In-Hospital and Discharge Medical Therapy in Acute Coronary Syndromes in Kerala: Results from the Kerala ACS Registry

    PubMed Central

    Huffman, Mark D; Prabhakaran, Dorairaj; Abraham, AK; Krishnan, Mangalath Narayanan; Nambiar, C. Asokan; Mohanan, Padinhare Purayil

    2013-01-01

    Background In-hospital and post-discharge treatment rates for acute coronary syndrome (ACS) remain low in India. However, little is known about the prevalence and predictors of the package of optimal ACS medical care in India. Our objective was to define the prevalence, predictors, and impact of optimal in-hospital and discharge medical therapy in the Kerala ACS Registry of 25,718 admissions. Methods and Results We defined optimal in-hospital ACS medical therapy as receiving the following five medications: aspirin, clopidogrel, heparin, beta-blocker, and statin. We defined optimal discharge ACS medical therapy as receiving all of the above therapies except heparin. Comparisons by optimal vs. non-optimal ACS care were made via Student’s t test for continuous variables and chi-square test for categorical variables. We created random effects logistic regression models to evaluate the association between GRACE risk score variables and optimal in-hospital or discharge medical therapy. Optimal in-hospital and discharge medical care was delivered in 40% and 46% of admissions, respectively. Wide variability in both in-hospital and discharge medical care was present with few hospitals reaching consistently high (>90%) levels. Patients receiving optimal in-hospital medical therapy had an adjusted OR (95%CI)=0.93 (0.71, 1.22) for in-hospital death and an adjusted OR (95%CI)=0.79 (0.63, 0.99) for MACE. Patients who received optimal in-hospital medical care were far more likely to receive optimal discharge care (adjusted OR [95%CI]=10.48 [9.37, 11.72]). Conclusions Strategies to improve in-hospital and discharge medical therapy are needed to improve local process-of-care measures and improve ACS outcomes in Kerala. PMID:23800985

  16. Joint optimization of regional water-power systems

    NASA Astrophysics Data System (ADS)

    Pereira-Cardenal, Silvio J.; Mo, Birger; Gjelsvik, Anders; Riegels, Niels D.; Arnbjerg-Nielsen, Karsten; Bauer-Gottwein, Peter

    2016-06-01

    Energy and water resources systems are tightly coupled; energy is needed to deliver water and water is needed to extract or produce energy. Growing pressure on these resources has raised concerns about their long-term management and highlights the need to develop integrated solutions. A method for joint optimization of water and electric power systems was developed in order to identify methodologies to assess the broader interactions between water and energy systems. The proposed method is to include water users and power producers into an economic optimization problem that minimizes the cost of power production and maximizes the benefits of water allocation, subject to constraints from the power and hydrological systems. The method was tested on the Iberian Peninsula using simplified models of the seven major river basins and the power market. The optimization problem was successfully solved using stochastic dual dynamic programming. The results showed that current water allocation to hydropower producers in basins with high irrigation productivity, and to irrigation users in basins with high hydropower productivity was sub-optimal. Optimal allocation was achieved by managing reservoirs in very distinct ways, according to the local inflow, storage capacity, hydropower productivity, and irrigation demand and productivity. This highlights the importance of appropriately representing the water users' spatial distribution and marginal benefits and costs when allocating water resources optimally. The method can handle further spatial disaggregation and can be extended to include other aspects of the water-energy nexus.

  17. Energy optimization for upstream data transfer in 802.15.4 beacon-enabled star formulation

    NASA Astrophysics Data System (ADS)

    Liu, Hua; Krishnamachari, Bhaskar

    2008-08-01

Energy saving is one of the major concerns for low-rate personal area networks. This paper models energy consumption for beacon-enabled, time-slotted medium access control combined with sleep scheduling in a star network formation under the IEEE 802.15.4 standard. We investigate two different upstream (data transfer from devices to a network coordinator) strategies: a) tracking strategy: the devices wake up and check status (track the beacon) in each time slot; b) non-tracking strategy: nodes wake up only upon data arrival and stay awake until the data are transmitted to the coordinator. We consider the tradeoff between energy cost and average data transmission delay for both strategies. Both scenarios are formulated as optimization problems and the optimal solutions are discussed. Our results show that different data arrival rates and system parameters (such as the contention access period interval, upstream speed, etc.) lead to different strategies in terms of energy optimization under maximum delay constraints. Hence, according to the application and system settings, each node may choose a different strategy to achieve energy optimization from both a self-interested and a system-wide viewpoint. We give the relations among the tunable parameters through formulas and plots to illustrate which strategy is better under the corresponding parameters. Two main points are emphasized in our results with delay constraints: on one hand, when the system settings are fixed by the coordinator, nodes in the network can intelligently change their strategies according to the application's data arrival rate; on the other hand, when the nodes' applications are known by the coordinator, the coordinator can tune the system parameters to achieve optimal system energy consumption.

  18. Optimization of solid-state fermentation conditions for Trichoderma harzianum using an orthogonal test.

    PubMed

    Zhang, J D; Yang, Q

    2015-03-13

The aim of this study was to develop a protocol for the production of fungal bio-pesticides with high efficiency, low cost, and non-polluting fermentation, while also increasing their survival rate under field conditions. This is the first study to develop biocontrol Trichoderma harzianum transformants TS1 that are resistant to benzimidazole fungicides. Agricultural corn stover and wheat bran waste were used as the medium and inducing carbon source for solid fermentation. Spore production was observed, and the method was optimized using single-factor tests with 4 factors at 3 levels in an orthogonal experimental design to determine the optimal culture conditions for T. harzianum TS1. In this step, we determined the best conditions for fermenting the biocontrol fungi. The optimal culture conditions for T. harzianum TS1 were a cultivation time of 8 days, a straw-to-wheat-bran ratio of 1:3, ammonium persulfate as the nitrogen source, and a water content of 30 mL. Under these optimal conditions, the sporulation of T. harzianum TS1 reached 1.49 × 10^10 CFU/g, 1.46-fold higher than that achieved before optimization. Increased sporulation of T. harzianum TS1 results in better utilization of space and nutrients to achieve control of plant pathogens. This method also allows for the recycling of agricultural waste straw.
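The 4-factor, 3-level orthogonal design used in the study can be made concrete with a standard L9(3^4) array, which covers four factors at three levels in only nine runs instead of 3^4 = 81. The sketch below uses purely hypothetical response values (not the study's data) to show how range analysis picks the best level for each factor:

```python
# Standard L9(3^4) orthogonal array: 9 runs, 4 factors, levels 0..2.
# Every pair of columns contains each of the 9 level combinations exactly once.
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

def best_levels(results):
    """Range analysis: for each factor, average the response over the runs
    at each level and pick the level with the highest mean response."""
    best = []
    for f in range(4):
        means = []
        for level in range(3):
            vals = [results[i] for i, run in enumerate(L9) if run[f] == level]
            means.append(sum(vals) / len(vals))
        best.append(max(range(3), key=lambda l: means[l]))
    return best

# Hypothetical sporulation responses for the 9 runs (arbitrary units)
responses = [5.1, 6.0, 4.8, 7.2, 5.5, 6.1, 6.8, 7.0, 5.9]
print(best_levels(responses))  # → [2, 0, 1, 2]
```

The result is the recommended level index per factor; a confirmation run at that combination is then performed, as in the study.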

  19. Nozzle Mounting Method Optimization Based on Robot Kinematic Analysis

    NASA Astrophysics Data System (ADS)

    Chen, Chaoyue; Liao, Hanlin; Montavon, Ghislain; Deng, Sihao

    2016-08-01

Nowadays, the application of industrial robots in thermal spray is gaining more and more importance. A desired coating quality depends on factors such as balanced robot performance, a uniform scanning trajectory and stable parameters (e.g. nozzle speed, scanning step, spray angle, standoff distance). These factors also affect the mass and heat transfer as well as the coating formation. Thus, the kinematic optimization of all these aspects plays a key role in obtaining an optimal coating quality. In this study, the robot performance was optimized from the aspect of nozzle mounting on the robot. An optimized mounting for a type F4 nozzle was designed on the basis of the conventional mounting method, analyzed from the point of view of robot kinematics, and validated on a virtual robot. Robot kinematic parameters were obtained from the simulation by offline programming software and analyzed by statistical methods. The energy consumption of different nozzle mounting methods was also compared. The results showed that it was possible to reasonably assign the amount of robot motion to each axis during the process, thereby achieving a constant nozzle speed. Thus, it is possible to optimize robot performance and to economize robot energy.

  20. Self-Contained Automated Methodology for Optimal Flow Control

    NASA Technical Reports Server (NTRS)

Joslin, Ronald D.; Gunzburger, Max D.; Nicolaides, Roy A.; Erlebacher, Gordon; Hussaini, M. Yousuff

    1997-01-01

    This paper describes a self-contained, automated methodology for active flow control which couples the time-dependent Navier-Stokes system with an adjoint Navier-Stokes system and optimality conditions from which optimal states, i.e., unsteady flow fields and controls (e.g., actuators), may be determined. The problem of boundary layer instability suppression through wave cancellation is used as the initial validation case to test the methodology. Here, the objective of control is to match the stress vector along a portion of the boundary to a given vector; instability suppression is achieved by choosing the given vector to be that of a steady base flow. Control is effected through the injection or suction of fluid through a single orifice on the boundary. The results demonstrate that instability suppression can be achieved without any a priori knowledge of the disturbance, which is significant because other control techniques have required some knowledge of the flow unsteadiness such as frequencies, instability type, etc. The present methodology has been extended to three dimensions and may potentially be applied to separation control, re-laminarization, and turbulence control applications using one to many sensors and actuators.

  1. A Scalable and Robust Multi-Agent Approach to Distributed Optimization

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan

    2005-01-01

Modularizing a large optimization problem so that the solutions to the subproblems provide a good overall solution is a challenging problem. In this paper we present a multi-agent approach to this problem based on aligning the agent objectives with the system objectives, obviating the need to impose external mechanisms to achieve collaboration among the agents. This approach naturally addresses scaling and robustness issues by ensuring that the agents do not rely on the reliable operation of other agents. We test this approach in the difficult distributed optimization problem of imperfect device subset selection [Challet and Johnson, 2002]. In this problem, there are n devices, each of which has a "distortion", and the task is to find the subset of those n devices that minimizes the average distortion. Our results show that in large systems (1000 agents) the proposed approach provides improvements of over an order of magnitude over both traditional optimization methods and traditional multi-agent methods. Furthermore, the results show that even in extreme cases of agent failures (i.e., half the agents fail midway through the simulation) the system remains coordinated and still outperforms a failure-free and centralized optimization algorithm.
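The device subset-selection task described above — choose the subset of signed distortions whose mean has the smallest magnitude — can be baselined with a simple centralized local search. The sketch below is an illustrative single-process baseline under assumed Gaussian distortions, not the paper's multi-agent method:

```python
import random

def subset_cost(distortions, subset):
    """Cost = magnitude of the mean distortion over the chosen devices."""
    chosen = [distortions[i] for i, used in enumerate(subset) if used]
    if not chosen:
        return float("inf")
    return abs(sum(chosen) / len(chosen))

def local_search(distortions, iters=2000, seed=0):
    """Flip one device's membership at a time, keeping non-worsening moves."""
    rng = random.Random(seed)
    n = len(distortions)
    subset = [rng.random() < 0.5 for _ in range(n)]
    best = subset_cost(distortions, subset)
    for _ in range(iters):
        i = rng.randrange(n)
        subset[i] = not subset[i]
        cost = subset_cost(distortions, subset)
        if cost <= best:
            best = cost
        else:
            subset[i] = not subset[i]  # undo the worsening flip
    return subset, best

rng = random.Random(42)
distortions = [rng.gauss(0.0, 1.0) for _ in range(50)]
subset, cost = local_search(distortions)
print(cost)  # near-zero mean distortion
```

A centralized search like this is exactly what the multi-agent formulation tries to decompose: each agent controls one membership bit and acts on an objective aligned with the global cost.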

  2. Performance Optimization of Marine Science and Numerical Modeling on HPC Cluster

    PubMed Central

    Yang, Dongdong; Yang, Hailong; Wang, Luming; Zhou, Yucong; Zhang, Zhiyuan; Wang, Rui; Liu, Yi

    2017-01-01

Marine science and numerical modeling (MASNUM) is widely used in forecasting ocean wave movement by simulating the variation tendency of ocean waves. Although existing work has improved the performance of MASNUM in various respects, considerable room remains for further improvement. In this paper, we aim at improving the performance of the propagation solver and data access during the simulation, in addition to the efficiency of output I/O and load balance. Our optimizations include several effective techniques such as algorithm redesign, load distribution optimization, parallel I/O and data access optimization. The experimental results demonstrate that our approach achieves higher performance than the state-of-the-art work, about a 3.5x speedup without degrading prediction accuracy. In addition, a parameter sensitivity analysis shows our optimizations are effective under various topography resolutions and output frequencies. PMID:28045972

  3. Review of design optimization methods for turbomachinery aerodynamics

    NASA Astrophysics Data System (ADS)

    Li, Zhihui; Zheng, Xinqian

    2017-08-01

In today's competitive environment, new turbomachinery designs need to be not only more efficient, quieter, and "greener", but also need to be developed on much shorter time scales and at lower costs. A number of advanced optimization strategies have been developed to achieve these requirements. This paper reviews recent progress in turbomachinery design optimization to solve real-world aerodynamic problems, especially for compressors and turbines. This review covers the following topics that are important for optimizing turbomachinery designs: (1) optimization methods, (2) stochastic optimization combined with blade parameterization methods and design-of-experiment methods, (3) gradient-based optimization methods for compressors and turbines, and (4) data mining techniques for Pareto fronts. We also present our own insights regarding current research trends and the future optimization of turbomachinery designs.

  4. Progress on Optimizing Miscanthus Biomass Production for the European Bioeconomy: Results of the EU FP7 Project OPTIMISC.

    PubMed

    Lewandowski, Iris; Clifton-Brown, John; Trindade, Luisa M; van der Linden, Gerard C; Schwarz, Kai-Uwe; Müller-Sämann, Karl; Anisimov, Alexander; Chen, C-L; Dolstra, Oene; Donnison, Iain S; Farrar, Kerrie; Fonteyne, Simon; Harding, Graham; Hastings, Astley; Huxley, Laurie M; Iqbal, Yasir; Khokhlov, Nikolay; Kiesel, Andreas; Lootens, Peter; Meyer, Heike; Mos, Michal; Muylle, Hilde; Nunn, Chris; Özgüven, Mensure; Roldán-Ruiz, Isabel; Schüle, Heinrich; Tarakanov, Ivan; van der Weijde, Tim; Wagner, Moritz; Xi, Qingguo; Kalinina, Olena

    2016-01-01

    This paper describes the complete findings of the EU-funded research project OPTIMISC, which investigated methods to optimize the production and use of miscanthus biomass. Miscanthus bioenergy and bioproduct chains were investigated by trialing 15 diverse germplasm types in a range of climatic and soil environments across central Europe, Ukraine, Russia, and China. The abiotic stress tolerances of a wider panel of 100 germplasm types to drought, salinity, and low temperatures were measured in the laboratory and a field trial in Belgium. A small selection of germplasm types was evaluated for performance in grasslands on marginal sites in Germany and the UK. The growth traits underlying biomass yield and quality were measured to improve regional estimates of feedstock availability. Several potential high-value bioproducts were identified. The combined results provide recommendations to policymakers, growers and industry. The major technical advances in miscanthus production achieved by OPTIMISC include: (1) demonstration that novel hybrids can out-yield the standard commercially grown genotype Miscanthus x giganteus; (2) characterization of the interactions of physiological growth responses with environmental variation within and between sites; (3) quantification of biomass-quality-relevant traits; (4) abiotic stress tolerances of miscanthus genotypes; (5) selections suitable for production on marginal land; (6) field establishment methods for seeds using plugs; (7) evaluation of harvesting methods; and (8) quantification of energy used in densification (pellet) technologies with a range of hybrids with differences in stem wall properties. End-user needs were addressed by demonstrating the potential of optimizing miscanthus biomass composition for the production of ethanol and biogas as well as for combustion. The costs and life-cycle assessment of seven miscanthus-based value chains, including small- and large-scale heat and power, ethanol, biogas, and insulation

  5. Optimization of chiral lattice based metastructures for broadband vibration suppression using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Abdeljaber, Osama; Avci, Onur; Inman, Daniel J.

    2016-05-01

    One of the major challenges in civil, mechanical, and aerospace engineering is to develop vibration suppression systems with high efficiency and low cost. Recent studies have shown that high damping performance at broadband frequencies can be achieved by incorporating periodic inserts with tunable dynamic properties as internal resonators in structural systems. Structures featuring these kinds of inserts are referred to as metamaterials inspired structures or metastructures. Chiral lattice inserts exhibit unique characteristics such as frequency bandgaps which can be tuned by varying the parameters that define the lattice topology. Recent analytical and experimental investigations have shown that broadband vibration attenuation can be achieved by including chiral lattices as internal resonators in beam-like structures. However, these studies have suggested that the performance of chiral lattice inserts can be maximized by utilizing an efficient optimization technique to obtain the optimal topology of the inserted lattice. In this study, an automated optimization procedure based on a genetic algorithm is applied to obtain the optimal set of parameters that will result in chiral lattice inserts tuned properly to reduce the global vibration levels of a finite-sized beam. Genetic algorithms are considered in this study due to their capability of dealing with complex and insufficiently understood optimization problems. In the optimization process, the basic parameters that govern the geometry of periodic chiral lattices including the number of circular nodes, the thickness of the ligaments, and the characteristic angle are considered. Additionally, a new set of parameters is introduced to enable the optimization process to explore non-periodic chiral designs. Numerical simulations are carried out to demonstrate the efficiency of the optimization process.
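A genetic-algorithm loop of the general kind applied above can be sketched compactly. The operators, rates, and sphere objective below are illustrative assumptions, not the study's lattice-specific encoding of node counts, ligament thickness, and characteristic angle:

```python
import random

def genetic_search(f, bounds, pop_size=30, gens=60, seed=3):
    """Minimal real-coded GA: truncation selection of the best half,
    uniform crossover, gaussian mutation of one gene per child (a sketch)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=f)[: pop_size // 2]   # survivors
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            i = rng.randrange(dim)                     # mutate one gene
            lo, hi = bounds[i]
            child[i] = min(max(child[i] + rng.gauss(0.0, 0.02 * (hi - lo)), lo), hi)
            children.append(child)
        pop = elite + children
    return min(pop, key=f)

sphere = lambda x: sum(v * v for v in x)
best = genetic_search(sphere, [(-5.0, 5.0)] * 4)
print(sphere(best))  # small
```

In the study's setting, the genome would instead encode the (possibly non-periodic) lattice parameters, and the fitness would come from a vibration simulation of the beam.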

  6. Thermal optimality of net ecosystem exchange of carbon dioxide and underlying mechanisms

    USDA-ARS?s Scientific Manuscript database

    It has been well established that individual organisms can acclimate and adapt to temperature change to optimize their performance (i.e., achieve thermal optimality). However, whether ecosystems with an assembly of organisms would also undergo thermal optimization has not been examined on a broader ...

  7. Parameter Selection and Performance Comparison of Particle Swarm Optimization in Sensor Networks Localization.

    PubMed

    Cui, Huanqing; Shu, Minglei; Song, Min; Wang, Yinglong

    2017-03-01

    Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors' memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen elaborately to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Secondly, it presents parameter selection of nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Thirdly, it comprehensively compares the performance of these algorithms. The results show that the particle swarm optimization with constriction coefficient using ring topology outperforms other variants and swarm topologies, and it performs better than the second-order cone programming algorithm.
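The best-performing variant identified above — PSO with Clerc's constriction coefficient and a ring (lbest) topology — can be sketched as follows. The swarm size, bounds, and sphere objective are illustrative assumptions rather than the paper's localization setup:

```python
import math
import random

def pso_ring(f, dim, n_particles=20, iters=200, bounds=(-5.0, 5.0), seed=1):
    """PSO with Clerc's constriction coefficient and a ring (lbest) topology."""
    rng = random.Random(seed)
    phi1 = phi2 = 2.05
    phi = phi1 + phi2                                             # 4.1
    chi = 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi)) # ~0.7298
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                                         # personal bests
    pf = [f(x) for x in X]
    for _ in range(iters):
        for i in range(n_particles):
            # local best among ring neighbours {i-1, i, i+1}
            nbrs = [(i - 1) % n_particles, i, (i + 1) % n_particles]
            g = min(nbrs, key=lambda j: pf[j])
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = chi * (V[i][d]
                                 + phi1 * r1 * (P[i][d] - X[i][d])
                                 + phi2 * r2 * (P[g][d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)
            fx = f(X[i])
            if fx < pf[i]:
                pf[i], P[i] = fx, X[i][:]
    best = min(range(n_particles), key=lambda j: pf[j])
    return P[best], pf[best]

sphere = lambda x: sum(v * v for v in x)
pos, val = pso_ring(sphere, dim=3)
print(val)  # close to 0
```

The ring topology slows information flow between particles relative to a global-best swarm, which is what gives it the robustness the comparison reports.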

  8. Parameter Selection and Performance Comparison of Particle Swarm Optimization in Sensor Networks Localization

    PubMed Central

    Cui, Huanqing; Shu, Minglei; Song, Min; Wang, Yinglong

    2017-01-01

    Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors’ memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen elaborately to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Secondly, it presents parameter selection of nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Thirdly, it comprehensively compares the performance of these algorithms. The results show that the particle swarm optimization with constriction coefficient using ring topology outperforms other variants and swarm topologies, and it performs better than the second-order cone programming algorithm. PMID:28257060

  9. Optimal robust control strategy of a solid oxide fuel cell system

    NASA Astrophysics Data System (ADS)

    Wu, Xiaojuan; Gao, Danhui

    2018-01-01

Optimal control can ensure safe system operation with high efficiency. However, only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems. Moreover, the existing methods ignore the impact of parameter uncertainty on instantaneous system performance. In real SOFC systems, several parameters may vary with the operating conditions and cannot be identified exactly, such as the load current. Therefore, a robust optimal control strategy is proposed, which involves three parts: a SOFC model with parameter uncertainty, a robust optimizer and robust controllers. During the model building process, boundaries of the uncertain parameter are extracted based on a Monte Carlo algorithm. To achieve maximum efficiency, a two-space particle swarm optimization approach is employed to obtain optimal operating points, which are used as the set points of the controllers. To ensure safe SOFC operation, two feed-forward controllers and a higher-order robust sliding mode controller are then presented to control the fuel utilization ratio, air excess ratio and stack temperature. The results show the proposed optimal robust control method can maintain safe SOFC system operation with maximum efficiency under load and uncertainty variations.

  10. Energy Efficiency Optimization in Relay-Assisted MIMO Systems With Perfect and Statistical CSI

    NASA Astrophysics Data System (ADS)

    Zappone, Alessio; Cao, Pan; Jorswieck, Eduard A.

    2014-01-01

A framework for energy-efficient resource allocation in a single-user, amplify-and-forward relay-assisted MIMO system is devised in this paper. Previous results in this area have focused on rate maximization or sum power minimization problems, whereas fewer results are available when bits/Joule energy efficiency (EE) optimization is the goal. The performance metric to optimize is the ratio between the system's achievable rate and the total consumed power. The optimization is carried out with respect to the source and relay precoding matrices, subject to QoS and power constraints. Such a challenging non-convex problem is tackled by means of fractional programming and alternating maximization algorithms, for various CSI assumptions at the source and relay. In particular, the scenarios of perfect CSI and of statistical CSI for either the source-relay or the relay-destination channel are addressed. Moreover, sufficient conditions for beamforming optimality are derived, which is useful in simplifying the system design. Numerical results are provided to corroborate the validity of the theoretical findings.
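The fractional-programming step mentioned above is commonly handled with Dinkelbach's algorithm, which replaces the ratio maximization with a sequence of subtractive subproblems max_x f(x) - λ·g(x), updating λ to the current ratio until the subproblem's optimal value reaches zero. The scalar power-allocation example below (logarithmic rate, an assumed static circuit power Pc, and a grid search for the inner problem) illustrates only the mechanics, not the paper's matrix-valued precoder problem:

```python
import math

def dinkelbach(rate, power, p_grid, tol=1e-9, max_iter=50):
    """Dinkelbach's algorithm: maximize rate(p)/power(p) via a sequence of
    parametric problems max_p rate(p) - lam*power(p)."""
    lam = 0.0
    for _ in range(max_iter):
        p_star = max(p_grid, key=lambda p: rate(p) - lam * power(p))
        F = rate(p_star) - lam * power(p_star)   # optimal subproblem value
        lam = rate(p_star) / power(p_star)       # update lam to current ratio
        if abs(F) < tol:                         # F == 0 at the optimal ratio
            break
    return p_star, lam

Pc = 1.0                                   # static circuit power (assumed)
rate = lambda p: math.log2(1.0 + p)        # achievable rate, bits/s/Hz
power = lambda p: p + Pc                   # total consumed power
grid = [i * 1e-3 for i in range(1, 10001)] # transmit power p in (0, 10]
p_star, ee = dinkelbach(rate, power, grid)
print(p_star, ee)
```

For this rate model with Pc = 1 the EE-optimal transmit power is p = e - 1 ≈ 1.718, well below the rate-maximizing p = 10, which is the characteristic rate-versus-EE trade-off the paper studies.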

  11. A practical globalization of one-shot optimization for optimal design of tokamak divertors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blommaert, Maarten, E-mail: maarten.blommaert@kuleuven.be; Dekeyser, Wouter; Baelmans, Martine

In past studies, nested optimization methods were successfully applied to design of the magnetic divertor configuration in nuclear fusion reactors. In this paper, so-called one-shot optimization methods are pursued. Due to convergence issues, a globalization strategy for the one-shot solver is sought. Whereas Griewank introduced a globalization strategy using a doubly augmented Lagrangian function that includes primal and adjoint residuals, its practical usability is limited by the necessity of second order derivatives and expensive line search iterations. In this paper, a practical alternative is offered that avoids these drawbacks by using a regular augmented Lagrangian merit function that penalizes only state residuals. Additionally, robust rank-two Hessian estimation is achieved by adaptation of Powell's damped BFGS update rule. The application of the novel one-shot approach to magnetic divertor design is considered in detail. For this purpose, the approach is adapted to be complementary with practical in parts adjoint sensitivities. Using the globalization strategy, stable convergence of the one-shot approach is achieved.
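Powell's damped BFGS update referred to above modifies the gradient-difference vector y whenever the curvature condition s·y ≥ 0.2 s·Bs fails, so the Hessian estimate B stays positive definite. The sketch below applies it to a small quadratic with a fixed step length; the matrix, step choice, and lack of a merit-function line search are illustrative assumptions, not the divertor solver:

```python
import numpy as np

def damped_bfgs_update(B, s, y):
    """Powell's damped BFGS update: blend y with Bs when curvature is weak,
    then apply the standard rank-two BFGS formula (a sketch)."""
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy < 0.2 * sBs:                        # damping condition
        theta = 0.8 * sBs / (sBs - sy)
        y = theta * y + (1.0 - theta) * Bs    # damped gradient difference
    return B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / (s @ y)

# Minimize f(x) = 0.5 x^T A x with a damped-BFGS quasi-Newton iteration
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])
grad = lambda x: A @ x
x = np.array([1.0, 1.0])
B = np.eye(2)
for _ in range(30):
    p = -np.linalg.solve(B, grad(x))          # quasi-Newton step direction
    x_new = x + 0.5 * p                       # fixed step length (no line search)
    s, y = x_new - x, grad(x_new) - grad(x)
    B = damped_bfgs_update(B, s, y)
    x = x_new
print(np.linalg.norm(x))  # near 0
```

For this convex quadratic the damping never triggers; its value lies in nonconvex one-shot settings, where undamped BFGS can lose positive definiteness.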

  12. Fuel consumption optimization for smart hybrid electric vehicle during a car-following process

    NASA Astrophysics Data System (ADS)

    Li, Liang; Wang, Xiangyu; Song, Jian

    2017-03-01

Hybrid electric vehicles (HEVs) provide large potential to save energy and reduce emissions, and smart vehicles bring great convenience and safety to drivers. By combining these two technologies, vehicles may achieve excellent performance in terms of dynamics, economy, environmental friendliness, safety, and comfort. Hence, a smart hybrid electric vehicle (s-HEV) is selected as the platform in this paper to study a car-following process while optimizing fuel consumption. The whole process is a multi-objective optimization problem, whose optimal solution is not just the addition of an energy management strategy (EMS) to adaptive cruise control (ACC), but a deep fusion of these two methods. The problem has more constraints, optimization objectives, and system states, which may result in a larger computational burden. Therefore, a novel fuel consumption optimization algorithm based on model predictive control (MPC) is proposed, and some search techniques are adopted in the receding horizon optimization to reduce the computational burden. Simulations are carried out and the results indicate that the fuel consumption of the proposed method is lower than that of the ACC+EMS method while ensuring car-following performance.
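The receding-horizon idea underlying the MPC formulation above can be illustrated with a toy car-following sketch: at each step, short acceleration sequences are enumerated over the horizon, scored by gap tracking plus control effort, and only the first action of the best sequence is applied before the optimization repeats. All numbers are illustrative assumptions, and no fuel model is included:

```python
import itertools

def receding_horizon(gap0, v0, v_lead, gap_ref=20.0, horizon=4, steps=20, dt=1.0):
    """Toy receding-horizon controller: enumerate acceleration sequences over
    a short horizon, apply only the first action of the cheapest sequence."""
    actions = (-1.0, 0.0, 1.0)                 # admissible accelerations (assumed)
    gap, v = gap0, v0
    for _ in range(steps):
        best_a, best_c = 0.0, float("inf")
        for seq in itertools.product(actions, repeat=horizon):
            g, w, c = gap, v, 0.0
            for a in seq:                      # simulate the candidate sequence
                w += a * dt
                g += (v_lead - w) * dt
                c += (g - gap_ref) ** 2 + 0.5 * a * a   # tracking + effort
            if c < best_c:
                best_c, best_a = c, seq[0]
        v += best_a * dt                       # apply only the first action
        gap += (v_lead - v) * dt
    return gap, v

gap, v = receding_horizon(gap0=24.0, v0=15.0, v_lead=15.0)
print(round(gap, 1), round(v, 1))  # → 20.0 15.0
```

The paper's method replaces this brute-force enumeration with search techniques that prune the horizon optimization, which is exactly where the computational burden lies.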

  13. Aerostructural Shape and Topology Optimization of Aircraft Wings

    NASA Astrophysics Data System (ADS)

    James, Kai

    approach. While the sequentially optimized wing exhibits a nearly-elliptical lift distribution, the MDO design seeks to push a greater portion of the load toward the root, thus reducing the structural deflection, and allowing for a lighter structure. By exploiting this trade-off, the MDO design achieves a 42% lower drag than the sequential result.

  14. Optimization of single photon detection model based on GM-APD

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Yang, Yi; Hao, Peiyu

    2017-11-01

High-precision laser ranging over one hundred kilometers requires a detector with very strong detection capability for extremely weak light. At present, the Geiger-mode avalanche photodiode (GM-APD) is widely used; it has high sensitivity and high photoelectric conversion efficiency. Selecting and designing the detector parameters according to the system requirements is of great importance for improving photon detection efficiency, and design optimization requires a good model. In this paper, we study the existing Poisson-distribution model and take into account important detector parameters such as the dark count rate, dead time, and quantum efficiency. We improve and optimize the detection model and select appropriate parameters to achieve optimal photon detection efficiency. Simulations are carried out in Matlab and compared with actual test results, verifying the rationality of the model. The model has reference value in engineering applications.
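A minimal Poisson model of the kind discussed above gives the probability that a GM-APD fires within a range gate from the mean signal photon number, the quantum efficiency, and the dark counts accumulated in the gate; dead time and afterpulsing are omitted here, and all numbers are illustrative assumptions:

```python
import math

def detection_probability(n_signal, eta, dark_rate, gate_s):
    """Poisson model of GM-APD triggering within a range gate:
    P = 1 - exp(-(eta * n_signal + dark counts in the gate))."""
    n_dark = dark_rate * gate_s
    return 1.0 - math.exp(-(eta * n_signal + n_dark))

def false_alarm_probability(dark_rate, gate_s):
    """Probability the gate fires on dark counts alone."""
    return 1.0 - math.exp(-dark_rate * gate_s)

# Assumed numbers: 2 signal photons, 40% QE, 100 kHz dark counts, 1 us gate
pd = detection_probability(2.0, 0.4, 1e5, 1e-6)
pfa = false_alarm_probability(1e5, 1e-6)
print(round(pd, 3), round(pfa, 3))  # → 0.593 0.095
```

Parameter selection then amounts to trading detection probability against false alarms, e.g. narrowing the gate to suppress dark counts at fixed quantum efficiency.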

  15. A programming system for research and applications in structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Rogers, J. L., Jr.

    1981-01-01

The paper describes a computer programming system designed to be used for methodology research as well as applications in structural optimization. The flexibility necessary for such diverse utilizations is achieved by combining, in a modular manner, a state-of-the-art optimization program, a production-level structural analysis program, and user-supplied, problem-dependent interface programs. Standard utility capabilities existing in modern computer operating systems are used to integrate these programs. This approach results in flexibility in the organization of the optimization procedure and versatility in the formulation of constraints and design variables. Features shown in numerical examples include: (1) variability of structural layout and overall shape geometry, (2) static strength and stiffness constraints, (3) local buckling failure, and (4) vibration constraints. The paper concludes with a review of further development trends for this programming system.

  16. Using optimal combination of teaching-learning methods (open book assignment and group tutorials) as revision exercises to improve learning outcome in low achievers in biochemistry.

    PubMed

    Rajappa, Medha; Bobby, Zachariah; Nandeesha, H; Suryapriya, R; Ragul, Anithasri; Yuvaraj, B; Revathy, G; Priyadarssini, M

    2016-07-08

Graduate medical students in India are taught Biochemistry by didactic lectures and hardly get any opportunity to clarify their doubts and reinforce the concepts which they learn in these lectures. We used a combination of teaching-learning (T-L) methods (open book assignment followed by group tutorials) to study their efficacy in improving the learning outcome. A total of 143 graduate medical students were classified into low (<50%: group 1, n = 23), medium (50-75%: group 2, n = 74), and high (>75%: group 3, n = 46) achievers, based on their internal assessment marks. After the regular teaching module on the topics "Vitamins and Enzymology", all the students attempted an open book assignment without peer consultation. Then all the students participated in group tutorials. The effects on the groups were evaluated by pre and posttests at the end of each phase, with the same set of MCQs. Gain from group tutorials and overall gain was significantly higher in the low achievers, compared to the other groups. High and medium achievers obtained more gain from the open book assignment than from group tutorials. The overall gain was significantly higher than the gain obtained from the open book assignment or group tutorials alone, in all three groups. All three groups retained the gain even 1 week after the exercise. Hence, optimal use of novel T-L methods (open book assignment followed by group tutorials) as revision exercises helps in strengthening concepts in Biochemistry in this oft-neglected group of low achievers in graduate medical education. © 2016 by The International Union of Biochemistry and Molecular Biology, 44(4):321-325, 2016.

  17. Standardized model of porcine resuscitation using a custom-made resuscitation board results in optimal hemodynamic management.

    PubMed

    Wollborn, Jakob; Ruetten, Eva; Schlueter, Bjoern; Haberstroh, Joerg; Goebel, Ulrich; Schick, Martin A

    2018-01-22

    Standardized modeling of cardiac arrest and cardiopulmonary resuscitation (CPR) is crucial to evaluate new treatment options. Experimental porcine models are ideal, closely mimicking human-like physiology. However, anteroposterior chest diameter differs significantly, being larger in pigs and thus poses a challenge to achieve adequate perfusion pressures and consequently hemodynamics during CPR, which are commonly achieved during human resuscitation. The aim was to prove that standardized resuscitation is feasible and renders adequate hemodynamics and perfusion in pigs, using a specifically designed resuscitation board for a pneumatic chest compression device. A "porcine-fit" resuscitation board was designed for our experiments to optimally use a pneumatic compression device (LUCAS® II, Physio-Control Inc.), which is widely employed in emergency medicine and ideal in an experimental setting due to its high standardization. Asphyxial cardiac arrest was induced in 10 German hybrid landrace pigs and cardiopulmonary resuscitation was performed according to ERC/AHA 2015 guidelines with mechanical chest compressions. Hemodynamics were measured in the carotid and pulmonary artery. Furthermore, arterial blood gas was drawn to assess oxygenation and tissue perfusion. The custom-designed resuscitation board in combination with the LUCAS® device demonstrated highly sufficient performance regarding hemodynamics during CPR (mean arterial blood pressure, MAP 46 ± 1 mmHg and mean pulmonary artery pressure, mPAP of 36 ± 1 mmHg over the course of CPR). MAP returned to baseline values at 2 h after ROSC (80 ± 4 mmHg), requiring moderate doses of vasopressors. Furthermore, stroke volume and contractility were analyzed using pulse contour analysis (106 ± 3 ml and 1097 ± 22 mmHg/s during CPR). Blood gas analysis revealed CPR-typical changes, normalizing in the due course. Thermodilution parameters did not show persistent intravascular volume shift

  18. Particle swarm optimization: an alternative in marine propeller optimization?

    NASA Astrophysics Data System (ADS)

    Vesting, F.; Bensow, R. E.

    2018-01-01

This article deals with improving and evaluating the performance of two evolutionary algorithm approaches for automated engineering design optimization. Here a marine propeller design with constraints on cavitation nuisance is the intended application. For this purpose, the particle swarm optimization (PSO) algorithm is adapted for multi-objective optimization and constraint handling for use in propeller design. Three PSO algorithms are developed and tested for the optimization of four commercial propeller designs for different ship types. The results are evaluated by interrogating the generation medians and the Pareto front development. The same propellers are also optimized using the well-established NSGA-II genetic algorithm to provide benchmark results. The authors' PSO algorithms deliver results comparable to NSGA-II, but converge earlier and improve the solutions in terms of constraint violation.
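For background on the baseline algorithm, a minimal single-objective PSO with a global-best topology can be sketched as below. This is an illustrative sketch only, not the authors' multi-objective, constraint-handling variant; all function names and parameter values are our own assumptions.

```python
import random

def pso_minimize(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over box bounds with a basic particle swarm (gbest topology)."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive pull (pbest) + social pull (gbest)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: minimize the 2-D sphere function
best, val = pso_minimize(lambda x: sum(xi * xi for xi in x), [(-5, 5), (-5, 5)])
```

The multi-objective adaptation in the paper replaces the single global best with a Pareto-based leader selection and adds constraint handling, neither of which is shown here.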

  19. Optimized Two-Party Video Chat with Restored Eye Contact Using Graphics Hardware

    NASA Astrophysics Data System (ADS)

    Dumont, Maarten; Rogmans, Sammy; Maesen, Steven; Bekaert, Philippe

We present a practical system prototype to convincingly restore eye contact between two video chat participants, with a minimal number of constraints. The proposed six-fold camera setup is easily integrated into the monitor frame and is used to interpolate an image as if a virtual camera had captured it through a transparent screen. The peer user has a large freedom of movement, resulting in system specifications that enable genuine practical usage. Our software framework harnesses the powerful computational resources of graphics hardware and maximizes arithmetic intensity to achieve beyond-real-time performance of up to 42 frames per second for 800 × 600 images. Furthermore, an optimal set of fine-tuned parameters is presented that optimizes the end-to-end performance of the application to achieve high subjective visual quality, while still allowing further algorithmic advancement without losing real-time capability.

  20. Peak-Seeking Optimization of Spanwise Lift Distribution for Wings in Formation Flight

    NASA Technical Reports Server (NTRS)

    Hanson, Curtis E.; Ryan, Jack

    2012-01-01

A method is presented for the in-flight optimization of the lift distribution across the wing for minimum drag of an aircraft in formation flight. The elliptical distribution that is optimal for a given wing with a given span in isolation is no longer optimal for the trailing wing in a formation, due to the asymmetric nature of the encountered flow field. Control surfaces along the trailing edge of the wing can be configured to obtain a non-elliptical profile that is closer to optimal in terms of minimum combined induced and profile drag. Because formation flight aerodynamics are difficult to predict, a Newton-Raphson peak-seeking controller is used to identify in real time the best aileron and flap deployment scheme for minimum total drag. Simulation results show that the peak-seeking controller correctly identifies an optimal trim configuration that provides additional drag savings beyond those achieved with conventional anti-symmetric aileron trim.
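The Newton-Raphson peak-seeking idea, driving the estimated slope of the drag map to zero from finite-difference measurements, can be illustrated on a single scalar control. This is a toy sketch under our own assumptions (a static, noise-free drag map), not the flight controller described in the paper.

```python
def peak_seek(drag, u0, step=0.1, iters=50, tol=1e-6):
    """Newton-Raphson peak-seeking on a scalar control u: estimate the local
    gradient and curvature of drag(u) from three probes, then step toward
    the stationary point where d(drag)/du = 0."""
    u = u0
    for _ in range(iters):
        f0, fp, fm = drag(u), drag(u + step), drag(u - step)
        grad = (fp - fm) / (2 * step)              # central difference
        curv = (fp - 2 * f0 + fm) / (step * step)  # second difference
        if abs(curv) < 1e-12:
            break                                  # flat region: no Newton step
        du = -grad / curv                          # Newton step
        u += du
        if abs(du) < tol:
            break
    return u

# Toy drag bowl with its minimum at u = 2.0
u_star = peak_seek(lambda u: (u - 2.0) ** 2 + 1.0, u0=0.0)
```

In flight, the probes would come from perturbing trim settings and measuring drag-related quantities, with filtering against turbulence and sensor noise.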

  1. Construction Performance Optimization toward Green Building Premium Cost Based on Greenship Rating Tools Assessment with Value Engineering Method

    NASA Astrophysics Data System (ADS)

    Latief, Yusuf; Berawi, Mohammed Ali; Basten, Van; Riswanto; Budiman, Rachmat

    2017-07-01

The green building concept has become important in the building life cycle as a way to mitigate environmental issues. The purpose of this paper is to optimize building construction performance with respect to the green building premium cost, achieving the targeted green building rating while optimizing life cycle cost. The study therefore helps building stakeholders select building fixtures to achieve a green building certification target. Empirically, the paper collects data on green buildings in the Indonesian construction industry, including green building fixtures, initial cost, operational and maintenance cost, and certification score achievement. The green building fixtures were then optimized with the value engineering method on the basis of building function and cost. Findings indicate that construction performance optimization affected green building achievement by increasing energy and water efficiency factors and improving life cycle cost effectiveness, especially for the chosen green building fixtures.

  2. Thermally-Constrained Fuel-Optimal ISS Maneuvers

    NASA Technical Reports Server (NTRS)

    Bhatt, Sagar; Svecz, Andrew; Alaniz, Abran; Jang, Jiann-Woei; Nguyen, Louis; Spanos, Pol

    2015-01-01

    Optimal Propellant Maneuvers (OPMs) are now being used to rotate the International Space Station (ISS) and have saved hundreds of kilograms of propellant over the last two years. The savings are achieved by commanding the ISS to follow a pre-planned attitude trajectory optimized to take advantage of environmental torques. The trajectory is obtained by solving an optimal control problem. Prior to use on orbit, OPM trajectories are screened to ensure a static sun vector (SSV) does not occur during the maneuver. The SSV is an indicator that the ISS hardware temperatures may exceed thermal limits, causing damage to the components. In this paper, thermally-constrained fuel-optimal trajectories are presented that avoid an SSV and can be used throughout the year while still reducing propellant consumption significantly.

  3. Joint optimization of source, mask, and pupil in optical lithography

    NASA Astrophysics Data System (ADS)

    Li, Jia; Lam, Edmund Y.

    2014-03-01

    Mask topography effects need to be taken into consideration for more advanced resolution enhancement techniques in optical lithography. However, rigorous 3D mask model achieves high accuracy at a large computational cost. This work develops a combined source, mask and pupil optimization (SMPO) approach by taking advantage of the fact that pupil phase manipulation is capable of partially compensating for mask topography effects. We first design the pupil wavefront function by incorporating primary and secondary spherical aberration through the coefficients of the Zernike polynomials, and achieve optimal source-mask pair under the condition of aberrated pupil. Evaluations against conventional source mask optimization (SMO) without incorporating pupil aberrations show that SMPO provides improved performance in terms of pattern fidelity and process window sizes.

  4. Optimal design of reverse osmosis module networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maskan, F.; Wiley, D.E.; Johnston, L.P.M.

    2000-05-01

The structure of individual reverse osmosis modules, the configuration of the module network, and the operating conditions were optimized for seawater and brackish water desalination. The system model included simple mathematical equations to predict the performance of the reverse osmosis modules. The optimization problem was formulated as a constrained multivariable nonlinear optimization. The objective function was the annual profit for the system, consisting of the profit obtained from the permeate, capital cost for the process units, and operating costs associated with energy consumption and maintenance. Optimization of several dual-stage reverse osmosis systems was investigated and compared. It was found that optimal network designs are the ones that produce the most permeate. It may be possible to achieve economic improvements by refining current membrane module designs and their operating pressures.

  5. Coordinated Optimization of Distributed Energy Resources and Smart Loads in Distribution Systems: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Rui; Zhang, Yingchen

    2016-08-01

Distributed energy resources (DERs) and smart loads have the potential to provide flexibility to distribution system operation. A coordinated optimization approach is proposed in this paper to actively manage DERs and smart loads in distribution systems to achieve the optimal operating status. A three-phase unbalanced Optimal Power Flow (OPF) problem is developed to determine the output from DERs and smart loads with respect to the system operator's control objective. This paper focuses on coordinating PV systems and smart loads to improve the overall voltage profile in distribution systems. Simulations have been carried out on a 12-bus distribution feeder, and the results illustrate the superior control performance of the proposed approach.

  6. Optimization of selection for growth in Menz Sheep while minimizing inbreeding depression in fitness traits

    PubMed Central

    2013-01-01

    The genetic trends in fitness (inbreeding, fertility and survival) of a closed nucleus flock of Menz sheep under selection during ten years for increased body weight were investigated to evaluate the consequences of selection for body weight on fitness. A mate selection tool was used to optimize in retrospect the actual selection and matings conducted over the project period to assess if the observed genetic gains in body weight could have been achieved with a reduced level of inbreeding. In the actual selection, the genetic trends for yearling weight, fertility of ewes and survival of lambs were 0.81 kg, –0.00026% and 0.016% per generation. The average inbreeding coefficient remained zero for the first few generations and then tended to increase over generations. The genetic gains achieved with the optimized retrospective selection and matings were highly comparable with the observed values, the correlation between the average breeding values of lambs born from the actual and optimized matings over the years being 0.99. However, the level of inbreeding with the optimized mate selections remained zero until late in the years of selection. Our results suggest that an optimal selection strategy that considers both genetic merits and coancestry of mates should be adopted to sustain the Menz sheep breeding program. PMID:23783076

7. Multi-objective optimization of a continuous bio-dissimilation process of glycerol to 1,3-propanediol.

    PubMed

    Xu, Gongxian; Liu, Ying; Gao, Qunwang

    2016-02-10

This paper deals with multi-objective optimization of the continuous bio-dissimilation process of glycerol to 1,3-propanediol. In order to maximize the production rate of 1,3-propanediol, maximize the conversion rate of glycerol to 1,3-propanediol, maximize the conversion rate of glycerol, and minimize the concentration of by-product ethanol, we first propose six new multi-objective optimization models that can simultaneously optimize any two of the four objectives above. Then these multi-objective optimization problems are solved by using the weighted-sum and normal-boundary intersection methods respectively. Both the Pareto filter algorithm and removal criteria are used to remove those non-Pareto optimal points obtained by the normal-boundary intersection method. The results show that the normal-boundary intersection method can successfully obtain the approximate Pareto optimal sets of all the proposed multi-objective optimization problems, while the weighted-sum approach cannot achieve the overall Pareto optimal solutions of some multi-objective problems. Copyright © 2015 Elsevier B.V. All rights reserved.
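The weighted-sum strategy compared in the abstract can be sketched in a few lines: sweep a weight between two objectives and keep the minimizer of each scalarized problem. The two convex toy objectives below are our own choice; on non-convex fronts this scalarization misses points, consistent with the finding that it cannot recover the whole Pareto set.

```python
def weighted_sum_front(f1, f2, candidates, n_weights=11):
    """Trace an approximate Pareto front by minimizing w*f1 + (1-w)*f2
    over a candidate set for a sweep of weights w in [0, 1]."""
    front = []
    for k in range(n_weights):
        w = k / (n_weights - 1)
        best = min(candidates, key=lambda x: w * f1(x) + (1 - w) * f2(x))
        if best not in front:          # keep each distinct minimizer once
            front.append(best)
    return front

# Toy example: two convex objectives whose Pareto set is the interval [0, 1]
xs = [i / 100 for i in range(101)]
front = weighted_sum_front(lambda x: x * x, lambda x: (x - 1) ** 2, xs)
```

The normal-boundary intersection method instead solves a constrained subproblem per point along the boundary, which is why it can reach the non-convex parts of a front that weighted sums skip.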

  8. A hybrid multi-objective evolutionary algorithm for wind-turbine blade optimization

    NASA Astrophysics Data System (ADS)

    Sessarego, M.; Dixon, K. R.; Rival, D. E.; Wood, D. H.

    2015-08-01

A concurrent-hybrid non-dominated sorting genetic algorithm (hybrid NSGA-II) has been developed and applied to the simultaneous optimization of the annual energy production, flapwise root-bending moment and mass of the NREL 5 MW wind-turbine blade. By hybridizing a multi-objective evolutionary algorithm (MOEA) with gradient-based local search, it was expected that the optimal set of blade designs could be achieved at lower computational cost than with a conventional MOEA. To measure the convergence of the hybrid and non-hybrid NSGA-II on a wind-turbine blade optimization problem, a computationally intensive case was performed using the non-hybrid NSGA-II. From this case, a three-dimensional surface representing the optimal trade-off between the annual energy production, flapwise root-bending moment and blade mass was obtained. The inclusion of local gradients in the blade optimization, however, shows no improvement in convergence for this three-objective problem.

  9. A multi-group firefly algorithm for numerical optimization

    NASA Astrophysics Data System (ADS)

    Tong, Nan; Fu, Qiang; Zhong, Caiming; Wang, Pengjun

    2017-08-01

To address the premature convergence of the firefly algorithm (FA), this paper analyzes the evolution mechanism of the algorithm and proposes an improved firefly algorithm based on a modified evolution model and a multi-group learning mechanism (IMGFA). The firefly colony is divided into several subgroups with different model parameters. Within each subgroup, the optimal firefly leads the other fireflies in the early global evolution and establishes mutual information exchange among the fireflies. Each firefly then performs local search by following the brighter fireflies among its neighbors. At the same time, a learning mechanism among the best fireflies of the various subgroups, used to exchange information, helps the population reach global optimization goals more effectively. Experimental results verify the effectiveness of the proposed algorithm.
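A minimal single-group firefly algorithm, the base that IMGFA modifies, can be sketched as follows. The multi-group partitioning and inter-group learning of the paper are not shown, and all parameter values are illustrative.

```python
import math, random

def firefly_minimize(f, bounds, n=25, iters=100, beta0=1.0, gamma=0.01, alpha=0.2):
    """Basic firefly algorithm: each firefly moves toward every brighter
    (lower-cost) firefly with distance-attenuated attraction plus a random walk."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    cost = [f(p) for p in pop]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:  # j is brighter, so i moves toward j
                    r2 = sum((pop[i][d] - pop[j][d]) ** 2 for d in range(dim))
                    beta = beta0 * math.exp(-gamma * r2)  # attraction decays with distance
                    for d in range(dim):
                        lo, hi = bounds[d]
                        walk = alpha * (random.random() - 0.5) * (hi - lo)
                        pop[i][d] = min(max(
                            pop[i][d] + beta * (pop[j][d] - pop[i][d]) + walk, lo), hi)
                    cost[i] = f(pop[i])
        alpha *= 0.97  # anneal the random-walk amplitude over time
    b = min(range(n), key=lambda i: cost[i])
    return pop[b], cost[b]

# Example: minimize the 2-D sphere function
best, val = firefly_minimize(lambda x: sum(xi * xi for xi in x), [(-5, 5), (-5, 5)])
```

Premature convergence in this basic form arises because every firefly is pulled toward the same brightest individuals; the paper's multi-group scheme counters this by evolving subgroups under different parameters before exchanging their best members.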

  10. Maximum cycle work output optimization for generalized radiative law Otto cycle engines

    NASA Astrophysics Data System (ADS)

    Xia, Shaojun; Chen, Lingen; Sun, Fengrui

    2016-11-01

An Otto cycle internal combustion engine including thermal and friction losses is investigated using finite-time thermodynamics, with maximum cycle work output as the optimization objective. The thermal energy transfer from the working substance to the cylinder inner wall follows the generalized radiative law (q ∝ Δ(T^n)). Under the condition that the fuel consumption, the compression ratio and the cycle period are all given, the optimal piston trajectories for both the unlimited- and limited-acceleration cases on every stroke are determined, and the cycle-period distribution among the strokes is also optimized. Numerical results for the radiative-law case are provided and compared with those obtained for the Newtonian and linear phenomenological laws. The results indicate that the optimal piston trajectory on each stroke contains three sections, beginning with a maximum-acceleration part and ending with a maximum-deceleration part; for the radiative-law case, optimizing the piston motion path achieves an improvement of more than 20% in both the cycle work output and the second-law efficiency of the Otto cycle compared with conventional near-sinusoidal operation, and the heat transfer mechanism has both qualitative and quantitative influences on the optimal piston motion paths.

  11. Fifteen Years of Collaborative Innovation and Achievement: NASA Nebraska Space Grant Consortium 15-Year Program Performance and Results Report

    NASA Technical Reports Server (NTRS)

    Schaaf, Michaela M.; Bowen, Brent D.; Fink, Mary M.; Nickerson, Jocelyn S.; Avery, Shelly; Carstenson, Larry; Dugan, James; Farritor, Shane; Joyce, James; Rebrovich, Barb

    2003-01-01

    Condensing five years of significant work into a brief narrative fitting PPR requirements gave the affiliates of the Nebraska Space Grant a valuable chance for reflection. Achievements of Space Grant in Nebraska were judiciously chosen for this document that best illustrate the resultant synergism of this consortium, keeping in mind that these examples are only a representation of greater activity throughout the state. Following are highlights of many of the finer and personal achievements for Nebraska Space Grant. The Consortium welcomes inquiries to elaborate on any of these accomplishments.

  12. Optimized efficiency of all-electric ships by dc hybrid power systems

    NASA Astrophysics Data System (ADS)

    Zahedi, Bijan; Norum, Lars E.; Ludvigsen, Kristine B.

    2014-06-01

Hybrid power systems with dc distribution are being considered for commercial marine vessels to comply with new stringent environmental regulations and to achieve higher fuel economy. In this paper, a detailed efficiency analysis of a shipboard dc hybrid power system is carried out. An optimization algorithm is proposed to minimize fuel consumption under various loading conditions. The studied system includes diesel engines, synchronous generator-rectifier units, a full-bridge bidirectional converter, and a Li-ion battery bank as energy storage. To evaluate the potential fuel saving provided by such a system, an online optimization strategy for fuel consumption is implemented. An Offshore Support Vessel (OSV) is simulated over different operating modes using the online control strategy. The fuel consumed in the simulation is compared to that of a conventional ac power system and of a dc power system without energy storage. The results show that while the dc system without energy storage provides a noticeable fuel saving compared to the conventional ac system, optimal utilization of the energy storage in the dc system results in twice as much fuel saving.

  13. Harmony search optimization for HDR prostate brachytherapy

    NASA Astrophysics Data System (ADS)

    Panchal, Aditya

    In high dose-rate (HDR) prostate brachytherapy, multiple catheters are inserted interstitially into the target volume. The process of treating the prostate involves calculating and determining the best dose distribution to the target and organs-at-risk by means of optimizing the time that the radioactive source dwells at specified positions within the catheters. It is the goal of this work to investigate the use of a new optimization algorithm, known as Harmony Search, in order to optimize dwell times for HDR prostate brachytherapy. The new algorithm was tested on 9 different patients and also compared with the genetic algorithm. Simulations were performed to determine the optimal value of the Harmony Search parameters. Finally, multithreading of the simulation was examined to determine potential benefits. First, a simulation environment was created using the Python programming language and the wxPython graphical interface toolkit, which was necessary to run repeated optimizations. DICOM RT data from Varian BrachyVision was parsed and used to obtain patient anatomy and HDR catheter information. Once the structures were indexed, the volume of each structure was determined and compared to the original volume calculated in BrachyVision for validation. Dose was calculated using the AAPM TG-43 point source model of the GammaMed 192Ir HDR source and was validated against Varian BrachyVision. A DVH-based objective function was created and used for the optimization simulation. Harmony Search and the genetic algorithm were implemented as optimization algorithms for the simulation and were compared against each other. The optimal values for Harmony Search parameters (Harmony Memory Size [HMS], Harmony Memory Considering Rate [HMCR], and Pitch Adjusting Rate [PAR]) were also determined. Lastly, the simulation was modified to use multiple threads of execution in order to achieve faster computational times. Experimental results show that the volume calculation that was
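The Harmony Search improvisation loop governed by HMS, HMCR, and PAR can be sketched generically as below. This is a textbook-style sketch on a toy objective, not the DVH-based brachytherapy objective used in the work; parameter values are illustrative.

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, bw=0.1):
    """Minimal harmony search: each new harmony is improvised per dimension
    from memory recall (rate HMCR), optional pitch adjustment (rate PAR), or
    a fresh random value, and replaces the worst memory member if better."""
    dim = len(bounds)
    memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    costs = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            lo, hi = bounds[d]
            if random.random() < hmcr:
                x = random.choice(memory)[d]                  # recall from memory
                if random.random() < par:
                    x += random.uniform(-bw, bw) * (hi - lo)  # pitch adjustment
            else:
                x = random.uniform(lo, hi)                    # fresh random note
            new.append(min(max(x, lo), hi))
        c = f(new)
        worst = max(range(hms), key=lambda i: costs[i])
        if c < costs[worst]:
            memory[worst], costs[worst] = new, c
    b = min(range(hms), key=lambda i: costs[i])
    return memory[b], costs[b]

# Example: minimize the 2-D sphere function
best, val = harmony_search(lambda x: sum(xi * xi for xi in x), [(-5, 5)] * 2)
```

In the dwell-time application, each dimension would be one dwell time and f would score the resulting dose-volume histogram rather than a simple analytic function.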

  14. A global optimization approach to multi-polarity sentiment analysis.

    PubMed

    Li, Xinmiao; Li, Jing; Wu, Yukeng

    2015-01-01

    Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with a higher-explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement. The improvement of the two-polarity sentiment analysis was the smallest. We conclude that the PSOGO-Senti achieves higher improvement for a more complicated sentiment analysis task. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid search method. 

  15. Optimizing rice yields while minimizing yield-scaled global warming potential.

    PubMed

    Pittelkow, Cameron M; Adviento-Borbe, Maria A; van Kessel, Chris; Hill, James E; Linquist, Bruce A

    2014-05-01

To meet growing global food demand with limited land and reduced environmental impact, agricultural greenhouse gas (GHG) emissions are increasingly evaluated with respect to crop productivity, i.e., on a yield-scaled as opposed to area basis. Here, we compiled available field data on CH4 and N2O emissions from rice production systems to test the hypothesis that in response to fertilizer nitrogen (N) addition, yield-scaled global warming potential (GWP) will be minimized at N rates that maximize yields. Within each study, yield N surplus was calculated to estimate deficit or excess N application rates with respect to the optimal N rate (defined as the N rate at which maximum yield was achieved). Relationships between yield N surplus and GHG emissions were assessed using linear and nonlinear mixed-effects models. Results indicate that yields increased in response to increasing N surplus when moving from deficit to optimal N rates. At N rates contributing to a yield N surplus, N2O and yield-scaled N2O emissions increased exponentially. In contrast, CH4 emissions were not impacted by N inputs. Accordingly, yield-scaled CH4 emissions decreased with N addition. Overall, yield-scaled GWP was minimized at optimal N rates, decreasing by 21% compared to treatments without N addition. These results are unique compared to aerobic cropping systems in which N2O emissions are the primary contributor to GWP, meaning yield-scaled GWP may not necessarily decrease for aerobic crops when yields are optimized by N fertilizer addition. Balancing gains in agricultural productivity with climate change concerns, this work supports the concept that high rice yields can be achieved with minimal yield-scaled GWP through optimal N application rates. Moreover, additional improvements in N use efficiency may further reduce yield-scaled GWP, thereby strengthening the economic and environmental sustainability of rice systems. © 2013 John Wiley & Sons Ltd.

  16. A new Monte Carlo-based treatment plan optimization approach for intensity modulated radiation therapy.

    PubMed

    Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2015-04-07

Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet. Yet this is not the optimal use of MC in this problem. In fact, there are beamlets that have very small intensities after solving the plan optimization problem. For those beamlets, it may be possible to use fewer particles in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for beamlets were adjusted based on the beamlet intensities obtained by solving the plan optimization problem in the previous iteration. We modified a GPU-based MC dose engine to allow simultaneous computation of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization schemes in one lung IMRT case. It was found that the conventional scheme required 10^6 particles from each beamlet to achieve an optimization result within 3% of the ground truth in the fluence map and 1% in dose. In contrast, the proposed scheme achieved the same level of accuracy with on average 1.2 × 10^5 particles per beamlet. Correspondingly, the computation
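The core idea, spending fewer MC particles on beamlets whose optimized intensities are small, can be sketched as a simple proportional budget split. The proportional rule and the floor value below are our own illustrative choices, not the adjustment rule of the paper.

```python
def allocate_particles(intensities, total_particles, floor=100):
    """Distribute an MC particle budget across beamlets in proportion to their
    current optimized intensities, with a small floor so that no beamlet is
    starved entirely (its intensity may grow in a later iteration)."""
    s = sum(intensities)
    if s <= 0:
        return [floor] * len(intensities)   # no plan yet: minimal uniform sampling
    # note: the floor means the total can slightly exceed the nominal budget
    return [max(floor, int(total_particles * w / s)) for w in intensities]

# Four beamlets after a plan-optimization step; one is switched off entirely
counts = allocate_particles([0.0, 0.1, 0.9, 4.0], 100000)
```

In the iterative framework described, such an allocation would be recomputed after each plan-optimization step, so the sampling effort tracks the evolving fluence map.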

  17. Usage of Computers and Calculators and Students' Achievement: Results from TIMSS 2003

    ERIC Educational Resources Information Center

    Antonijevic, Radovan

    2007-01-01

    The paper deals with the facts obtained from TIMSS 2003 (Trends in International Mathematics and Science Study). This international comparative study, which includes 47 participant countries worldwide, explores dependence between eighth grade students' achievement in the areas of mathematics, physics, chemistry, biology and geography, and basic…

  18. Within-Teacher Variation of Causal Attributions of Low Achieving Students

    ERIC Educational Resources Information Center

    Jager, Lieke; Denessen, Eddie

    2015-01-01

    In teacher research, causal attributions of low achievement have been proven to be predictive of teachers' efforts to provide optimal learning contexts for all students. In most studies, however, attributions have been studied as a between-teacher variable rather than a within-teacher variable assuming that teachers' responses to low achievement…

  19. Infrared and visible image fusion based on visual saliency map and weighted least square optimization

    NASA Astrophysics Data System (ADS)

    Ma, Jinlei; Zhou, Zhiqiang; Wang, Bo; Zong, Hua

    2017-05-01

    The goal of infrared (IR) and visible image fusion is to produce a more informative image for human observation or some other computer vision tasks. In this paper, we propose a novel multi-scale fusion method based on visual saliency map (VSM) and weighted least square (WLS) optimization, aiming to overcome some common deficiencies of conventional methods. Firstly, we introduce a multi-scale decomposition (MSD) using the rolling guidance filter (RGF) and Gaussian filter to decompose input images into base and detail layers. Compared with conventional MSDs, this MSD can achieve the unique property of preserving the information of specific scales and reducing halos near edges. Secondly, we argue that the base layers obtained by most MSDs would contain a certain amount of residual low-frequency information, which is important for controlling the contrast and overall visual appearance of the fused image, and the conventional "averaging" fusion scheme is unable to achieve desired effects. To address this problem, an improved VSM-based technique is proposed to fuse the base layers. Lastly, a novel WLS optimization scheme is proposed to fuse the detail layers. This optimization aims to transfer more visual details and less irrelevant IR details or noise into the fused image. As a result, the fused image details would appear more naturally and be suitable for human visual perception. Experimental results demonstrate that our method can achieve a superior performance compared with other fusion methods in both subjective and objective assessments.

  20. Optimal estimation of entanglement in optical qubit systems

    NASA Astrophysics Data System (ADS)

    Brida, Giorgio; Degiovanni, Ivo P.; Florio, Angela; Genovese, Marco; Giorda, Paolo; Meda, Alice; Paris, Matteo G. A.; Shurupov, Alexander P.

    2011-05-01

We address the experimental determination of entanglement for systems made of a pair of polarization qubits. We exploit quantum estimation theory to derive optimal estimators, which are then implemented to achieve the ultimate bound on precision. In particular, we present a set of experiments aimed at measuring the amount of entanglement for states belonging to different families of pure and mixed two-qubit, two-photon states. Our scheme is based on visibility measurements of quantum correlations and achieves the ultimate precision allowed by quantum mechanics in the limit of a Poissonian distribution of coincidence counts. Although optimal estimation of entanglement does not require full tomography of the states, we have also performed state reconstruction using two different sets of tomographic projectors and explicitly shown that they provide a less precise determination of entanglement. The use of optimal estimators also allows us to compare and statistically assess the different noise models used to describe decoherence effects occurring in the generation of entanglement.

  1. Quality assurance for high dose rate brachytherapy treatment planning optimization: using a simple optimization to verify a complex optimization

    NASA Astrophysics Data System (ADS)

    Deufel, Christopher L.; Furutani, Keith M.

    2014-02-01

    As dose optimization for high dose rate brachytherapy becomes more complex, it becomes increasingly important to have a means of verifying that optimization results are reasonable. A method is presented for using a simple optimization as quality assurance for the more complex optimization algorithms typically found in commercial brachytherapy treatment planning systems. Quality assurance tests may be performed during commissioning, at regular intervals, and/or on a patient specific basis. A simple optimization method is provided that optimizes conformal target coverage using an exact, variance-based, algebraic approach. Metrics such as dose volume histogram, conformality index, and total reference air kerma agree closely between simple and complex optimizations for breast, cervix, prostate, and planar applicators. The simple optimization is shown to be a sensitive measure for identifying failures in a commercial treatment planning system that are possibly due to operator error or weaknesses in planning system optimization algorithms. Results from the simple optimization are surprisingly similar to the results from a more complex, commercial optimization for several clinical applications. This suggests that there are only modest gains to be made from making brachytherapy optimization more complex. The improvements expected from sophisticated linear optimizations, such as PARETO methods, will largely be in making systems more user friendly and efficient, rather than in finding dramatically better source strength distributions.
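A least-squares dwell-time solve of the simple kind described, with a dose matrix times dwell times approximating a prescription, can be sketched with a toy geometry. The matrix values are illustrative only, and clipping to non-negative times stands in for a proper constrained solve; this is not the exact algebraic method of the paper.

```python
import numpy as np

def solve_dwell_times(A, d_presc):
    """Solve for HDR dwell times t >= 0 such that the delivered dose A @ t
    approximates the prescription d_presc in the least-squares sense.
    A[i, j] = dose to calculation point i per unit dwell time at position j."""
    t, _, _, _ = np.linalg.lstsq(A, d_presc, rcond=None)
    return np.clip(t, 0.0, None)  # dwell times cannot be negative

# Toy example: 3 dose calculation points, 2 dwell positions
A = np.array([[1.0, 0.2],
              [0.5, 0.5],
              [0.2, 1.0]])
d_presc = np.array([1.0, 1.0, 1.0])
t = solve_dwell_times(A, d_presc)
dose = A @ t
```

A quality-assurance check of the kind proposed would compare metrics (DVH, conformality index, total reference air kerma) of such a simple solution against the commercial optimizer's plan and flag large disagreements.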

  2. Minnesota Developmental Achievement Centers: 1987 Survey Results. Policy Analysis Series, No. 28.

    ERIC Educational Resources Information Center

    Minnesota Governor's Planning Council on Developmental Disabilities, St. Paul.

    This paper presents data collected from rehabilitation centers serving individuals with developmental disabilities in Minnesota, called Developmental Achievement Centers (DACs). The data focus on finances, programs, and clients, and are compared with data from previous years. All 97 providers of adult services in Minnesota completed the survey,…

  3. Optimization of the production process using virtual model of a workspace

    NASA Astrophysics Data System (ADS)

    Monica, Z.

    2015-11-01

    Optimization of the production process is an element of the design cycle consisting of problem definition, modelling, simulation, optimization and implementation. Without simulation techniques, only a greater or lesser improvement of the process can be achieved, not optimization (i.e., the best result attainable under the conditions in which the process operates). Optimization generally comprises management actions that ultimately save time, resources and raw materials and improve the performance of a specific process, whether a service or a manufacturing process. The savings are generated by improving the processes and increasing their efficiency. Optimization consists primarily of organizational activities that require very little investment or rely solely on changing the organization of work. Modern companies operating in a market economy show a significantly increased interest in modern methods of production and service management. This trend stems from high competitiveness: companies that want to succeed are forced to continually modify how they manage and to respond flexibly to changing demand. Modern production management methods not only imply a stable position of the company in its sector, but also improve health and safety within the company and contribute to more efficient rules for standardizing work in the company. This is why the paper presents the application of an environment such as Siemens NX to create a virtual model of a production system and to simulate as well as optimize its work. The analyzed system is a robotized workcell consisting of machine tools, industrial robots, conveyors, auxiliary equipment and buffers. The control program realizing the main task in the virtual workcell can be defined in the program. It is possible, using this tool, to optimize both the

  4. Portable parallel stochastic optimization for the design of aeropropulsion components

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Rhodes, G. S.

    1994-01-01

    This report presents the results of Phase 1 research to develop a methodology for performing large-scale Multi-disciplinary Stochastic Optimization (MSO) for the design of aerospace systems ranging from aeropropulsion components to complete aircraft configurations. The current research recognizes that such design optimization problems are computationally expensive, and require the use of either massively parallel or multiple-processor computers. The methodology also recognizes that many operational and performance parameters are uncertain, and that uncertainty must be considered explicitly to achieve optimum performance and cost. The objective of this Phase 1 research was to initialize the development of an MSO methodology that is portable to a wide variety of hardware platforms, while achieving efficient, large-scale parallelism when multiple processors are available. The first effort in the project was a literature review of available computer hardware, as well as of portable, parallel programming environments. The second effort was to implement the MSO methodology for an example problem using the portable parallel programming environment Parallel Virtual Machine (PVM). The third and final effort was to demonstrate the example on a variety of computers, including a distributed-memory multiprocessor, a distributed-memory network of workstations, and a single-processor workstation. Results indicate the MSO methodology can be applied well to large-scale aerospace design problems. Nearly perfect linear speedup was demonstrated for computation of optimization sensitivity coefficients on both a 128-node distributed-memory multiprocessor (the Intel iPSC/860) and a network of workstations (speedups of almost 19 times achieved for 20 workstations). Very high parallel efficiencies (75 percent for 31 processors and 60 percent for 50 processors) were also achieved for computation of aerodynamic influence coefficients on the Intel. Finally, the multi-level parallelization

  5. Accelerating IMRT optimization by voxel sampling

    NASA Astrophysics Data System (ADS)

    Martin, Benjamin C.; Bortfeld, Thomas R.; Castañon, David A.

    2007-12-01

    This paper presents a new method for accelerating intensity-modulated radiation therapy (IMRT) optimization using voxel sampling. Rather than calculating the dose to the entire patient at each step in the optimization, the dose is only calculated for some randomly selected voxels. Those voxels are then used to calculate estimates of the objective and gradient which are used in a randomized version of a steepest descent algorithm. By selecting different voxels on each step, we are able to find an optimal solution to the full problem. We also present an algorithm to automatically choose the best sampling rate for each structure within the patient during the optimization. Seeking further improvements, we experimented with several other gradient-based optimization algorithms and found that the delta-bar-delta algorithm performs well despite the randomness. Overall, we were able to achieve approximately an order of magnitude speedup on our test case as compared to steepest descent.
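
    The voxel-sampling idea can be sketched on a toy fluence-map problem. The dose matrix `A`, prescription `d`, sampling fraction, and step size below are all illustrative assumptions; the paper's delta-bar-delta variant and automatic per-structure sampling rates are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: dose = A @ x, quadratic penalty toward a
# prescription d that is exactly attainable.
n_voxels, n_beamlets = 2000, 50
A = rng.random((n_voxels, n_beamlets))
d = A @ rng.random(n_beamlets)

def sampled_gradient(x, sample_frac=0.1):
    """Estimate the full objective gradient from a random subset of voxels."""
    m = int(sample_frac * n_voxels)
    idx = rng.choice(n_voxels, size=m, replace=False)
    resid = A[idx] @ x - d[idx]
    # Scale by n_voxels / m so the estimate is unbiased for the full gradient.
    return (n_voxels / m) * 2.0 * A[idx].T @ resid

x = np.zeros(n_beamlets)
initial_obj = np.sum((A @ x - d) ** 2)
for _ in range(500):
    x -= 1e-6 * sampled_gradient(x)     # randomized steepest-descent step
    x = np.maximum(x, 0.0)              # beamlet intensities stay non-negative
final_obj = np.sum((A @ x - d) ** 2)
```

    Because only a tenth of the voxels is touched per step, each iteration costs roughly a tenth of a full gradient evaluation, which is where the speedup comes from.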

  6. Design Optimization of Hybrid FRP/RC Bridge

    NASA Astrophysics Data System (ADS)

    Papapetrou, Vasileios S.; Tamijani, Ali Y.; Brown, Jeff; Kim, Daewon

    2018-04-01

    The hybrid bridge consists of a Reinforced Concrete (RC) slab supported by U-shaped Fiber Reinforced Polymer (FRP) girders. Previous studies on similar hybrid bridges constructed in the United States and Europe seem to substantiate these hybrid designs for lightweight, high-strength, and durable highway bridge construction. In the current study, computational and optimization analyses were carried out to investigate six composite material systems consisting of E-glass and carbon fibers. Optimization constraints are determined by stress, deflection and manufacturing requirements. Finite Element Analysis (FEA) and optimization software were utilized, and a framework was developed to run the complete analyses in an automated fashion. Prior to that, FEA validation of previous studies on similar U-shaped FRP girders that were constructed in Poland and Texas is presented. A finer optimization analysis is performed for the case of the Texas hybrid bridge. The optimization outcome of the hybrid FRP/RC bridge shows the appropriate composite material selection and cross-section geometry that satisfies all the applicable Limit States (LS) and, at the same time, results in the lightest design. Critical limit states show that shear stress criteria determine the optimum design for bridge spans less than 15.24 m, and deflection criteria control for longer spans. Increased side wall thickness can reduce maximum observed shear stresses, but leads to a high weight penalty. A taller cross-section and a thicker girder base can efficiently lower the observed deflections and normal stresses. Finally, substantial weight savings can be achieved by the optimization framework if base and side-wall thickness are treated as independent variables.

  7. Optimal radiotherapy dose schedules under parametric uncertainty

    NASA Astrophysics Data System (ADS)

    Badri, Hamidreza; Watanabe, Yoichi; Leder, Kevin

    2016-01-01

    We consider the effects of parameter uncertainty on the optimal radiation schedule in the context of the linear-quadratic model. Our interest arises from the observation that if inter-patient variability in normal and tumor tissue radiosensitivity or in the sparing factor of the organs-at-risk (OAR) is not accounted for during radiation scheduling, the performance of the therapy may be strongly degraded or the OAR may receive a substantially larger dose than the allowable threshold. This paper proposes a stochastic radiation scheduling concept to incorporate inter-patient variability into the scheduling optimization problem. Our method is based on a probabilistic approach, where the model parameters are given by a set of random variables. Our probabilistic formulation ensures that our constraints are satisfied with a given probability, and that our objective function achieves a desired level with a stated probability. We used a variable transformation to reduce the resulting optimization problem to two dimensions. We showed that the optimal solution lies on the boundary of the feasible region and we implemented a branch and bound algorithm to find the global optimal solution. We demonstrated how the configuration of optimal schedules in the presence of uncertainty compares to optimal schedules in the absence of uncertainty (conventional schedule). We observed that in order to protect against the possibility of the model parameters falling into a region where the conventional schedule is no longer feasible, it is necessary to avoid extremal solutions, i.e. a single large dose or a very large total dose delivered over a long period. Finally, we performed numerical experiments in the setting of head and neck tumors including several normal tissues to reveal the effect of parameter uncertainty on optimal schedules and to evaluate the sensitivity of the solutions to the choice of key model parameters.

  8. Kinematic Optimization in Birds, Bats and Ornithopters

    NASA Astrophysics Data System (ADS)

    Reichert, Todd

    Birds and bats employ a variety of advanced wing motions in the efficient production of thrust. The purpose of this thesis is to quantify the benefit of these advanced wing motions, determine the optimal theoretical wing kinematics for a given flight condition, and to develop a methodology for applying the results in the optimal design of flapping-wing aircraft (ornithopters). To this end, a medium-fidelity, combined aero-structural model has been developed that is capable of simulating the advanced kinematics seen in bird flight, as well as the highly non-linear structural deformations typical of high-aspect ratio wings. Five unique methods of thrust production observed in natural species have been isolated, quantified and thoroughly investigated for their dependence on Reynolds number, airfoil selection, frequency, amplitude and relative phasing. A gradient-based optimization algorithm has been employed to determine the wing kinematics that result in the minimum required power for a generalized aircraft or species in any given flight condition. In addition to the theoretical work, with the help of an extended team, the methodology was applied to the design and construction of the world's first successful human-powered ornithopter. The Snowbird Human-Powered Ornithopter is used as an example aircraft to show how additional design constraints can pose limits on the optimal kinematics. The results show significant trends that give insight into the kinematic operation of natural species. The general result is that additional complexity, whether it be larger twisting deformations or advanced wing-folding mechanisms, allows for the possibility of more efficient flight. At its theoretical optimum, the efficiency of flapping wings exceeds that of current rotors and propellers, although these efficiencies are quite difficult to achieve in practice.

  9. Optimal consensus algorithm integrated with obstacle avoidance

    NASA Astrophysics Data System (ADS)

    Wang, Jianan; Xin, Ming

    2013-01-01

    This article proposes a new consensus algorithm for networked single-integrator systems in an obstacle-laden environment. A novel optimal control approach is utilised to achieve not only multi-agent consensus but also obstacle avoidance capability with minimised control efforts. Three cost functional components are defined to fulfil the respective tasks. In particular, an innovative nonquadratic obstacle avoidance cost function is constructed from an inverse optimal control perspective. The other two components are designed to ensure consensus and constrain the control effort. The asymptotic stability and optimality are proven. In addition, the distributed and analytical optimal control law only requires local information based on the communication topology to guarantee the proposed behaviours, rather than all agents' information. The consensus and obstacle avoidance are validated through simulations.
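
    The single-integrator consensus part (without the obstacle-avoidance and control-effort costs) reduces to standard Laplacian feedback; the ring topology, initial states, and step size below are illustrative assumptions.

```python
import numpy as np

# Hypothetical 4-agent ring topology.
Adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
L = np.diag(Adj.sum(axis=1)) - Adj      # graph Laplacian

x = np.array([0.0, 2.0, 5.0, 9.0])      # initial agent states
dt = 0.1                                # step size (dt * lambda_max(L) < 2 for stability)
for _ in range(500):
    x = x - dt * (L @ x)                # each agent moves toward its neighbours

# For a connected undirected graph, all states converge to the initial average.
```

    The paper's contribution layers an inverse-optimal obstacle-avoidance cost and a control-effort penalty on top of this baseline behaviour.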

  10. Aerodynamic optimization of wind turbine rotor using CFD/AD method

    NASA Astrophysics Data System (ADS)

    Cao, Jiufa; Zhu, Weijun; Wang, Tongguang; Ke, Shitang

    2018-05-01

    The current work describes a novel technique for wind turbine rotor optimization. The aerodynamic design and optimization of a wind turbine rotor can be achieved with different methods, such as semi-empirical engineering methods and the more accurate computational fluid dynamics (CFD) method. The CFD method often provides more detailed aerodynamic features during the design process. However, high computational cost limits its application, especially for rotor optimization purposes. In this paper, a CFD-based actuator disc (AD) model is used to represent turbulent flow over a wind turbine rotor. The rotor is modeled as a permeable disc of equivalent area where the forces from the blades are distributed on the circular disc. The AD model is coupled with a Reynolds Averaged Navier-Stokes (RANS) solver such that the thrust and power are simulated. The design variables are the shape parameters comprising the chord, the twist and the relative thickness of the wind turbine rotor blade. The comparative aerodynamic performance is analyzed between the original and optimized reference wind turbine rotor. The results showed that the optimization framework can be effectively and accurately utilized in enhancing the aerodynamic performance of the wind turbine rotor.

  11. Multiple Detector Optimization for Hidden Radiation Source Detection

    DTIC Science & Technology

    2015-03-26

    important in achieving operationally useful methods for optimizing detector emplacement, the 2-D attenuation model approach promises to speed up the...process of hidden source detection significantly. The model focused on detection of the full energy peak of a radiation source. Methods to optimize... radioisotope identification is possible without using a computationally intensive stochastic model such as the Monte Carlo n-Particle (MCNP) code

  12. Predictive Analytics for Coordinated Optimization in Distribution Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Rui

    This talk will present NREL's work on developing predictive analytics that enable the optimal coordination of all the available resources in distribution systems to achieve the control objectives of system operators. Two projects will be presented. One focuses on developing short-term state forecasting-based optimal voltage regulation in distribution systems; the other focuses on actively engaging electricity consumers to benefit distribution system operations.

  13. Restrictive annuloplasty to treat functional mitral regurgitation: optimize the restriction to improve the results?

    PubMed

    Totaro, Pasquale; Adragna, Nicola; Argano, Vincenzo

    2008-03-01

    Today, the 'gold standard' treatment of functional mitral regurgitation (MR) is the subject of much discussion. Although restrictive annuloplasty is currently considered the most reproducible technique, the means by which the degree of annular restriction is optimized remains problematic. The study was designed to identify whether the degree of restriction of the mitral annulus could influence early and midterm results following the treatment of functional MR using restrictive annuloplasty. A total of 32 consecutive patients with functional MR grade > or = 3+ were enrolled; the mean anterior-posterior (AP) mitral annulus diameter was 39 +/- 3 mm. Restrictive mitral annuloplasty (combined with coronary artery bypass grafting) was performed in all patients using a Carpentier-Edwards Classic or Physio ring (size 26 or 28). The degree of AP annular restriction was calculated for each patient, and correlated with early and mid-term residual MR and left ventricular (LV) reverse remodeling (in terms of LV end-diastolic diameter (LVEDD) and LV end-diastolic volume (LVEDV) reduction). All surviving patients were examined at a one-year follow up. The mean AP mitral annulus restriction achieved was 48 +/- 4%. Intraoperatively, transesophageal echocardiography showed no residual MR in any patient. Before discharge from hospital, transthoracic echocardiography confirmed an absence of residual MR and showed significant LV reverse remodeling (LVEDV from 121 +/- 25 ml to 97 +/- 26 ml; LVEDD from 55 +/- 6 mm to 47 +/- 8 mm). A significant correlation (r = 0.57, p < 0.001) was identified between the degree of AP annulus restriction and LVEDV reduction. A cut-off of annular restriction of 40% (based on AP annulus measurement) correlated with a more significant reverse remodeling. The early postoperative data, with no recurrence of significant MR, were confirmed at a one-year follow up examination. A marked restriction of the AP mitral annulus diameter (> 40% of

  14. Optimization of the imaging response of scanning microwave microscopy measurements

    NASA Astrophysics Data System (ADS)

    Sardi, G. M.; Lucibello, A.; Kasper, M.; Gramse, G.; Proietti, E.; Kienberger, F.; Marcelli, R.

    2015-07-01

    In this work, we present the analytical modeling and preliminary experimental results for the choice of the optimal frequencies when performing amplitude and phase measurements with a scanning microwave microscope. In particular, the analysis is related to the reflection mode operation of the instrument, i.e., the acquisition of the complex reflection coefficient data, usually referred to as S11. The studied configuration is composed of an atomic force microscope with a microwave matched nanometric cantilever probe tip, connected by a λ/2 coaxial cable resonator to a vector network analyzer. The set-up is provided by Keysight Technologies. Notably, the optimal frequencies, at which maximum sensitivity is achieved, differ for the amplitude and the phase signals. The analysis is focused on measurements of dielectric samples, such as semiconductor devices, textile pieces, and biological specimens.

  15. Optimal discrimination of M coherent states with a small quantum computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silva, Marcus P. da; Guha, Saikat; Dutton, Zachary

    2014-12-04

    The ability to distinguish between coherent states optimally plays an important role in the efficient usage of quantum resources for classical communication and sensing applications. While it has been known since the early 1970s how to optimally distinguish between two coherent states, generalizations to larger sets of coherent states have so far failed to reach optimality. In this work we outline how optimality can be achieved by using a small quantum computer, building on recent proposals for optimal qubit state discrimination with multiple copies.

  16. Confidence in Science and Achievement Outcomes of Fourth-Grade Students in Korea: Results from the TIMSS 2011 Assessment

    ERIC Educational Resources Information Center

    House, J. Daniel; Telese, James A.

    2017-01-01

    Findings from assessments of fourth-grade science have indicated that students in Korea scored higher than international averages. Research results have also shown that attitudes toward science were related to achievement outcomes for Korean students. The purpose of this study was to examine the relationship between confidence in science and…

  17. Design and Optimization of a 3-Coil Inductive Link for Efficient Wireless Power Transmission.

    PubMed

    Kiani, Mehdi; Jow, Uei-Ming; Ghovanloo, Maysam

    2011-07-14

    Inductive power transmission is widely used to energize implantable microelectronic devices (IMDs), recharge batteries, and power energy harvesters. Power transfer efficiency (PTE) and power delivered to the load (PDL) are two key parameters in wireless links, which affect the energy source specifications, heat dissipation, power transmission range, and interference with other devices. To improve the PTE, a 4-coil inductive link has been recently proposed. Through a comprehensive circuit-based analysis that can guide a design and optimization scheme, we have shown that despite achieving high PTE at larger coil separations, the 4-coil inductive links fail to achieve a high PDL. Instead, we have proposed a 3-coil inductive power transfer link with a PTE at large coupling distances comparable to that of its 4-coil counterpart, which can also achieve a high PDL. We have also devised an iterative design methodology that provides the optimal coil geometries in a 3-coil inductive power transfer link. Design examples of 2-, 3-, and 4-coil inductive links have been presented, and optimized for 13.56 MHz carrier frequency and 12 cm coupling distance, showing PTEs of 15%, 37%, and 35%, respectively. At this distance, the PDL of the proposed 3-coil inductive link is 1.5 and 59 times higher than its equivalent 2- and 4-coil links, respectively. For short coupling distances, however, 2-coil links remain the optimal choice when a high PDL is required, while 4-coil links are preferred when the driver has a large output resistance or only small power is needed. These results have been verified through simulations and measurements.
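
    For context, the 2-coil baseline against which 3- and 4-coil links are compared has a closed-form efficiency in the series-resonant model. The expression below is the textbook form (an assumption here, since the abstract does not state it) in terms of the coupling coefficient k and the quality factors Q1, Q2, and QL.

```python
def pte_2coil(k: float, q1: float, q2: float, q_load: float) -> float:
    """Power transfer efficiency of a series-resonant 2-coil link.

    q2l is the loaded quality factor of the secondary coil.
    """
    q2l = q2 * q_load / (q2 + q_load)
    figure_of_merit = k ** 2 * q1 * q2l
    return (figure_of_merit / (1.0 + figure_of_merit)) * (q2l / q_load)
```

    The k²Q1Q2L product shows why PTE collapses at large separations (k falls off rapidly with distance), which is the regime where the 3-coil arrangement is claimed to help.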

  18. MIMO-OFDM signal optimization for SAR imaging radar

    NASA Astrophysics Data System (ADS)

    Baudais, J.-Y.; Méric, S.; Riché, V.; Pottier, É.

    2016-12-01

    This paper investigates the optimization of the coded orthogonal frequency division multiplexing (OFDM) transmitted signal in a synthetic aperture radar (SAR) context. We propose to design OFDM signals to achieve range ambiguity mitigation. Indeed, range ambiguities are well known to be a limitation for SAR systems that operate with pulsed transmitted signals. The ambiguous reflected signal corresponding to one pulse is then detected when the radar has already transmitted the next pulse. In this paper, we demonstrate that range ambiguity mitigation is possible by using orthogonal transmitted waves as OFDM pulses. The coded OFDM signal is optimized through genetic optimization procedures based on radar image quality parameters. Moreover, we propose a multiple-input multiple-output (MIMO) configuration to enhance the noise robustness of the radar system; this configuration is most effective when orthogonal waves are used as OFDM pulses. The results we obtain show that OFDM signals outperform conventional radar chirps for range ambiguity suppression and for robustness enhancement in a 2×2 MIMO configuration.

  19. Prepositioning emergency supplies under uncertainty: a parametric optimization method

    NASA Astrophysics Data System (ADS)

    Bai, Xuejie; Gao, Jinwu; Liu, Yankui

    2018-07-01

    Prepositioning of emergency supplies is an effective method for increasing preparedness for disasters and has received much attention in recent years. In this article, the prepositioning problem is studied by a robust parametric optimization method. The transportation cost, supply, demand and capacity are unknown prior to the extraordinary event, which are represented as fuzzy parameters with variable possibility distributions. The variable possibility distributions are obtained through the credibility critical value reduction method for type-2 fuzzy variables. The prepositioning problem is formulated as a fuzzy value-at-risk model to achieve a minimum total cost incurred in the whole process. The key difficulty in solving the proposed optimization model is to evaluate the quantile of the fuzzy function in the objective and the credibility in the constraints. The objective function and constraints can be turned into their equivalent parametric forms through chance constrained programming under the different confidence levels. Taking advantage of the structural characteristics of the equivalent optimization model, a parameter-based domain decomposition method is developed to divide the original optimization problem into six mixed-integer parametric submodels, which can be solved by standard optimization solvers. Finally, to explore the viability of the developed model and the solution approach, some computational experiments are performed on realistic scale case problems. The computational results reported in the numerical example show the credibility and superiority of the proposed parametric optimization method.

  20. Optimization of coronagraph design for segmented aperture telescopes

    NASA Astrophysics Data System (ADS)

    Jewell, Jeffrey; Ruane, Garreth; Shaklan, Stuart; Mawet, Dimitri; Redding, Dave

    2017-09-01

    The goal of directly imaging Earth-like planets in the habitable zone of other stars has motivated the design of coronagraphs for use with large segmented aperture space telescopes. In order to achieve an optimal trade-off between planet light throughput and diffracted starlight suppression, we consider coronagraphs comprised of a stage of phase control implemented with deformable mirrors (or other optical elements), pupil plane apodization masks (gray scale or complex valued), and focal plane masks (either amplitude only or complex-valued, including phase only such as the vector vortex coronagraph). The optimization of these optical elements, with the goal of achieving 10 or more orders of magnitude in the suppression of on-axis (starlight) diffracted light, represents a challenging non-convex optimization problem with a nonlinear dependence on control degrees of freedom. We develop a new algorithmic approach to the design optimization problem, which we call the "Auxiliary Field Optimization" (AFO) algorithm. The central idea of the algorithm is to embed the original optimization problem, for either phase or amplitude (apodization) in various planes of the coronagraph, into a problem containing additional degrees of freedom, specifically fictitious "auxiliary" electric fields which serve as targets to inform the variation of our phase or amplitude parameters leading to good feasible designs. We present the algorithm, discuss details of its numerical implementation, and prove convergence to local minima of the objective function (here taken to be the intensity of the on-axis source in a "dark hole" region in the science focal plane). Finally, we present results showing application of the algorithm to both unobscured off-axis and obscured on-axis segmented telescope aperture designs. The application of the AFO algorithm to the coronagraph design problem has produced solutions which are capable of directly imaging planets in the habitable zone, provided end

  1. Lipid Encapsulation Provides Insufficient Total-Tract Digestibility to Achieve an Optimal Transfer Efficiency of Fatty Acids to Milk Fat

    PubMed Central

    Bainbridge, Melissa; Kraft, Jana

    2016-01-01

    Transfer efficiencies of rumen-protected n-3 fatty acids (FA) to milk are low, thus we hypothesized that rumen-protection technologies allow for biohydrogenation and excretion of n-3 FA. The objectives of this study were to i) investigate the ruminal protection and post-ruminal release of the FA derived from the lipid-encapsulated echium oil (EEO), and ii) assess the bioavailability and metabolism of the EEO-derived FA through measuring the FA content in plasma lipid fractions, feces, and milk. The EEO was tested for rumen stability using the in situ nylon bag technique, then the apparent total-tract digestibility was assessed in vivo using six Holstein dairy cattle. Diets consisted of a control (no EEO); 1.5% of dry matter (DM) as EEO and 1.5% DM as encapsulation matrix; and 3% DM as EEO. The EEO was rumen-stable and had no effect on animal production. EEO-derived FA were incorporated into all plasma lipid fractions, with the highest proportion of n-3 FA observed in cholesterol esters. Fecal excretion of EEO-derived FA ranged from 7–14%. Biohydrogenation products increased in milk, plasma, and feces with EEO supplementation. In conclusion, lipid-encapsulation provides inadequate digestibility to achieve an optimal transfer efficiency of n-3 FA to milk. PMID:27741299

  2. Optimal Signal Processing of Frequency-Stepped CW Radar Data

    NASA Technical Reports Server (NTRS)

    Ybarra, Gary A.; Wu, Shawkang M.; Bilbro, Griff L.; Ardalan, Sasan H.; Hearn, Chase P.; Neece, Robert T.

    1995-01-01

    An optimal signal processing algorithm is derived for estimating the time delay and amplitude of each scatterer reflection using a frequency-stepped CW system. The channel is assumed to be composed of abrupt changes in the reflection coefficient profile. The optimization technique is intended to maximize the target range resolution achievable from any set of frequency-stepped CW radar measurements made in such an environment. The algorithm is composed of an iterative two-step procedure. First, the amplitudes of the echoes are optimized by solving an overdetermined least squares set of equations. Then, a nonlinear objective function is scanned in an organized fashion to find its global minimum. The result is a set of echo strengths and time delay estimates. Although this paper addresses the specific problem of resolving the time delay between the first two echoes, the derivation is general in the number of echoes. Performance of the optimization approach is illustrated using measured data obtained from an HP-X510 network analyzer. It is demonstrated that the optimization approach offers a significant resolution enhancement over the standard processing approach that employs an IFFT. Degradation in the performance of the algorithm due to suboptimal model order selection and the effects of additive white Gaussian noise are addressed.
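
    The iterative two-step procedure can be sketched as follows. The frequency grid, delays, noise level, and the coarse exhaustive scan standing in for the paper's organized search are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

freqs = np.linspace(8e9, 9e9, 64)        # hypothetical stepped-frequency grid (Hz)
true_tau = np.array([10e-9, 12e-9])      # two echo delays (s)
true_amp = np.array([1.0, 0.6])

def model_matrix(taus):
    """Columns are complex exponentials, one per candidate echo delay."""
    return np.exp(-2j * np.pi * np.outer(freqs, taus))

meas = model_matrix(true_tau) @ true_amp
meas = meas + 0.01 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))

# Step 1: for each candidate delay pair, solve the overdetermined least-squares
# problem for the echo amplitudes.  Step 2: scan the delay grid and keep the
# pair with the smallest residual.
grid = np.arange(8e-9, 14e-9, 0.05e-9)
best_resid, best_tau = np.inf, None
for i, t1 in enumerate(grid):
    for t2 in grid[i + 1:]:
        E = model_matrix([t1, t2])
        amp, *_ = np.linalg.lstsq(E, meas, rcond=None)
        resid = np.linalg.norm(meas - E @ amp)
        if resid < best_resid:
            best_resid, best_tau = resid, (t1, t2)
```

    With 1 GHz of swept bandwidth the conventional IFFT resolution is about 1 ns; the model-based search can keep working below that limit, which is the enhancement the paper reports.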

  4. Multi-objective Optimization of a Solar Humidification Dehumidification Desalination Unit

    NASA Astrophysics Data System (ADS)

    Rafigh, M.; Mirzaeian, M.; Najafi, B.; Rinaldi, F.; Marchesi, R.

    2017-11-01

    In the present paper, a humidification-dehumidification desalination unit integrated with a solar system is considered. In the first step, a mathematical model of the whole plant is presented. Next, taking into account the logical constraints, the performance of the system is optimized. On one hand, it is desired to have higher energetic efficiency; on the other hand, higher efficiency results in an increment in the required area for each subsystem, which consequently leads to an increase in the total cost of the plant. In the present work, the optimum solution is achieved when the specific energy of the solar heater and the areas of the humidifier and dehumidifier are minimized. Because the considered objective functions are in conflict, conventional optimization methods are not applicable. Hence, multi-objective optimization using a genetic algorithm, an efficient tool for dealing with problems with conflicting objectives, has been utilized, and a set of optimal solutions called the Pareto front, each of which is a trade-off between the mentioned objectives, is generated.
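The Pareto front mentioned above is the set of non-dominated solutions: designs that cannot be improved in one objective without worsening another. A minimal sketch of the non-domination filter, assuming all objectives are minimized (the objective values below are hypothetical, not from the paper):

```python
import numpy as np

def pareto_front(points):
    """Return indices of the non-dominated subset (all objectives minimized)."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some other point is <= in every objective and < in one.
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(pts) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical (specific energy, total area) pairs for candidate designs.
designs = [(2.0, 5.0), (3.0, 3.0), (4.0, 4.0), (5.0, 1.0), (2.5, 6.0)]
front = pareto_front(designs)  # indices of the trade-off (Pareto-optimal) designs
```

Design 2 is dominated by design 1 and design 4 by design 0, so only the genuine trade-offs survive the filter.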

  5. CT dose minimization using personalized protocol optimization and aggressive bowtie

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Yin, Zhye; Jin, Yannan; Wu, Mingye; Yao, Yangyang; Tao, Kun; Kalra, Mannudeep K.; De Man, Bruno

    2016-03-01

    In this study, we propose to use patient-specific x-ray fluence control to reduce the radiation dose to sensitive organs while still achieving the desired image quality (IQ) in the region of interest (ROI). The mA modulation profile is optimized view by view, based on the sensitive organs and the ROI, which are obtained from an ultra-low-dose volumetric CT scout scan [1]. We use a clinical chest CT scan to demonstrate the feasibility of the proposed concept: the breast region is selected as the sensitive organ region while the cardiac region is selected as the IQ ROI. Two groups of simulations are performed based on the clinical CT dataset: (1) a constant mA scan adjusted based on the patient attenuation (120 kVp, 300 mA), which serves as baseline; (2) an optimized scan with aggressive bowtie and ROI centering combined with patient-specific mA modulation. The results show that the combination of the aggressive bowtie and the optimized mA modulation can result in a 40% dose reduction in the breast region, while the IQ in the cardiac region is maintained. More generally, this paper demonstrates the concept of using a 3D scout scan for optimal scan planning.

  6. Trade-offs and efficiencies in optimal budget-constrained multispecies corridor networks.

    PubMed

    Dilkina, Bistra; Houtman, Rachel; Gomes, Carla P; Montgomery, Claire A; McKelvey, Kevin S; Kendall, Katherine; Graves, Tabitha A; Bernstein, Richard; Schwartz, Michael K

    2017-02-01

    Conservation biologists recognize that a system of isolated protected areas will be necessary but insufficient to meet biodiversity objectives. Current approaches to connecting core conservation areas through corridors consider optimal corridor placement based on a single optimization goal: commonly, maximizing the movement for a target species across a network of protected areas. We show that designing corridors for single species based on purely ecological criteria leads to extremely expensive linkages that are suboptimal for multispecies connectivity objectives. Similarly, acquiring the least-expensive linkages leads to ecologically poor solutions. We developed algorithms for optimizing corridors for multispecies use given a specific budget. We applied our approach in western Montana to demonstrate how the solutions may be used to evaluate trade-offs in connectivity for 2 species with different habitat requirements, different core areas, and different conservation values under different budgets. We evaluated corridors that were optimal for each species individually and for both species jointly. Incorporating a budget constraint and jointly optimizing for both species resulted in corridors that were close to the individual species movement-potential optima but with substantial cost savings. Our approach produced corridors that were within 14% and 11% of the best possible corridor connectivity for grizzly bears (Ursus arctos) and wolverines (Gulo gulo), respectively, and saved 75% of the cost. Similarly, joint optimization under a combined budget resulted in improved connectivity for both species relative to splitting the budget in 2 to optimize for each species individually. Our results demonstrate economies of scale and complementarities conservation planners can achieve by optimizing corridor designs for financial costs and for multiple species connectivity jointly. We believe that our approach will facilitate corridor conservation by reducing acquisition costs.

  7. Trade-offs and efficiencies in optimal budget-constrained multispecies corridor networks

    USGS Publications Warehouse

    Dilkina, Bistra; Houtman, Rachel; Gomes, Carla P.; Montgomery, Claire A.; McKelvey, Kevin; Kendall, Katherine; Graves, Tabitha A.; Bernstein, Richard; Schwartz, Michael K.

    2017-01-01

    Conservation biologists recognize that a system of isolated protected areas will be necessary but insufficient to meet biodiversity objectives. Current approaches to connecting core conservation areas through corridors consider optimal corridor placement based on a single optimization goal: commonly, maximizing the movement for a target species across a network of protected areas. We show that designing corridors for single species based on purely ecological criteria leads to extremely expensive linkages that are suboptimal for multispecies connectivity objectives. Similarly, acquiring the least-expensive linkages leads to ecologically poor solutions. We developed algorithms for optimizing corridors for multispecies use given a specific budget. We applied our approach in western Montana to demonstrate how the solutions may be used to evaluate trade-offs in connectivity for 2 species with different habitat requirements, different core areas, and different conservation values under different budgets. We evaluated corridors that were optimal for each species individually and for both species jointly. Incorporating a budget constraint and jointly optimizing for both species resulted in corridors that were close to the individual species movement-potential optima but with substantial cost savings. Our approach produced corridors that were within 14% and 11% of the best possible corridor connectivity for grizzly bears (Ursus arctos) and wolverines (Gulo gulo), respectively, and saved 75% of the cost. Similarly, joint optimization under a combined budget resulted in improved connectivity for both species relative to splitting the budget in 2 to optimize for each species individually. Our results demonstrate economies of scale and complementarities conservation planners can achieve by optimizing corridor designs for financial costs and for multiple species connectivity jointly. We believe that our approach will facilitate corridor conservation by reducing acquisition costs.

  8. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy.

    PubMed

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-05

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Furthermore, the SVDLP
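The SVD compression step can be illustrated in isolation. The sketch below is not the paper's LP model (which adds hard and soft dose constraints and an l1 sparsity term); it only shows the core acceleration idea: when the influence matrix is degenerate, optimize in the reduced space spanned by its significant singular vectors and back-project to recover full beam weights. The matrix, dimensions, and target are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical influence matrix: dose to 200 voxels from 50 beams,
# deliberately degenerate (effective rank 10).
D = rng.standard_normal((200, 10)) @ rng.standard_normal((10, 50))
d_target = rng.standard_normal(200)

# Compress: truncated SVD keeps only the significant singular directions.
U, s, Vt = np.linalg.svd(D, full_matrices=False)
r = int(np.sum(s > 1e-8 * s[0]))          # numerical rank of the influence matrix
Ur, sr, Vr = U[:, :r], s[:r], Vt[:r, :].T

# Optimize in the r-dimensional space: with w = Vr @ c, D @ w = Ur @ diag(sr) @ c,
# so the fit is solved against a 200 x r system instead of 200 x 50.
c, *_ = np.linalg.lstsq(Ur * sr, d_target, rcond=None)

# Back-project to reconstruct the full beam-weight vector.
w = Vr @ c
```

The reduced solve reproduces the same dose fit as the full least squares problem, because the discarded directions contribute nothing to the achievable dose.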

  9. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy

    NASA Astrophysics Data System (ADS)

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-01

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Furthermore, the SVDLP

  10. The association between achieving low-density lipoprotein cholesterol (LDL-C) goal and statin treatment in an employee population.

    PubMed

    Burton, Wayne N; Chen, Chin-Yu; Schultz, Alyssa B; Edington, Dee W

    2010-02-01

    Statin medications are recommended for patients who have not achieved low-density lipoprotein cholesterol (LDL-C) goals through lifestyle modifications. The objective of this retrospective observational study was to examine statin medication usage patterns and the relationship with LDL-C goal levels (according to Adult Treatment Panel III guidelines) among a cohort of employees of a major financial services corporation. From 1995 to 2004, a total of 1607 executives participated in a periodic health examination program. An index date was assigned for each study participant (date of their exam) and statin medication usage was determined from the pharmacy claims database for 365 days before the index date. Patients were identified as adherent to statins if the medication possession ratio was ≥80%. In all, 150 (9.3%) executives filled at least 1 statin prescription in the 365 days prior to their exam. A total of 102 statin users (68%) were adherent to statin medication. Among all executives who received statin treatment, 70% (odds ratio [OR] = 2.30, 95% confidence interval [CI] = 1.82, 2.90) achieved near-optimal (<130 mg/dL) and 30% (OR = 1.78, 95% CI = 1.15, 2.76) achieved optimal (<100 mg/dL) LDL-C goals, which is significantly higher than the rates among statin nonusers (55% and 21%). Adherent statin users were more likely to achieve recommended near-optimal LDL-C goals compared to statin nonusers (overall P = 0.002; adherent: OR = 2.75, 95% CI = 1.662, 4.550), while nonadherent statin users were more likely to achieve the optimal goal compared to statin nonusers (OR = 2.223, 95% CI = 1.145, 4.313). Statin usage was associated with improvements in LDL-C goal attainment among executives who participated in a periodic health examination. Appropriate statin medication adherence should be encouraged in working populations in order to achieve LDL-C goals.
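The two quantities driving this analysis, the medication possession ratio and the odds ratio, are simple to compute. A sketch with hypothetical counts, not the study's data:

```python
def medication_possession_ratio(days_supplied, period_days=365):
    # MPR: proportion of the observation period covered by filled prescriptions.
    return days_supplied / period_days

def odds_ratio(exposed_yes, exposed_no, unexposed_yes, unexposed_no):
    # OR from a 2x2 table: (a/b) / (c/d).
    return (exposed_yes / exposed_no) / (unexposed_yes / unexposed_no)

# A patient with 300 days of statin supply in the year before the exam
# crosses the 80% adherence threshold used in the study.
adherent = medication_possession_ratio(300) >= 0.80

# Hypothetical counts: goal attainment among statin users vs nonusers.
or_goal = odds_ratio(70, 30, 55, 45)
```

Here 300/365 ≈ 0.82, so the patient counts as adherent, and the hypothetical table gives an OR of 21/11 ≈ 1.91.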

  11. Optimization of free ammonia concentration for nitrite accumulation in shortcut biological nitrogen removal process.

    PubMed

    Chung, Jinwook; Shim, Hojae; Park, Seong-Jun; Kim, Seung-Jin; Bae, Wookeun

    2006-03-01

    A shortcut biological nitrogen removal (SBNR) process utilizes the concept of a direct conversion of ammonium to nitrite and then to nitrogen gas. A successful SBNR requires accumulation of nitrite in the system and inhibition of the activity of nitrite oxidizers. A high concentration of free ammonia (FA) inhibits nitrite oxidizers, but unfortunately decreases the ammonium removal rate as well. Therefore, an optimal range of FA concentration is necessary not only to stabilize nitrite accumulation but also to achieve maximum ammonium removal. In order to derive such optimal FA concentrations, the specific substrate utilization rates of ammonium and nitrite oxidizers were measured. The optimal FA concentration range appeared to be 5-10 mg/L for the adapted sludge. The simulated results from the modified inhibition model, expressed in terms of FA and ammonium/nitrite concentrations, were shown to be very similar to the experimental results.
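The abstract does not state how FA was computed, but free ammonia is conventionally derived from total ammonia nitrogen, pH, and temperature via the widely cited Anthonisen relation; the sketch below assumes that form.

```python
import math

def free_ammonia(tan_mg_l, pH, temp_c):
    """Free ammonia (mg NH3/L) from total ammonia nitrogen (mg N/L), pH, and
    temperature, using the Anthonisen relation (an assumption here; the
    abstract does not say which formula the authors used)."""
    ka_term = math.exp(6344.0 / (273.0 + temp_c))
    return (17.0 / 14.0) * tan_mg_l * 10**pH / (ka_term + 10**pH)

# At 30 degC and pH 8, a TAN of 100 mg N/L lands inside the reported
# optimal 5-10 mg/L FA window.
fa = free_ammonia(100.0, 8.0, 30.0)
```

Because FA rises steeply with pH and temperature, operators can hold the 5-10 mg/L window by adjusting either variable rather than the ammonium load itself.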

  12. An optimized surface plasmon photovoltaic structure using energy transfer between discrete nano-particles.

    PubMed

    Lin, Albert; Fu, Sze-Ming; Chung, Yen-Kai; Lai, Shih-Yun; Tseng, Chi-Wei

    2013-01-14

    Surface plasmon enhancement has been proposed as a way to achieve higher absorption for thin-film photovoltaics, where surface plasmon polariton (SPP) and localized surface plasmon (LSP) modes are shown to provide dense near-field and far-field light scattering. Here it is shown that controlled far-field light scattering can be achieved using successive coupling between surface plasmonic (SP) nano-particles. Through genetic algorithm (GA) optimization, energy transfer between discrete nano-particles (ETDNP) is identified, which enhances solar cell efficiency. The optimized energy transfer structure acts like a lumped-element transmission line and can properly alter the direction of photon flow. An increased in-plane component of the wavevector is thus achieved and the photon path length is extended. In addition, the Wood-Rayleigh anomaly, at which a transmission minimum occurs, is avoided through GA optimization. The optimized energy transfer structure provides a 46.95% improvement over the baseline planar cell. It achieves a larger angular scattering capability compared to the conventional surface plasmon polariton back reflector structure and the index-guided structure due to SP energy transfer through mode coupling. Via SP-mediated energy transfer, an alternative way to control the light flow inside a thin film is proposed, which can be more efficient than the conventional index-guided mode using total internal reflection (TIR).

  13. Optimal and robust control of quantum state transfer by shaping the spectral phase of ultrafast laser pulses.

    PubMed

    Guo, Yu; Dong, Daoyi; Shu, Chuan-Cun

    2018-04-04

    Achieving fast and efficient quantum state transfer is a fundamental task in physics, chemistry and quantum information science. However, the successful implementation of perfect quantum state transfer also requires robustness under practically inevitable perturbative defects. Here, we demonstrate how optimal and robust quantum state transfer can be achieved by shaping the spectral phase of an ultrafast laser pulse in the framework of frequency-domain quantum optimal control theory. Our numerical simulations of a single dibenzoterrylene molecule as well as of atomic rubidium show that optimal and robust quantum state transfer via spectral phase modulated laser pulses can be achieved by incorporating a filtering function of the frequency into the optimization algorithm, which in turn has potential applications for ultrafast robust control of photochemical reactions.

  14. Mathematics beliefs and achievement of adolescent students in Japan: results from the TIMSS 1999 assessment.

    PubMed

    House, J Daniel

    2005-12-01

    A recent study (1) of undergraduate students in a precalculus course indicated that they expressed slightly positive attitudes toward mathematics. It is important, however, to examine relationships between students' initial attitudes and achievement outcomes. The present purpose was to assess the relationship between self-beliefs and mathematics achievement for a large national sample of students from the TIMSS 1999 international sample (eighth graders) from Japan. Several significant relationships between mathematics beliefs and test scores were noted. In addition, the overall multiple regression equation that assessed the joint significance of the complete set of self-belief variables was significant (F(7,65) = 159.48, p < .001) and explained 20.6% of the variance in mathematics achievement test scores.
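The reported figure, 20.6% of variance explained, is the R² of an OLS multiple regression of achievement scores on the seven self-belief variables. A minimal sketch on synthetic data (the predictors and sample below are invented, not the TIMSS data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: 7 self-belief predictors for 500 students
# (hypothetical data, not the TIMSS sample).
n, k = 500, 7
X = rng.standard_normal((n, k))
beta = rng.standard_normal(k)
y = X @ beta + rng.standard_normal(n) * 2.0

# OLS fit with an intercept column.
Xd = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)

# R^2: proportion of outcome variance explained by the predictors jointly.
resid = y - Xd @ coef
r_squared = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
```

The joint F test reported in the abstract asks whether this R² is significantly greater than zero given the number of predictors and the sample size.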

  15. Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter

    PubMed Central

    Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Gu, Chengfan

    2018-01-01

    This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively suppresses the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation. PMID:29415509
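The top-level fusion rule, linear minimum variance, weights each local estimate by its inverse covariance. A sketch for two local filters (the estimates and covariances below are hypothetical; the paper's local filters are adaptive fading UKFs):

```python
import numpy as np

def lmv_fuse(estimates, covariances):
    """Fuse N independent local state estimates by linear minimum variance:
    the fused information is the sum of local informations, and each estimate
    is weighted by its inverse covariance (sketch of the top-level rule only)."""
    infos = [np.linalg.inv(P) for P in covariances]
    P_fused = np.linalg.inv(sum(infos))
    x_fused = P_fused @ sum(I @ x for I, x in zip(infos, estimates))
    return x_fused, P_fused

# Two local filters estimating the same 2-state vector with different accuracy.
x1, P1 = np.array([1.0, 2.0]), np.diag([0.5, 1.0])
x2, P2 = np.array([1.2, 1.8]), np.diag([1.0, 0.5])
x, P = lmv_fuse([x1, x2], [P1, P2])
```

Each fused component leans toward the filter that is more confident about it, and the fused covariance is smaller than either local covariance.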

  16. Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter.

    PubMed

    Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Zhong, Yongmin; Gu, Chengfan

    2018-02-06

    This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively suppresses the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation.

  17. Design of underwater robot lines based on a hybrid automatic optimization strategy

    NASA Astrophysics Data System (ADS)

    Lyu, Wenjing; Luo, Weilin

    2014-09-01

    In this paper, a hybrid automatic optimization strategy is proposed for the design of underwater robot lines. Isight is introduced as an integration platform. The construction of this platform is based on user programming and several commercial software packages, including UG6.0, GAMBIT2.4.6 and FLUENT12.0. An intelligent parameter optimization method, particle swarm optimization, is incorporated into the platform. To verify the proposed strategy, a simulation is conducted on the underwater robot model 5470, which originates from the DTRC SUBOFF project. With the automatic optimization platform, the minimal resistance is taken as the optimization goal; the wet surface area as the constraint condition; and the length of the fore-body, the maximum body radius and the after-body's minimum radius as the design variables. In the CFD calculation, the RANS equations and the standard turbulence model are used for the numerical simulation. Analysis of the simulation results shows that the platform is efficient and feasible. Through the platform, a variety of schemes for the design of the lines are generated and the optimal solution is achieved. The combination of the intelligent optimization algorithm and the numerical simulation ensures a global optimal solution and improves the efficiency of searching for solutions.
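The particle swarm step embedded in the platform follows the standard velocity/position update. A minimal sketch with a toy objective standing in for the CFD-computed resistance (swarm size, coefficients, and the objective are all hypothetical):

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer using the standard velocity update
    v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x); the paper's objective
    is CFD-computed resistance, replaced here by a cheap toy function."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)          # keep particles inside the bounds
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy stand-in for the resistance objective: minimum at (1, 2, 3).
obj = lambda p: float(np.sum((p - np.array([1.0, 2.0, 3.0]))**2))
best_x, best_f = pso(obj, bounds=[(-5, 5)] * 3)
```

In the platform each objective call would launch a CFD run, which is why the swarm size and iteration budget matter far more there than in this toy setting.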

  18. Life cycle optimization model for integrated cogeneration and energy systems applications in buildings

    NASA Astrophysics Data System (ADS)

    Osman, Ayat E.

    Energy use in commercial buildings constitutes a major proportion of the energy consumption and anthropogenic emissions in the USA. Cogeneration systems offer an opportunity to meet a building's electrical and thermal demands from a single energy source. To answer the question of what is the most beneficial and cost-effective energy source(s) that can be used to meet the energy demands of the building, optimization techniques have been implemented in some studies to find the optimum energy system based on reducing cost and maximizing revenues. Due to the significant environmental impacts that can result from meeting the energy demands in buildings, building design should incorporate environmental criteria in the decision-making criteria. The objective of this research is to develop a framework and model to optimize a building's operation by integrating cogeneration systems and utility systems in order to meet the electrical, heating, and cooling demands by considering the potential life cycle environmental impact that might result from meeting those demands as well as the economic implications. Two LCA optimization models have been developed within a framework that uses hourly building energy data, life cycle assessment (LCA), and mixed-integer linear programming (MILP). The objective functions that are used in the formulation of the problems include: (1) Minimizing life cycle primary energy consumption, (2) Minimizing global warming potential, (3) Minimizing tropospheric ozone precursor potential, (4) Minimizing acidification potential, (5) Minimizing NOx, SO2 and CO2, and (6) Minimizing life cycle costs, considering a study period of ten years and the lifetime of equipment. The two LCA optimization models can be used for: (a) long-term planning and operational analysis in buildings by analyzing the hourly energy use of a building during a day and (b) design and quick analysis of building operation based on periodic analysis of energy use of a building in a

  19. Manufacturing of glassy thin shell for adaptive optics: results achieved

    NASA Astrophysics Data System (ADS)

    Poutriquet, F.; Rinchet, A.; Carel, J.-L.; Leplan, H.; Ruch, E.; Geyl, R.; Marque, G.

    2012-07-01

    Glassy thin shells are key components for the development of adaptive optics and are part of future and innovative projects such as the ELT. However, manufacturing thin shells is a real challenge. Even though optical requirements for the front face - or optical face - are relaxed compared to conventional passive mirrors, requirements concerning thickness uniformity are difficult to achieve. In addition, the process has to be completely redefined, as a thin mirror generates new manufacturing issues. In particular, the scratch-and-dig requirement is more difficult to meet, as defects could weaken the shell; handling is also an important issue due to the fragility of the mirror. Sagem, through its REOSC program, has recently manufactured different types of thin shells in the framework of European projects: E-ELT M4 prototypes and the VLT Deformable Secondary Mirror (VLT DSM).

  20. Preliminary Analysis of Optimal Round Trip Lunar Missions

    NASA Astrophysics Data System (ADS)

    Gagg Filho, L. A.; da Silva Fernandes, S.

    2015-10-01

    A study of optimal bi-impulsive trajectories for round trip lunar missions is presented in this paper. The optimization criterion is the total velocity increment. The dynamical model utilized to describe the motion of the space vehicle is a full lunar patched-conic approximation, which comprises the lunar patched-conic of the outgoing trip and the lunar patched-conic of the return mission. Each of these parts is considered separately to solve an optimization problem of two degrees of freedom. The Sequential Gradient Restoration Algorithm (SGRA) is employed to achieve the optimal solutions, which show good agreement with those provided in the literature and proved to be consistent with the image trajectories theorem.

  1. Investigating multi-objective fluence and beam orientation IMRT optimization

    NASA Astrophysics Data System (ADS)

    Potrebko, Peter S.; Fiege, Jason; Biagioli, Matthew; Poleszczuk, Jan

    2017-07-01

    Radiation Oncology treatment planning requires compromises to be made between clinical objectives that are invariably in conflict. It would be beneficial to have a ‘bird’s-eye-view’ perspective of the full spectrum of treatment plans that represent the possible trade-offs between delivering the intended dose to the planning target volume (PTV) while optimally sparing the organs-at-risk (OARs). In this work, the authors demonstrate Pareto-aware radiotherapy evolutionary treatment optimization (PARETO), a multi-objective tool featuring such bird’s-eye-view functionality, which optimizes fluence patterns and beam angles for intensity-modulated radiation therapy (IMRT) treatment planning. The problem of IMRT treatment plan optimization is managed as a combined monolithic problem, where all beam fluence and angle parameters are treated equally during the optimization. To achieve this, PARETO is built around a powerful multi-objective evolutionary algorithm, called Ferret, which simultaneously optimizes multiple fitness functions that encode the attributes of the desired dose distribution for the PTV and OARs. The graphical interfaces within PARETO provide useful information such as: the convergence behavior during optimization, trade-off plots between the competing objectives, and a graphical representation of the optimal solution database allowing for the rapid exploration of treatment plan quality through the evaluation of dose-volume histograms and isodose distributions. PARETO was evaluated for two relatively complex clinical cases, a paranasal sinus and a pancreas case. The end result of each PARETO run was a database of optimal (non-dominated) treatment plans that demonstrated trade-offs between the OAR and PTV fitness functions, which were all equally good in the Pareto-optimal sense (where no one objective can be improved without worsening at least one other). Ferret was able to produce high quality solutions even though a large number of parameters

  2. Optimize scientific communication skills on work and energy concept with implementation of interactive conceptual instruction and multi representation approach

    NASA Astrophysics Data System (ADS)

    Patriot, E. A.; Suhandi, A.; Chandra, D. T.

    2018-05-01

    The ultimate goal of learning in the 2013 curriculum is that learning must improve and balance both the soft skills and hard skills of learners. In addition to the knowledge aspect, one of the other skills to be trained in a learning process using a scientific approach is communication skill. This study aims to give an overview of the implementation of interactive conceptual instruction with a multi-representation approach to optimize the achievement of students' scientific communication skills on the work and energy concept. Scientific communication skills comprise the sub-skills of searching for information, scientific writing, group discussion, and knowledge presentation. This was a descriptive study using an observation method. The subjects were 35 students of class X in a senior high school in Sumedang. The results indicate optimal achievement of scientific communication skills. The greatest achievement of KKI based on observation was at the fourth meeting for KKI-3, the resume-writing sub-skill, at 89%. Almost all students responded positively to the implementation of interactive conceptual instruction with the multi-representation approach. It can be concluded that implementing interactive conceptual instruction with a multi-representation approach can optimize the achievement of students' scientific communication skills on the work and energy concept.

  3. Distributed Bees Algorithm Parameters Optimization for a Cost Efficient Target Allocation in Swarms of Robots

    PubMed Central

    Jevtić, Aleksandar; Gutiérrez, Álvaro

    2011-01-01

    Swarms of robots can use their sensing abilities to explore unknown environments and deploy on sites of interest. In this task, a large number of robots is more effective than a single unit because of their ability to quickly cover the area. However, the coordination of large teams of robots is not an easy problem, especially when the resources for the deployment are limited. In this paper, the Distributed Bees Algorithm (DBA), previously proposed by the authors, is optimized and applied to distributed target allocation in swarms of robots. Improved target allocation in terms of deployment cost efficiency is achieved through optimization of the DBA’s control parameters by means of a Genetic Algorithm. Experimental results show that with the optimized set of parameters, the deployment cost measured as the average distance traveled by the robots is reduced. The cost-efficient deployment is in some cases achieved at the expense of increased robots’ distribution error. Nevertheless, the proposed approach allows the swarm to adapt to the operating conditions when available resources are scarce. PMID:22346677
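Control-parameter tuning by a genetic algorithm, as used here for the DBA, reduces to selection, crossover, and mutation over real-valued parameter vectors. A tiny sketch with a toy deployment-cost surrogate (the objective, bounds, and GA settings are invented for illustration):

```python
import random

def ga_optimize(cost, n_params, pop_size=40, gens=100, mut_sigma=0.2, seed=42):
    """Tiny real-coded genetic algorithm for control-parameter tuning
    (truncation selection, one-point crossover, Gaussian mutation); the DBA
    deployment cost is replaced here by a toy quadratic objective."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 1) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]            # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_params)    # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_params)         # mutate one gene, clip to [0, 1]
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, mut_sigma)))
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

# Hypothetical deployment-cost surrogate minimized at parameters (0.3, 0.7).
cost = lambda p: (p[0] - 0.3)**2 + (p[1] - 0.7)**2
best = ga_optimize(cost, n_params=2)
```

Because the elite half survives unchanged each generation, the best cost found never worsens, and mutation keeps exploring around the current optima.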

  4. Optimization of lamp arrangement in a closed-conduit UV reactor based on a genetic algorithm.

    PubMed

    Sultan, Tipu; Ahmad, Zeshan; Cho, Jinsoo

    2016-01-01

    The choice for the arrangement of the UV lamps in a closed-conduit ultraviolet (CCUV) reactor significantly affects the performance. However, a systematic methodology for the optimal lamp arrangement within the chamber of the CCUV reactor is not well established in the literature. In this research work, we propose a viable systematic methodology for the lamp arrangement based on a genetic algorithm (GA). In addition, we analyze the impacts of the diameter, angle, and symmetry of the lamp arrangement on the reduction equivalent dose (RED). The results are compared based on the simulated RED values and evaluated using the computational fluid dynamics simulations software ANSYS FLUENT. The fluence rate was calculated using commercial software UVCalc3D, and the GA-based lamp arrangement optimization was achieved using MATLAB. The simulation results provide detailed information about the GA-based methodology for the lamp arrangement, the pathogen transport, and the simulated RED values. A significant increase in the RED values was achieved by using the GA-based lamp arrangement methodology. This increase in RED value was highest for the asymmetric lamp arrangement within the chamber of the CCUV reactor. These results demonstrate that the proposed GA-based methodology for symmetric and asymmetric lamp arrangement provides a viable technical solution to the design and optimization of the CCUV reactor.

  5. Ordinal optimization and its application to complex deterministic problems

    NASA Astrophysics Data System (ADS)

    Yang, Mike Shang-Yu

    1998-10-01

    We present in this thesis a new perspective for approaching a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structures and almost always involve high levels of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in computing cost.
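    The core idea of ordinal comparison with goal softening can be illustrated with a toy experiment (all numbers below are assumptions for illustration, not the thesis's model): rank designs by cheap noisy evaluations and keep a softened "good enough" subset rather than hunting for the single best; the subset reliably captures some truly good designs despite the noise.

```python
import random

# Toy illustration of ordinal comparison with goal softening (assumed
# setup): the true performance of each design is hidden behind a noisy
# evaluation, and we select a top-g subset by observed rank only.
random.seed(0)
n = 1000
true_perf = [i / n for i in range(n)]                   # hidden true cost of design i
noisy = [p + random.gauss(0, 0.05) for p in true_perf]  # one noisy evaluation each

k, g = 50, 50                                           # true top-k vs selected top-g
true_top = set(sorted(range(n), key=true_perf.__getitem__)[:k])
selected = set(sorted(range(n), key=noisy.__getitem__)[:g])

# Softened goal: we only need *some* truly good designs to survive the
# noisy screening, not the exact best one.
overlap = len(true_top & selected)
```

    Ordering by noisy values is far more robust than estimating the values themselves, which is why ordinal selection can prune a large design space with very cheap evaluations.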

  6. Improving Emergency Department flow through optimized bed utilization

    PubMed Central

    Chartier, Lucas Brien; Simoes, Licinia; Kuipers, Meredith; McGovern, Barb

    2016-01-01

    Over the last decade, patient volumes in the emergency department (ED) have grown disproportionately compared to the increase in staffing and resources at the Toronto Western Hospital, an academic tertiary care centre in Toronto, Canada. The resultant congestion has spilled over to the ED waiting room, where medically undifferentiated and potentially unstable patients must wait until a bed becomes available. The aim of this quality improvement project was to decrease the 90th percentile of wait time between triage and bed assignment (time-to-bed) by half, from 120 to 60 minutes, for our highest acuity patients. We engaged key stakeholders to identify barriers and potential strategies to achieve optimal flow of patients into the ED. We first identified multiple flow-interrupting challenges, including operational bottlenecks and cultural issues. We then generated change ideas to address two main underlying causes of ED congestion: unnecessary patient utilization of ED beds and communication breakdown causing bed turnaround delays. We subsequently performed seven tests of change through sequential plan-do-study-act (PDSA) cycles. The most significant gains were made by improving communication strategies: small gains were achieved through the optimization of in-house digital information management systems, while significant improvements were achieved through the implementation of a low-tech direct contact mechanism (a two-way radio or walkie-talkie). In the post-intervention phase, time-to-bed for the 90th percentile of high-acuity patients decreased from 120 minutes to 66 minutes, with special cause variation showing a significant shift in the weekly measurements. PMID:27752312

  8. Progress on Optimizing Miscanthus Biomass Production for the European Bioeconomy: Results of the EU FP7 Project OPTIMISC

    PubMed Central

    Lewandowski, Iris; Clifton-Brown, John; Trindade, Luisa M.; van der Linden, Gerard C.; Schwarz, Kai-Uwe; Müller-Sämann, Karl; Anisimov, Alexander; Chen, C.-L.; Dolstra, Oene; Donnison, Iain S.; Farrar, Kerrie; Fonteyne, Simon; Harding, Graham; Hastings, Astley; Huxley, Laurie M.; Iqbal, Yasir; Khokhlov, Nikolay; Kiesel, Andreas; Lootens, Peter; Meyer, Heike; Mos, Michal; Muylle, Hilde; Nunn, Chris; Özgüven, Mensure; Roldán-Ruiz, Isabel; Schüle, Heinrich; Tarakanov, Ivan; van der Weijde, Tim; Wagner, Moritz; Xi, Qingguo; Kalinina, Olena

    2016-01-01

    This paper describes the complete findings of the EU-funded research project OPTIMISC, which investigated methods to optimize the production and use of miscanthus biomass. Miscanthus bioenergy and bioproduct chains were investigated by trialing 15 diverse germplasm types in a range of climatic and soil environments across central Europe, Ukraine, Russia, and China. The abiotic stress tolerances of a wider panel of 100 germplasm types to drought, salinity, and low temperatures were measured in the laboratory and a field trial in Belgium. A small selection of germplasm types was evaluated for performance in grasslands on marginal sites in Germany and the UK. The growth traits underlying biomass yield and quality were measured to improve regional estimates of feedstock availability. Several potential high-value bioproducts were identified. The combined results provide recommendations to policymakers, growers and industry. The major technical advances in miscanthus production achieved by OPTIMISC include: (1) demonstration that novel hybrids can out-yield the standard commercially grown genotype Miscanthus x giganteus; (2) characterization of the interactions of physiological growth responses with environmental variation within and between sites; (3) quantification of biomass-quality-relevant traits; (4) abiotic stress tolerances of miscanthus genotypes; (5) selections suitable for production on marginal land; (6) field establishment methods for seeds using plugs; (7) evaluation of harvesting methods; and (8) quantification of energy used in densification (pellet) technologies with a range of hybrids with differences in stem wall properties. End-user needs were addressed by demonstrating the potential of optimizing miscanthus biomass composition for the production of ethanol and biogas as well as for combustion. The costs and life-cycle assessment of seven miscanthus-based value chains, including small- and large-scale heat and power, ethanol, biogas, and insulation

  9. Design optimization of piezoresistive cantilevers for force sensing in air and water

    PubMed Central

    Doll, Joseph C.; Park, Sung-Jin; Pruitt, Beth L.

    2009-01-01

    Piezoresistive cantilevers fabricated from doped silicon or metal films are commonly used for force, topography, and chemical sensing at the micro- and macroscales. Proper design is required to optimize the achievable resolution by maximizing sensitivity while simultaneously minimizing the integrated noise over the bandwidth of interest. Existing analytical design methods are insufficient for modeling complex dopant profiles, design constraints, and nonlinear phenomena such as damping in fluid. Here we present an optimization method based on an analytical piezoresistive cantilever model. We use an existing iterative optimizer to minimize a performance goal, such as minimum detectable force. The design tool is available as open source software. Optimal cantilever design and performance are found to strongly depend on the measurement bandwidth and the constraints applied. We discuss results for silicon piezoresistors fabricated by epitaxy and diffusion, but the method can be applied to any dopant profile or material which can be modeled in a similar fashion or extended to other microelectromechanical systems. PMID:19865512

  10. Optimization of Passive Low Power Wireless Electromagnetic Energy Harvesters

    PubMed Central

    Nimo, Antwi; Grgić, Dario; Reindl, Leonhard M.

    2012-01-01

    This work presents the optimization of antenna captured low power radio frequency (RF) to direct current (DC) power converters using Schottky diodes for powering remote wireless sensors. Linearized models using scattering parameters show that an antenna and a matched diode rectifier can be described as a form of coupled resonator with different individual resonator properties. The analytical models show that the maximum voltage gain of the coupled resonators is mainly related to the antenna, diode and load (remote sensor) resistances at matched conditions or resonance. The analytical models were verified with experimental results. Different passive wireless RF power harvesters offering high selectivity, broadband response and high voltage sensitivity are presented. Measured results show that with an optimal resistance of antenna and diode, it is possible to achieve high RF to DC voltage sensitivity of 0.5 V and efficiency of 20% at −30 dBm antenna input power. Additionally, a wireless harvester (rectenna) is built and tested for receiving range performance. PMID:23202014

  12. Optimal design of multichannel equalizers for the structural similarity index.

    PubMed

    Chai, Li; Sheng, Yuxia

    2014-12-01

    The optimization of multichannel equalizers is studied under the structural similarity (SSIM) criterion. A closed-form formula is provided for the optimal equalizer when the mean of the source is zero. The formula shows that the equalizer with maximal SSIM index equals the one with minimal mean square error (MSE) multiplied by a positive real number, which is shown to equal the inverse of the achieved SSIM index. The relation of the maximal SSIM index to the minimal MSE is also established for given blurring filters and fixed-length equalizers. An algorithm is also presented to compute a suboptimal equalizer for general sources. Various numerical examples are given to demonstrate the effectiveness of the results.
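    For a scalar channel with a zero-mean source, the stated relation between the SSIM-optimal and MSE-optimal equalizers can be checked directly. This is a sketch under assumed signal statistics, not the paper's multichannel derivation:

```python
import math

# Scalar-channel sanity check under assumed statistics: zero-mean source
# x with variance sx2, channel y = h*x + n, equalizer x_hat = g*y.
sx2, h, sn2 = 1.0, 0.8, 0.3
a = h * sx2                      # Cov(x, y)
b = sx2                          # Var(x)
c = h * h * sx2 + sn2            # Var(y)

g_mse = a / c                    # Wiener (minimum-MSE) gain
g_ssim = math.sqrt(b / c)        # gain maximizing SSIM = 2*g*a / (b + c*g*g)
ssim_max = a / math.sqrt(b * c)  # SSIM index achieved at g_ssim

# The paper's relation: the SSIM-optimal equalizer equals the MSE-optimal
# one multiplied by the inverse of the achieved SSIM index.
assert abs(g_ssim - g_mse / ssim_max) < 1e-12
```

    Maximizing S(g) = 2ga/(b + cg^2) over g gives g = sqrt(b/c), and dividing the Wiener gain a/c by the achieved SSIM a/sqrt(bc) recovers exactly that value, so the scaling factor is indeed 1/SSIM.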

  13. Optimization and Improvement of Test Processes on a Production Line

    NASA Astrophysics Data System (ADS)

    Sujová, Erika; Čierna, Helena

    2018-06-01

    The paper deals with increasing process efficiency on a production line for engine cylinder heads at a company operating in the automotive industry. The goal is to improve and optimize the test processes on the production line. It analyzes options for improving the capacity, availability, and productivity of the output-test processes by using modern technology available on the market. We focused on analyzing operation times before and after optimization of the test processes at specific production sections. By analyzing the measured results we determined the differences in time before and after process improvement. We calculated the overall equipment effectiveness (OEE) coefficient and, by comparing outputs, confirmed a real improvement in the output-test process for cylinder heads.

  14. IPO: a tool for automated optimization of XCMS parameters.

    PubMed

    Libiseller, Gunnar; Dvorzak, Michaela; Kleb, Ulrike; Gander, Edgar; Eisenberg, Tobias; Madeo, Frank; Neumann, Steffen; Trausinger, Gert; Sinner, Frank; Pieber, Thomas; Magnes, Christoph

    2015-04-16

    Untargeted metabolomics generates a huge amount of data. Software packages for automated data processing are crucial to successfully process these data. A variety of such software packages exist, but the outcome of data processing strongly depends on algorithm parameter settings. If they are not carefully chosen, suboptimal parameter settings can easily lead to biased results. Therefore, parameter settings also require optimization. Several parameter optimization approaches have already been proposed, but a software package for parameter optimization which is free of intricate experimental labeling steps, fast and widely applicable is still missing. We implemented the software package IPO ('Isotopologue Parameter Optimization') which is fast and free of labeling steps, and applicable to data from different kinds of samples and data from different methods of liquid chromatography - high resolution mass spectrometry and data from different instruments. IPO optimizes XCMS peak picking parameters by using natural, stable (13)C isotopic peaks to calculate a peak picking score. Retention time correction is optimized by minimizing relative retention time differences within peak groups. Grouping parameters are optimized by maximizing the number of peak groups that show one peak from each injection of a pooled sample. The different parameter settings are achieved by design of experiments, and the resulting scores are evaluated using response surface models. IPO was tested on three different data sets, each consisting of a training set and test set. IPO resulted in an increase of reliable groups (146% - 361%), a decrease of non-reliable groups (3% - 8%) and a decrease of the retention time deviation to one third. IPO was successfully applied to data derived from liquid chromatography coupled to high resolution mass spectrometry from three studies with different sample types and different chromatographic methods and devices. 
We were also able to show the potential of IPO to

  15. A Mixed Integer Linear Programming Approach to Electrical Stimulation Optimization Problems.

    PubMed

    Abouelseoud, Gehan; Abouelseoud, Yasmine; Shoukry, Amin; Ismail, Nour; Mekky, Jaidaa

    2018-02-01

    Electrical stimulation optimization is a challenging problem. Even when a single region is targeted for excitation, the problem remains a constrained multi-objective optimization problem. The constrained nature of the problem results from safety concerns while its multi-objectives originate from the requirement that non-targeted regions should remain unaffected. In this paper, we propose a mixed integer linear programming formulation that can successfully address the challenges facing this problem. Moreover, the proposed framework can conclusively check the feasibility of the stimulation goals. This helps researchers to avoid wasting time trying to achieve goals that are impossible under a chosen stimulation setup. The superiority of the proposed framework over alternative methods is demonstrated through simulation examples.

  16. Enhancement of graphene visibility on transparent substrates by refractive index optimization.

    PubMed

    Gonçalves, Hugo; Alves, Luís; Moura, Cacilda; Belsley, Michael; Stauber, Tobias; Schellenberg, Peter

    2013-05-20

    Optical reflection microscopy is one of the main imaging tools to visualize graphene microstructures. Here is reported a novel method that employs refractive index optimization in an optical reflection microscope, which greatly improves the visibility of graphene flakes. To this end, an immersion liquid with a refractive index that is close to that of the glass support is used in-between the microscope lens and the support improving the contrast and resolution of the sample image. Results show that the contrast of single and few layer graphene crystals and structures can be enhanced by a factor of 4 compared to values commonly achieved with transparent substrates using optical reflection microscopy lacking refractive index optimization.

  17. Power optimization in body sensor networks: the case of an autonomous wireless EMG sensor powered by PV-cells.

    PubMed

    Penders, J; Pop, V; Caballero, L; van de Molengraft, J; van Schaijk, R; Vullers, R; Van Hoof, C

    2010-01-01

    Recent advances in ultra-low-power circuits and energy harvesters are making self-powered body sensor nodes a reality. Power optimization at the system and application level is crucial in achieving ultra-low-power consumption for the entire system. This paper reviews system-level power optimization techniques, and illustrates their impact on the case of autonomous wireless EMG monitoring. The resulting prototype, an autonomous wireless EMG sensor powered by PV cells, is presented.

  18. Physical activity and academic achievement across the curriculum: Results from a 3-year cluster-randomized trial.

    PubMed

    Donnelly, Joseph E; Hillman, Charles H; Greene, Jerry L; Hansen, David M; Gibson, Cheryl A; Sullivan, Debra K; Poggio, John; Mayo, Matthew S; Lambourne, Kate; Szabo-Reed, Amanda N; Herrmann, Stephen D; Honas, Jeffery J; Scudder, Mark R; Betts, Jessica L; Henley, Katherine; Hunt, Suzanne L; Washburn, Richard A

    2017-06-01

    We compared changes in academic achievement across 3 years between children in elementary schools receiving the Academic Achievement and Physical Activity Across the Curriculum intervention (A+PAAC), in which classroom teachers were trained to deliver academic lessons using moderate-to-vigorous physical activity (MVPA), and a non-intervention control. Elementary schools in eastern Kansas (n=17) were cluster randomized to A+PAAC (N=9, target ≥100 min/week) or control (N=8). Academic achievement (math, reading, spelling) was assessed using the Wechsler Individual Achievement Test-Third Edition (WIAT-III) in a sample of children (A+PAAC=316, Control=268) in grades 2 and 3 at baseline (Fall 2011) and repeated each spring across 3 years. On average, 55 min/week of A+PAAC lessons were delivered each week across the intervention. Baseline WIAT-III scores (math, reading, spelling) were significantly higher in students in A+PAAC compared with control schools and improved in both groups across 3 years. However, linear mixed modeling, accounting for baseline between-group differences in WIAT-III scores, ethnicity, family income, and cardiovascular fitness, found no significant impact of A+PAAC on any of the academic achievement outcomes, as determined by non-significant group-by-time interactions. A+PAAC neither diminished nor improved academic achievement across 3 years in elementary school children compared with controls. Our target of 100 min/week of active lessons was not achieved; however, students attending A+PAAC schools received an additional 55 min/week of MVPA, which may be associated with both physical and mental health benefits, without a reduction in time devoted to academic instruction. Copyright © 2017. Published by Elsevier Inc.

  19. Fast globally optimal segmentation of 3D prostate MRI with axial symmetry prior.

    PubMed

    Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron

    2013-01-01

    We propose a novel global optimization approach to segmenting a given 3D prostate T2w magnetic resonance (MR) image, which enforces the inherent axial symmetry of the prostate shape and simultaneously performs a sequence of 2D axial slice-wise segmentations with a global 3D coherence prior. We show that the proposed challenging combinatorial optimization problem can be solved globally and exactly by means of convex relaxation. In this regard, we introduce a novel coupled continuous max-flow model, which is dual to the studied convex relaxed optimization formulation and leads to an efficient multiplier-augmented algorithm based on modern convex optimization theory. Moreover, the new continuous max-flow based algorithm was implemented on GPUs to achieve a substantial improvement in computation. Experimental results using public and in-house datasets demonstrate great advantages of the proposed method in terms of both accuracy and efficiency.

  20. Fast approximation for joint optimization of segmentation, shape, and location priors, and its application in gallbladder segmentation.

    PubMed

    Saito, Atsushi; Nawano, Shigeru; Shimizu, Akinobu

    2017-05-01

    This paper addresses joint optimization for segmentation and shape priors, including translation, to overcome inter-subject variability in the location of an organ. Because a simple extension of the previous exact optimization method is too computationally complex, we propose a fast approximation for optimization. The effectiveness of the proposed approximation is validated in the context of gallbladder segmentation from a non-contrast computed tomography (CT) volume. After spatial standardization and estimation of the posterior probability of the target organ, simultaneous optimization of the segmentation, shape, and location priors is performed using a branch-and-bound method. Fast approximation is achieved by combining sampling in the eigenshape space to reduce the number of shape priors and an efficient computational technique for evaluating the lower bound. Performance was evaluated using threefold cross-validation of 27 CT volumes. Optimization in terms of translation of the shape prior significantly improved segmentation performance. The proposed method achieved a result of 0.623 on the Jaccard index in gallbladder segmentation, which is comparable to that of state-of-the-art methods. The computational efficiency of the algorithm is confirmed to be good enough to allow execution on a personal computer. Joint optimization of the segmentation, shape, and location priors was proposed, and it proved to be effective in gallbladder segmentation with high computational efficiency.

  1. Staircase Quantum Dots Configuration in Nanowires for Optimized Thermoelectric Power

    PubMed Central

    Li, Lijie; Jiang, Jian-Hua

    2016-01-01

    The performance of thermoelectric energy harvesters can be improved by nanostructures that exploit inelastic transport processes. One prototype is the three-terminal hopping thermoelectric device, where electron hopping between quantum dots is driven by hot phonons. Such three-terminal hopping thermoelectric devices have the potential to achieve high efficiency or power via inelastic transport, without relying on heavy elements or toxic compounds. We show in this work how the output power of the device can be optimized by tuning the number and energy configuration of the quantum dots embedded in parallel nanowires. We find that a staircase energy configuration with a constant energy step can improve the power factor over a serial connection of a single pair of quantum dots. Moreover, for a fixed energy step, there is an optimal length for the nanowire. Similarly, for a fixed number of quantum dots there is an optimal energy step for the output power. Our results are important for future developments of high-performance nanostructured thermoelectric devices. PMID:27550093

  2. Fast optimization of multipump Raman amplifiers based on a simplified wavelength and power budget heuristic

    NASA Astrophysics Data System (ADS)

    de O. Rocha, Helder R.; Castellani, Carlos E. S.; Silva, Jair A. L.; Pontes, Maria J.; Segatto, Marcelo E. V.

    2015-01-01

    We report a simple budget heuristic for a fast optimization of multipump Raman amplifiers based on the reallocation of the pump wavelengths and the optical powers. A set of different optical fibers are analyzed as the Raman gain medium, and a four-pump amplifier setup is optimized for each of them in order to achieve ripples close to 1 dB and gains up to 20 dB in the C band. Later, a comparison between our proposed heuristic and a multiobjective optimization based on a nondominated sorting genetic algorithm is made, highlighting the fact that our new approach can give similar solutions after at least an order of magnitude fewer iterations. The results shown in this paper can potentially pave the way for real-time optimization of multipump Raman amplifier systems.

  3. Receptor arrays optimized for natural odor statistics.

    PubMed

    Zwicker, David; Murugan, Arvind; Brenner, Michael P

    2016-05-17

    Natural odors typically consist of many molecules at different concentrations. It is unclear how the numerous odorant molecules and their possible mixtures are discriminated by relatively few olfactory receptors. Using an information theoretic model, we show that a receptor array is optimal for this task if it achieves two possibly conflicting goals: (i) Each receptor should respond to half of all odors and (ii) the response of different receptors should be uncorrelated when averaged over odors presented with natural statistics. We use these design principles to predict statistics of the affinities between receptors and odorant molecules for a broad class of odor statistics. We also show that optimal receptor arrays can be tuned to either resolve concentrations well or distinguish mixtures reliably. Finally, we use our results to predict properties of experimentally measured receptor arrays. Our work can thus be used to better understand natural olfaction, and it also suggests ways to improve artificial sensor arrays.
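    The two design principles can be sanity-checked numerically with a toy binary response model (an assumed formalization for illustration, not the authors' information-theoretic model): give each receptor a response to a random half of the odors, then verify that (i) each receptor is active for about half of all odors and (ii) responses of different receptors are nearly uncorrelated across odors.

```python
import random

# Toy binary response model (assumed, not the paper's model): each
# receptor fires for an odor with probability 1/2, independently of
# the other receptors.
random.seed(3)
n_receptors, n_odors = 20, 5000
resp = [[random.random() < 0.5 for _ in range(n_odors)]
        for _ in range(n_receptors)]

# Principle (i): each receptor responds to about half of all odors.
activity = [sum(r) / n_odors for r in resp]

# Principle (ii): pairwise receptor responses are nearly uncorrelated
# when averaged over odors.
def corr(a, b):
    m = len(a)
    ma, mb = sum(a) / m, sum(b) / m
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / m
    va = sum((x - ma) ** 2 for x in a) / m
    vb = sum((y - mb) ** 2 for y in b) / m
    return cov / (va * vb) ** 0.5

pair_corrs = [abs(corr(resp[i], resp[j]))
              for i in range(n_receptors) for j in range(i + 1, n_receptors)]
```

    An array built this way maximizes response entropy in the simplest sense; the paper's model additionally weights odors by their natural statistics, which shifts the optimal affinities away from this uniform case.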

  4. Optimization design of wireless charging system for autonomous robots based on magnetic resonance coupling

    NASA Astrophysics Data System (ADS)

    Wang, Junhua; Hu, Meilin; Cai, Changsong; Lin, Zhongzheng; Li, Liang; Fang, Zhijian

    2018-05-01

    Wireless charging is a key technology for realizing true autonomy of mobile robots. As the core part of a wireless power transfer system, the coupling mechanism, including the coupling coils and compensation topology, is analyzed and optimized through simulations to achieve stable and practical wireless charging suitable for ordinary robots. A multi-layer coil structure, in particular a double-layer coil, is explored and selected to greatly enhance coupling performance, while the shape of the ferrite shielding undergoes distributed optimization to guarantee coil fault tolerance and cost effectiveness. On the basis of the optimized coils, the primary compensation topology is analyzed and a composite LCL compensation is adopted to stabilize operation of the primary side under variations of mutual inductance. Experimental results show that the optimized system is practical for magnetic-resonance-coupled wireless charging of robots, enabling long-term robot autonomy.

  5. Distributed Optimization Design of Continuous-Time Multiagent Systems With Unknown-Frequency Disturbances.

    PubMed

    Wang, Xinghu; Hong, Yiguang; Yi, Peng; Ji, Haibo; Kang, Yu

    2017-05-24

    In this paper, a distributed optimization problem is studied for continuous-time multiagent systems with unknown-frequency disturbances. A distributed gradient-based control is proposed for the agents to achieve the optimal consensus with estimating unknown frequencies and rejecting the bounded disturbance in the semi-global sense. Based on convex optimization analysis and adaptive internal model approach, the exact optimization solution can be obtained for the multiagent system disturbed by exogenous disturbances with uncertain parameters.

  6. Detailed design of a lattice composite fuselage structure by a mixed optimization method

    NASA Astrophysics Data System (ADS)

    Liu, D.; Lohse-Busch, H.; Toropov, V.; Hühne, C.; Armani, U.

    2016-10-01

    In this article, a procedure for designing a lattice fuselage barrel is developed. It comprises three stages: first, topology optimization of an aircraft fuselage barrel is performed with respect to weight and structural performance to obtain the conceptual design. The interpretation of the optimal result is given to demonstrate the development of this new lattice airframe concept for the fuselage barrel. Subsequently, parametric optimization of the lattice aircraft fuselage barrel is carried out using genetic algorithms on metamodels generated with genetic programming from a 101-point optimal Latin hypercube design of experiments. The optimal design is achieved in terms of weight savings subject to stability, global stiffness and strain requirements, and then verified by the fine mesh finite element simulation of the lattice fuselage barrel. Finally, a practical design of the composite skin complying with the aircraft industry lay-up rules is presented. It is concluded that the mixed optimization method, combining topology optimization with the global metamodel-based approach, allows the problem to be solved with sufficient accuracy and provides the designers with a wealth of information on the structural behaviour of the novel anisogrid composite fuselage design.

  7. Investigation on the optimal magnetic field of a cusp electron gun for a W-band gyro-TWA

    NASA Astrophysics Data System (ADS)

    Zhang, Liang; He, Wenlong; Donaldson, Craig R.; Cross, Adrian W.

    2018-05-01

    High efficiency and broadband operation of a gyrotron traveling wave amplifier (gyro-TWA) require a high-quality electron beam with low velocity spreads. These spreads arise mainly from differences in the electric and magnetic fields that the electrons experience in the electron gun. This paper investigates the possibility of decoupling the design of the electron gun geometry from that of the magnet system while still achieving optimal results, through a case study of designing a cusp electron gun for a W-band gyro-TWA. A global multiple-objective optimization routine was used to optimize the electron gun geometry for each of several predefined magnetic field profiles individually. The results were compared and the properties of the required magnetic field profile are summarized.

  8. Optimized Clustering Estimators for BAO Measurements Accounting for Significant Redshift Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross, Ashley J.; Banik, Nilanjan; Avila, Santiago

    2017-05-15

    We determine an optimized clustering statistic to be used for galaxy samples with significant redshift uncertainty, such as those that rely on photometric redshifts. To do so, we study the BAO information content as a function of the orientation of galaxy clustering modes with respect to their angle to the line-of-sight (LOS). The clustering along the LOS, as observed in a redshift-space with significant redshift uncertainty, has contributions from clustering modes with a range of orientations with respect to the true LOS. For redshift uncertainty σ_z ≥ 0.02(1+z), we find that while the BAO information is confined to transverse clustering modes in the true space, it is spread nearly evenly in the observed space. Thus, measuring clustering in terms of the projected separation (regardless of the LOS) is an efficient and nearly lossless compression of the signal for σ_z ≥ 0.02(1+z). For reduced redshift uncertainty, more careful consideration is required. We then use more than 1700 realizations of galaxy simulations mimicking the Dark Energy Survey Year 1 sample to validate our analytic results and optimized analysis procedure. We find that using the correlation function binned in projected separation, we can achieve uncertainties that are within 10 per cent of those predicted by Fisher matrix forecasts. We predict that DES Y1 should achieve a 5 per cent distance measurement using our optimized methods. We expect the results presented here to be important for any future BAO measurements made using photometric redshift data.

  9. Optimal Control Allocation with Load Sensor Feedback for Active Load Suppression, Experiment Development

    NASA Technical Reports Server (NTRS)

    Miller, Christopher J.; Goodrick, Dan

    2017-01-01

    The problem of control-command- and maneuver-induced structural loads is an important aspect of any control system design. The aircraft structure and the control architecture must be designed to achieve desired piloted control responses while limiting the imparted structural loads. The classical approach is to utilize high structural margins, restrict control surface commands to a limited set of analyzed combinations, and train pilots to follow procedural maneuvering limitations. With recent advances in structural sensing and the continued desire to improve safety and vehicle fuel efficiency, it is both possible and desirable to develop control architectures that enable lighter vehicle weights while maintaining and improving protection against structural damage. An optimal control technique has been explored and shown to achieve desirable vehicle control performance while limiting sensed structural loads. This paper describes the design of the optimal control architecture, provides techniques for tailoring it, and presents detailed simulation results.

  10. A self-contained, automated methodology for optimal flow control validated for transition delay

    NASA Technical Reports Server (NTRS)

    Joslin, Ronald D.; Gunzburger, Max D.; Nicolaides, R. A.; Erlebacher, Gordon; Hussaini, M. Yousuff

    1995-01-01

    This paper describes a self-contained, automated methodology for flow control along with a validation of the methodology for the problem of boundary layer instability suppression. The objective of control is to match the stress vector along a portion of the boundary to a given vector; instability suppression is achieved by choosing the given vector to be that of a steady base flow, e.g., Blasius boundary layer. Control is effected through the injection or suction of fluid through a single orifice on the boundary. The present approach couples the time-dependent Navier-Stokes system with an adjoint Navier-Stokes system and optimality conditions from which optimal states, i.e., unsteady flow fields, and control, e.g., actuators, may be determined. The results demonstrate that instability suppression can be achieved without any a priori knowledge of the disturbance, which is significant because other control techniques have required some knowledge of the flow unsteadiness such as frequencies, instability type, etc.

  11. Optimization of topological quantum algorithms using Lattice Surgery is hard

    NASA Astrophysics Data System (ADS)

    Herr, Daniel; Nori, Franco; Devitt, Simon

    The traditional method for computation in the surface code or the Raussendorf model is the creation of holes, or "defects", within the encoded lattice of qubits, which are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work we turn our attention to the lattice surgery representation, which realizes encoded logic operations without destroying the intrinsic 2D nearest-neighbor interactions sufficient for braid-based logic, and which achieves universality without using defects to encode information. In both braid-based and lattice surgery logic there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving difficult to define, and the classical complexity associated with this problem has yet to be determined. In the context of lattice surgery based logic, we can introduce an optimality condition, corresponding to the circuit with the lowest physical qubit requirements, and prove that optimizing the geometric (lattice surgery) representation of a quantum circuit is NP-hard.

  12. Multi-disciplinary optimization of aeroservoelastic systems

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1990-01-01

    Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.

  13. Optimization of the sources in local hyperthermia using a combined finite element-genetic algorithm method.

    PubMed

    Siauve, N; Nicolas, L; Vollaire, C; Marchal, C

    2004-12-01

    This article describes an optimization process specially designed for local and regional hyperthermia, whose aim is to achieve the desired specific absorption rate in the patient. It is based on a genetic algorithm coupled to a finite element formulation. The optimization method is applied to meshes of real human organs assembled from computerized tomography scans. A 3D finite element formulation is used to calculate the electromagnetic field generated in the patient by radiofrequency or microwave sources. Space discretization is performed using incomplete first-order edge elements. The sparse complex symmetric matrix equation is solved using a conjugate gradient solver with potential projection preconditioning. The formulation is validated by comparing calculated specific absorption rate distributions in a phantom to temperature measurements. A genetic algorithm is used to optimize the specific absorption rate distribution, predicting the phases and amplitudes of the sources that lead to the best focalization. The objective function is defined as the ratio of the specific absorption rate in the tumour to that in healthy tissues. Several constraints, regarding the specific absorption rate in the tumour and the total power in the patient, may be prescribed. Results obtained with two types of applicators (waveguides and an annular phased array) are presented and demonstrate the capabilities of the developed optimization process.
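
    The core idea, a genetic algorithm searching source phases so that constructive interference focuses power on the tumour, can be sketched with a toy phased-array model (the geometry, wavelength, unit amplitudes, and point-wise fitness surrogate below are all illustrative assumptions, not the article's finite element formulation):

```python
import cmath
import math
import random

random.seed(1)

# Hypothetical toy model: four antennas around the patient; the field at
# a point is the coherent sum of unit-amplitude sources, each delayed by
# the propagation phase K*d for a source-to-point distance d.
K = 2 * math.pi / 0.4  # wavenumber for an assumed 0.4 m wavelength

def field(phases, point, antennas):
    return sum(cmath.exp(1j * (p + K * math.dist(ant, point)))
               for p, ant in zip(phases, antennas))

def fitness(phases, antennas, tumour, healthy):
    # SAR-ratio surrogate: intensity at the tumour over intensity at a
    # representative healthy-tissue point
    return abs(field(phases, tumour, antennas)) ** 2 / \
           (abs(field(phases, healthy, antennas)) ** 2 + 1e-9)

def genetic_search(antennas, tumour, healthy, pop=40, gens=60):
    population = [[random.uniform(0, 2 * math.pi) for _ in antennas]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda c: -fitness(c, antennas, tumour, healthy))
        parents = population[:pop // 2]          # elitist selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]            # one-point crossover
            i = random.randrange(len(child))
            child[i] += random.gauss(0, 0.3)     # Gaussian mutation
            children.append(child)
        population = parents + children
    return max(population, key=lambda c: fitness(c, antennas, tumour, healthy))

antennas = [(0.0, 0.3), (0.3, 0.0), (0.0, -0.3), (-0.3, 0.0)]
tumour, healthy = (0.05, 0.02), (-0.1, -0.08)
best = genetic_search(antennas, tumour, healthy)
```

    A real treatment-planning objective would integrate SAR over segmented tissue volumes and enforce the article's power constraints; the genetic-algorithm machinery, however, is the same.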

  14. Improving of the working process of axial compressors of gas turbine engines by using an optimization method

    NASA Astrophysics Data System (ADS)

    Marchukov, E.; Egorov, I.; Popov, G.; Baturin, O.; Goriachkin, E.; Novikova, Y.; Kolmakova, D.

    2017-08-01

    The article presents an optimization method for improving the working process of an axial compressor of a gas turbine engine. The developed method automatically searches for the best compressor blade geometry using the optimization software IOSO and the CFD software NUMECA Fine/Turbo. Optimization was performed by changing the shape of the middle line in three sections of each blade and by shifting three sections of the guide vanes in the circumferential and axial directions. The compressor parameters were calculated at the working and stall points of its performance map at each optimization step. The study was carried out for a seven-stage high-pressure compressor and a three-stage low-pressure compressor. As a result of the optimization, an efficiency improvement was achieved for all investigated compressors.

  15. SU-E-I-43: Pediatric CT Dose and Image Quality Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, G; Singh, R

    2014-06-01

    Purpose: To design an approach to optimize radiation dose and image quality for pediatric CT imaging, and to evaluate expected performance. Methods: A methodology was designed to quantify relative image quality as a function of CT image acquisition parameters. Image contrast and image noise were used to indicate expected conspicuity of objects, and a wide-cone system was used to minimize scan time for motion avoidance. A decision framework was designed to select acquisition parameters as a weighted combination of image quality and dose. Phantom tests were used to acquire images at multiple techniques to demonstrate expected contrast, noise and dose. Anthropomorphic phantoms with contrast inserts were imaged on a 160mm CT system with tube voltage capabilities as low as 70kVp. Previously acquired clinical images were used in conjunction with simulation tools to emulate images at different tube voltages and currents to assess human observer preferences. Results: Examination of image contrast, noise, dose and tube/generator capabilities indicates a clinical task and object-size dependent optimization. Phantom experiments confirm that system modeling can be used to achieve the desired image quality and noise performance. Observer studies indicate that clinical utilization of this optimization requires a modified approach to achieve the desired performance. Conclusion: This work indicates the potential to optimize radiation dose and image quality for pediatric CT imaging. In addition, the methodology can be used in an automated parameter selection feature that can suggest techniques given a limited number of user inputs. G Stevens and R Singh are employees of GE Healthcare.

  16. Socially optimal electric driving range of plug-in hybrid electric vehicles

    DOE PAGES

    Kontou, Eleftheria; Yin, Yafeng; Lin, Zhenhong

    2015-07-25

    Our study determines the optimal electric driving range of plug-in hybrid electric vehicles (PHEVs) that minimizes the daily cost borne by society when using this technology. An optimization framework is developed and applied to datasets representing the US market. Results indicate that the optimal range is 16 miles, with an average social cost of $3.19 per day when charging exclusively at home, compared to $3.27 per day for driving a conventional vehicle. The optimal range is found to be sensitive to the cost of battery packs and the price of gasoline. Moreover, when workplace charging is available, the optimal electric driving range surprisingly increases from 16 to 22 miles, as larger batteries would allow drivers to better take advantage of the charging opportunities to achieve longer electrified travel distances, yielding social cost savings. If workplace charging is available, the optimal deployment density is one workplace charger for every 3.66 vehicles. Finally, diversifying the battery size, i.e., introducing two or three electric driving ranges to the market, could further decrease the average societal cost per PHEV by 7.45% and 11.5%, respectively.
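
    The structure of the trade-off can be sketched as follows (all prices and the daily-distance sample are invented for illustration and are not the study's data): each extra mile of electric range adds amortised battery cost but converts gasoline miles into cheaper electric miles on the days that exceed the range.

```python
# All prices and the daily-distance sample below are invented for
# illustration; they are not the study's inputs.
BATTERY_COST_PER_MILE = 0.06   # amortised battery cost, $/day per mile of range
ELEC_COST_PER_MILE = 0.04      # $/mile driven on electricity
GAS_COST_PER_MILE = 0.12       # $/mile driven on gasoline

daily_miles = [8, 15, 22, 30, 45, 60]  # sampled daily driving distances

def social_cost(ev_range):
    """Average daily energy cost plus amortised battery cost."""
    energy = sum(min(d, ev_range) * ELEC_COST_PER_MILE
                 + max(d - ev_range, 0) * GAS_COST_PER_MILE
                 for d in daily_miles) / len(daily_miles)
    return energy + BATTERY_COST_PER_MILE * ev_range

# Grid search over candidate ranges (miles)
optimal_range = min(range(0, 101), key=social_cost)
```

    The minimiser balances the marginal battery cost against the fuel savings weighted by the fraction of days exceeding the range, which is the same first-order condition that makes the study's optimum sensitive to battery and gasoline prices.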

  17. Risk-Constrained Dynamic Programming for Optimal Mars Entry, Descent, and Landing

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki

    2013-01-01

    A chance-constrained dynamic programming algorithm was developed that is capable of making optimal sequential decisions within a user-specified risk bound. This work handles stochastic uncertainties over multiple stages in the CEMAT (Combined EDL-Mobility Analyses Tool) framework. It was demonstrated by a simulation of Mars entry, descent, and landing (EDL) using real landscape data obtained from the Mars Reconnaissance Orbiter. Although standard dynamic programming (DP) provides a general framework for optimal sequential decision-making under uncertainty, it typically achieves risk aversion by imposing an arbitrary penalty on failure states. Such a penalty-based approach cannot explicitly bound the probability of mission failure. A key idea behind the new approach, called risk allocation, decomposes a joint chance constraint into a set of individual chance constraints and distributes risk over them. The joint chance constraint was reformulated into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the chance-constrained optimization problem can be turned into an unconstrained optimization over a Lagrangian, which can be solved efficiently using a standard DP approach.
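
    The risk allocation idea, decomposing a joint chance constraint into per-stage risks via the union bound, can be illustrated with a small brute-force example (the stage actions and the 1% risk bound are hypothetical; the paper solves the allocation with dynamic programming and Lagrangian dualization rather than enumeration):

```python
from itertools import product

# Per-stage action menus as hypothetical (cost, failure-probability)
# pairs; the joint chance constraint P(failure) <= 1% is decomposed
# by the union bound into per-stage risks that must sum below the bound.
stage_actions = [
    [(5.0, 0.001), (3.0, 0.004), (1.0, 0.02)],   # stage 0
    [(4.0, 0.002), (2.0, 0.006)],                # stage 1
    [(6.0, 0.0005), (2.5, 0.005)],               # stage 2
]
RISK_BOUND = 0.01

best = None
for plan in product(*stage_actions):
    total_risk = sum(p for _, p in plan)          # union-bound allocation
    if total_risk <= RISK_BOUND:
        total_cost = sum(c for c, _ in plan)
        if best is None or total_cost < best[0]:
            best = (total_cost, plan, total_risk)
# best holds the cheapest plan whose allocated risks respect the bound
```

    The union bound makes the decomposition conservative: the true joint failure probability is at most the sum of the per-stage risks, so any plan passing the check also satisfies the original chance constraint.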

  18. Quaternion error-based optimal control applied to pinpoint landing

    NASA Astrophysics Data System (ADS)

    Ghiglino, Pablo

    Accurate control for pinpoint planetary landing, i.e., the goal of achieving landing errors on the order of 100 m for unmanned missions, is a complex problem that has been tackled in different ways in the available literature. Among other challenges, this kind of control is affected by the well-known trade-off in UAV control that for complex underlying models the control is sub-optimal, while optimal control is applied to simplified models. The goal of this research has been the development of new control algorithms able to tackle these challenges, and the results are two novel optimal control algorithms: OQTAL and HEX2OQTAL. These controllers share three key properties that are thoroughly proven and shown in this thesis: stability, accuracy and adaptability. Stability is rigorously demonstrated for both controllers. Accuracy is shown by comparing these novel controllers with industry standard algorithms in several different scenarios: there is a gain in accuracy of at least 15% for each controller, and in many cases much more than that. A new tuning algorithm based on swarm heuristics optimisation was also developed as part of this research to tune, in an online manner, the standard Proportional-Integral-Derivative (PID) controllers used for benchmarking. Finally, the adaptability of these controllers can be seen as a combination of four elements: mathematical model extensibility, cost matrix tuning, reduced computation time, and no prior knowledge of the navigation or guidance strategies needed. Further simulations on real planetary landing trajectories have shown that these controllers can achieve landing errors on the order of pinpoint landing requirements, making them not only very precise UAV controllers but also potential candidates for pinpoint landing unmanned missions.
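
    The benchmarking-side tuning idea, a swarm heuristic searching controller gains, can be sketched as a basic particle swarm optimization on a toy plant (the first-order plant, the [0, 10] gain bounds, and the PSO coefficients are assumptions; only the proportional and integral gains of the PID loop are tuned here for brevity):

```python
import random

random.seed(0)

# Toy plant: x' = -x + u, tracking a unit step.

def track_cost(kp, ki, dt=0.01, steps=500):
    """Integrated absolute tracking error of a PI loop on the toy plant."""
    x, integ, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = 1.0 - x
        integ += err * dt
        u = kp * err + ki * integ
        x += (-x + u) * dt          # explicit Euler step of the plant
        cost += abs(err) * dt
    return cost

def particle_swarm(n=15, iters=40):
    pos = [[random.uniform(0, 10), random.uniform(0, 10)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: track_cost(*p))
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                # inertia + cognitive + social velocity update
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(10.0, max(0.0, pos[i][d] + vel[i][d]))
            if track_cost(*pos[i]) < track_cost(*pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=lambda p: track_cost(*p))
    return gbest

kp, ki = particle_swarm()
```

    Because the global best is only ever replaced by a cheaper candidate, the returned gains are guaranteed to be at least as good as the best random initial guess; the thesis applies the same idea online against its vehicle models.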

  19. Superscattering of light optimized by a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Mirzaei, Ali; Miroshnichenko, Andrey E.; Shadrivov, Ilya V.; Kivshar, Yuri S.

    2014-07-01

    We analyse scattering of light from multi-layer plasmonic nanowires and employ a genetic algorithm for optimizing the scattering cross section. We apply the mode-expansion method using experimental data for material parameters to demonstrate that our genetic algorithm allows designing realistic core-shell nanostructures with the superscattering effect achieved at any desired wavelength. This approach can be employed for optimizing both superscattering and cloaking at different wavelengths in the visible spectral range.

  20. MCMC-ODPR: Primer design optimization using Markov Chain Monte Carlo sampling

    PubMed Central

    2012-01-01

    Background Next generation sequencing technologies often require numerous primer designs with good target coverage, which can be financially costly. We aimed to develop a system that implements primer reuse to design degenerate primers around SNPs, thereby finding the fewest necessary primers at the lowest cost whilst maintaining acceptable coverage, providing a cost-effective solution. We implemented Metropolis-Hastings Markov Chain Monte Carlo for optimizing primer reuse, and call the result the Markov Chain Monte Carlo Optimized Degenerate Primer Reuse (MCMC-ODPR) algorithm. Results After repeating the program 1020 times to assess the variance, an average of 17.14% fewer primers were found to be necessary using MCMC-ODPR than without primer reuse, for equivalent coverage. The algorithm was able to reuse primers up to five times. We compared MCMC-ODPR with the single-sequence primer design programs Primer3 and Primer-BLAST and achieved a lower primer cost per amplicon base covered of 0.21, 0.19 and 0.18 primer nucleotides on three separate gene sequences, respectively. With multiple sequences, MCMC-ODPR achieved a lower cost per base covered of 0.19 than the programs BatchPrimer3 and PAMPS, which achieved 0.25 and 0.64 primer nucleotides, respectively. Conclusions MCMC-ODPR is a useful tool for designing primers at various melting temperatures with good target coverage. By combining degeneracy with optimal primer reuse, the user may increase the coverage of sequences amplified by the designed primers at significantly lower cost. Our analyses showed that overall, MCMC-ODPR outperformed the other primer design programs in our study in terms of cost per covered base. PMID:23126469
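
    The Metropolis-Hastings search at the heart of MCMC-ODPR can be sketched on a toy instance (the candidate primer lists, the temperature, and the cost function counting distinct primers are illustrative assumptions, not the published implementation):

```python
import math
import random

random.seed(42)

# Hypothetical candidate primers per target; reusing one primer across
# targets reduces the number of distinct primers to synthesise.
candidates = {
    "t1": ["p1", "p2"], "t2": ["p2", "p3"], "t3": ["p2", "p4"],
    "t4": ["p4", "p5"], "t5": ["p2", "p5"],
}

def cost(assign):
    return len(set(assign.values()))   # distinct primers to synthesise

def mcmc_primer_search(candidates, temp=0.5, steps=2000):
    targets = list(candidates)
    assign = {t: random.choice(candidates[t]) for t in targets}
    best = dict(assign)
    for _ in range(steps):
        t = random.choice(targets)
        proposal = dict(assign)
        proposal[t] = random.choice(candidates[t])
        delta = cost(proposal) - cost(assign)
        # Metropolis acceptance: always take improvements, occasionally
        # accept worse states to escape local optima
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            assign = proposal
            if cost(assign) < cost(best):
                best = dict(assign)
    return best

best = mcmc_primer_search(candidates)
# typically settles on a two-primer solution (p2 plus p4 or p5)
```

    The symmetric single-target proposal keeps the Metropolis-Hastings acceptance rule simple; the published algorithm works over degenerate primers and melting-temperature constraints, but the accept/reject mechanics are the same.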