Technology for Building Systems Integration and Optimization – Landscape Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goetzler, William; Guernsey, Matt; Bargach, Youssef
BTO's Commercial Building Integration (CBI) program helps advance a range of innovative building integration and optimization technologies and solutions, paving the way for high-performing buildings that could use 50-70% less energy than typical buildings. CBI's work focuses on early-stage technology innovation, with an emphasis on how components and systems work together and how whole buildings are integrated and optimized. This landscape study outlines the current body of knowledge, capabilities, and the broader array of solutions supporting integration and optimization in commercial buildings. CBI seeks to support solutions for both existing buildings and new construction, which often present very different challenges.
Shi, Ya-jun; Shi, Jun-hui; Chen, Shi-bin; Yang, Ming
2015-07-01
To meet the demand for high drug loading in nasal drug delivery, this study exploited the unique phase transfer of the solute, integrating phospholipid complex preparation with the submicron emulsion molding process to prepare a high-drug-loading submicron emulsion of Scutellariae Radix extract. In the study of drug dispersion methods, with uniformity of drug dispersion as the evaluation index, traditional mixing, grinding, homogenization, and solute phase transfer were investigated, and solute phase transfer was finally adopted. With the new technology, the drug loading reached 1.33% (phospholipid complex, 4%), a significant improvement. The method and timing of solute transfer were determined as follows: the oil phase is added when 30% of the anhydrous ethanol solution of the phospholipid complex remains, and solute phase transfer is completed as the anhydrous ethanol continues to be recycled off. After the drug had dissolved into the oil phase, the primary emulsion preparation technology was determined with emulsion droplet form as the evaluation index. With particle size, PDI, and stability parameters of the submicron emulsion as evaluation indices, an orthogonal design was adopted to optimize the submicron emulsion formulation and the main influential factors of the high-pressure homogenization process. The optimized preparation technology for the Scutellariae Radix extract nasal submicron emulsion is practical and stable.
Integrated Solutions for a Complex Energy World - Continuum Magazine |
NREL. Energy systems integration optimizes the design and performance of electrical, thermal, fuel, and data technologies. An array of clean energy technologies, including wind, solar, and electric vehicle batteries, is...
Optimal Wastewater Loading under Conflicting Goals and Technology Limitations in a Riverine System.
Rafiee, Mojtaba; Lyon, Steve W; Zahraie, Banafsheh; Destouni, Georgia; Jaafarzadeh, Nemat
2017-03-01
This paper investigates a novel simulation-optimization (S-O) framework for identifying optimal treatment levels and treatment processes for multiple wastewater dischargers to rivers. A commonly used water quality simulation model, Qual2K, was linked to a Genetic Algorithm optimization model for exploration of relevant fuzzy objective-function formulations for addressing imprecision and conflicting goals of pollution control agencies and various dischargers. Results showed a dynamic flow dependence of optimal wastewater loading with good convergence to near global optimum. Explicit considerations of real-world technological limitations, which were developed here in a new S-O framework, led to better compromise solutions between conflicting goals than those identified within traditional S-O frameworks. The newly developed framework, in addition to being more technologically realistic, is also less complicated and converges on solutions more rapidly than traditional frameworks. This technique marks a significant step forward for development of holistic, riverscape-based approaches that balance the conflicting needs of the stakeholders.
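The simulation-optimization coupling described above can be sketched in miniature. Everything below is invented for illustration: a one-line linear dissolved-oxygen model stands in for Qual2K, exhaustive search stands in for the genetic algorithm, and the fuzzy memberships are hypothetical; only the max-min aggregation of conflicting agency and discharger goals mirrors the abstract.

```python
import itertools

# Toy stand-in for a Qual2K-style simulator: river dissolved oxygen (DO)
# improves with each discharger's treatment level (0..3).
def simulate_do(levels):
    return 4.0 + sum(0.8 * l for l in levels)   # mg/L

def mu_agency(do):            # agency satisfaction rises with DO
    return max(0.0, min(1.0, (do - 5.0) / 3.0))

def mu_discharger(level):     # discharger satisfaction falls with cost
    return 1.0 - level / 4.0

def fuzzy_fitness(levels):
    # Max-min (Bellman-Zadeh) aggregation of all conflicting goals.
    mus = [mu_agency(simulate_do(levels))] + [mu_discharger(l) for l in levels]
    return min(mus)

# Exhaustive search replaces the GA on this tiny 3-discharger instance.
best = max(itertools.product(range(4), repeat=3), key=fuzzy_fitness)
```

The best compromise raises every membership as high as the worst-satisfied goal allows, which is exactly the behavior the fuzzy formulation is meant to produce.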
Technology in rural transportation. Simple solution #9, transportation operations optimization
DOT National Transportation Integrated Search
1997-01-01
This application was identified as a promising rural Intelligent Transportation Systems (ITS) solution under a project sponsored by the Federal Highway Administration (FHWA) and the ENTERPRISE program. This summary describes the solution as well as o...
Algorithms for synthesizing management solutions based on OLAP-technologies
NASA Astrophysics Data System (ADS)
Pishchukhin, A. M.; Akhmedyanova, G. F.
2018-05-01
OLAP technologies are a convenient means of analyzing large amounts of information. This work attempts to improve the synthesis of optimal management decisions. The developed algorithms allow forecasting of needs and of the management decisions taken for the main types of enterprise resources. Their advantage is efficiency, based on the simplicity of quadratic functions and first-order differential equations. At the same time, resources are optimally redistributed between the different product types in the enterprise's assortment, and the allocated resources are optimally distributed over time. The proposed solutions can be placed on additional, specially introduced coordinates of the hypercube representing the data warehouse.
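The "simplicity of quadratic functions" the abstract relies on can be made concrete: with a concave quadratic profit per product, the budget-constrained redistribution has an almost closed-form answer via a single Lagrange multiplier. The coefficients below are hypothetical; this is a sketch of the technique, not the authors' algorithm.

```python
def allocate(a, b, budget):
    # Profit of product i: a[i]*x - b[i]*x**2 (concave quadratic).
    # KKT conditions give x_i = max(0, (a_i - lam) / (2*b_i)); the scalar
    # multiplier lam is found by bisection so allocations sum to the budget.
    def total(lam):
        return sum(max(0.0, (ai - lam) / (2 * bi)) for ai, bi in zip(a, b))
    lo, hi = -max(a), max(a)           # total(lo) > budget > total(hi)
    for _ in range(100):
        mid = (lo + hi) / 2
        if total(mid) > budget:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    return [max(0.0, (ai - lam) / (2 * bi)) for ai, bi in zip(a, b)]

x = allocate([10.0, 6.0], [1.0, 1.0], budget=4.0)
```

For these numbers the multiplier works out to 4, so the optimal split is 3 and 1 units of resource: the more profitable product gets more, but not everything.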
Moreno, Rodrigo; Street, Alexandre; Arroyo, José M; Mancarella, Pierluigi
2017-08-13
Electricity grid operators and planners need to deal with both the rapidly increasing integration of renewables and an unprecedented level of uncertainty that originates from unknown generation outputs, changing commercial and regulatory frameworks aimed at fostering low-carbon technologies, the evolving availability of market information on the feasibility and costs of various technologies, etc. In this context, there is a significant risk of locking in to inefficient investment planning solutions determined by current deterministic engineering practices that neither capture uncertainty nor represent the actual operation of the planned infrastructure under high penetration of renewables. We therefore present an alternative optimization framework to plan electricity grids that deals with uncertain scenarios and represents operations in increased detail. The presented framework is able to model the effects of an array of flexible, smart grid technologies that can efficiently displace the need for conventional solutions. We then argue, and demonstrate via the proposed framework and an illustrative example, that proper modelling of uncertainty and operational constraints in planning is key to valuing operationally flexible solutions, leading to optimal investment in a smart grid context. Finally, we review the most widely used practices in power system planning under uncertainty, highlight the challenges of incorporating operational aspects and advocate the need for new and computationally effective optimization tools to properly value the benefits of flexible, smart grid solutions in planning. Such tools are essential to accelerate the development of a low-carbon energy system and investment in the most appropriate portfolio of renewable energy sources and complementary enabling smart technologies. This article is part of the themed issue 'Energy management: flexibility, risk and optimization'. © 2017 The Author(s).
NASA Technical Reports Server (NTRS)
Berke, J. G.
1971-01-01
The organization and functions of an interdisciplinary team for the application of aerospace-generated technology to the solution of discrete technological problems within the public sector are presented. The interdisciplinary group formed at Stanford Research Institute, California is discussed. The functions of the group are to develop and conduct a program that not only optimizes the match between public sector technological problems in criminalistics, transportation, and the postal services and potential solutions found in the aerospace data base, but also ensures that appropriate solutions are actually utilized. The work accomplished during the period from July 1, 1970 to June 30, 1971 is reported.
A gradient system solution to Potts mean field equations and its electronic implementation.
Urahama, K; Ueno, S
1993-03-01
A gradient system solution method is presented for solving Potts mean field equations for combinatorial optimization problems subject to winner-take-all constraints. In the proposed solution method the optimum solution is searched by using gradient descent differential equations whose trajectory is confined within the feasible solution space of optimization problems. This gradient system is proven theoretically to always produce a legal local optimum solution of combinatorial optimization problems. An elementary analog electronic circuit implementing the presented method is designed on the basis of current-mode subthreshold MOS technologies. The core constituent of the circuit is the winner-take-all circuit developed by Lazzaro et al. Correct functioning of the presented circuit is exemplified with simulations of the circuits implementing the scheme for solving the shortest path problems.
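A loose sketch of the idea (not the authors' circuit) in discrete time: a softmax keeps each unit's state on the probability simplex, the feasible region of the winner-take-all constraints, while annealed mean-field updates descend an energy. The two-unit "avoid picking the same option" energy and the cost vectors below are invented for illustration.

```python
import math

def softmax(z, temp):
    m = max(z)
    e = [math.exp((x - m) / temp) for x in z]
    s = sum(e)
    return [x / s for x in e]

# Hypothetical per-unit option costs plus a penalty for overlapping choices.
cost = [[0.1, 0.5, 0.9], [0.9, 0.5, 0.1]]

def local_field(v, u):
    other = v[1 - u]                  # pressure away from the other's choice
    return [cost[u][k] + other[k] for k in range(3)]

v = [[1 / 3] * 3, [1 / 3] * 3]        # start at the simplex centroid
for temp in (1.0, 0.5, 0.1, 0.02):    # annealing schedule
    for _ in range(50):
        v = [softmax([-f for f in local_field(v, u)], temp) for u in (0, 1)]

winners = [max(range(3), key=lambda k: v[u][k]) for u in (0, 1)]
```

At low temperature each row of `v` concentrates on a single option, i.e. the dynamics terminate in a legal winner-take-all state, mirroring the paper's guarantee of a legal local optimum.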
Pricing policy for declining demand using item preservation technology.
Khedlekar, Uttam Kumar; Shukla, Diwakar; Namdeo, Anubhav
2016-01-01
We have designed an inventory model for seasonal products in which deterioration can be controlled by investment in item preservation technology. Demand for the product is considered price-sensitive and decreases linearly. This study shows that the profit is a concave function of the optimal selling price, replenishment time, and preservation cost parameter. We simultaneously determined the optimal selling price of the product, the replenishment cycle, and the cost of item preservation technology. Additionally, this study shows that there exist an optimal selling price and an optimal preservation investment that maximize the profit for every business set-up. Finally, the model is illustrated by numerical examples and a sensitivity analysis of the optimal solution with respect to major parameters.
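The concavity claim invites a quick numerical check. With a hypothetical linearly declining demand d(p) = a - b·p and unit cost c, profit (p - c)·d(p) is concave in price, so a fine one-dimensional sweep recovers the closed-form maximizer p* = (a/b + c)/2. All numbers are invented; the paper applies the same concavity argument jointly to price, replenishment time, and preservation investment.

```python
# Hypothetical demand/cost parameters.
a, b, c = 100.0, 2.0, 10.0            # demand intercept, slope, unit cost

def profit(p):
    return (p - c) * (a - b * p)      # concave quadratic in p

prices = [c + i * 0.01 for i in range(4001)]   # sweep p over [10, 50]
p_star = max(prices, key=profit)               # unique maximum by concavity
```

Here p* = (100/2 + 10)/2 = 30 and the sweep agrees with the closed form.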
Postoptimality analysis in the selection of technology portfolios
NASA Technical Reports Server (NTRS)
Adumitroaie, Virgil; Shelton, Kacie; Elfes, Alberto; Weisbin, Charles R.
2006-01-01
This paper describes an approach for qualifying optimal technology portfolios obtained with a multi-attribute decision support system. The goal is twofold: to gauge the degree of confidence in the optimal solution and to provide the decision-maker with an array of viable selection alternatives, which take into account input uncertainties and possibly satisfy non-technical constraints.
Postoptimality Analysis in the Selection of Technology Portfolios
NASA Technical Reports Server (NTRS)
Adumitroaie, Virgil; Shelton, Kacie; Elfes, Alberto; Weisbin, Charles R.
2006-01-01
This slide presentation reviews a process for postoptimality analysis of technology portfolio selection. The rationale for the analysis stems from the need for consistent, transparent, and auditable decision-making processes and tools. The methodology is used to assure that project investments are selected through an optimization of net mission value. The main intent of the analysis is to gauge the degree of confidence in the optimal solution and to provide the decision maker with an array of viable selection alternatives which take into account input uncertainties and possibly satisfy non-technical constraints. A few examples of the analysis are reviewed. The goal of the postoptimality study is to enhance and improve the decision-making process by providing additional qualifications and substitutes to the optimal solution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scioletti, Michael S.; Newman, Alexandra M.; Goodman, Johanna K.
2017-05-08
Renewable energy technologies, specifically solar photovoltaic cells, combined with battery storage and diesel generators, form a hybrid system capable of independently powering remote locations, i.e., those isolated from larger grids. If sized correctly, hybrid systems reduce fuel consumption compared to diesel generator-only alternatives. We present an optimization model for establishing a hybrid power design and dispatch strategy for remote locations, such as a military forward operating base, that models the acquisition of different power technologies as integer variables and their operation using nonlinear expressions. Our cost-minimizing, nonconvex, mixed-integer, nonlinear program contains a detailed battery model. Due to its complexities, we present linearizations, which include exact and convex under-estimation techniques, and a heuristic, which determines an initial feasible solution to serve as a “warm start” for the solver. We determine, in a few hours at most, solutions within 5% of optimality for a candidate set of technologies; these solutions closely resemble those from the nonlinear model. Lastly, our instances contain real data spanning a yearly horizon at hour fidelity and demonstrate that a hybrid system could reduce fuel consumption by as much as 50% compared to a generator-only solution.
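The kind of greedy dispatch heuristic that produces such a "warm start" can be sketched with an invented 24-hour profile: serve load with PV first, then battery, then the generator, and record the fuel burned. The result is a feasible (not optimal) solution a solver could start from; all profiles and parameters below are hypothetical.

```python
# Invented 24-hour profiles (kW) and battery parameters.
load = [30] * 6 + [50] * 12 + [30] * 6
pv   = [0] * 6 + [60] * 12 + [0] * 6
cap, soc, eff = 100.0, 50.0, 0.9      # battery kWh, initial charge, efficiency

fuel_kwh = 0.0
for l, s in zip(load, pv):
    net = l - s                              # demand left after PV
    if net <= 0:
        soc = min(cap, soc + (-net) * eff)   # store the PV surplus
        continue
    from_batt = min(net, soc)                # discharge the battery next
    soc -= from_batt
    fuel_kwh += net - from_batt              # generator covers the remainder

generator_only = sum(load)                   # baseline: no PV, no battery
```

On this toy instance the hybrid dispatch needs 210 kWh of generator output against a 960 kWh generator-only baseline; real instances, as the abstract notes, show fuel reductions of as much as 50%.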
Solid state light engines for bioanalytical instruments and biomedical devices
NASA Astrophysics Data System (ADS)
Jaffe, Claudia B.; Jaffe, Steven M.
2010-02-01
Lighting subsystems to drive 21st century bioanalysis and biomedical diagnostics face stringent requirements. Industry-wide demands for speed, accuracy, and portability mean illumination must be intense as well as spectrally pure, switchable, stable, durable, and inexpensive. Ideally, a common lighting solution could serve these needs across numerous research and clinical applications. While this is a noble objective, the current technologies of arc lamps, lasers, LEDs, and, most recently, light pipes have intrinsic spectral and angular traits that make a common solution untenable. Clearly a hybrid solution is required to serve the varied needs of the life sciences. Any solution begins with a critical understanding of the instrument architecture and the illumination specifications regarding power, illumination area, illumination and emission wavelengths, and numerical aperture. Maximizing signal to noise requires careful tuning of these parameters within the additional constraints of instrument footprint and cost. Often the illumination design process is confined to maximizing signal to noise without the ability to adjust any of the above parameters. A hybrid solution leverages the best of the existing lighting technologies. This paper reviews the design process for this highly constrained but typical optical optimization scenario across numerous bioanalytical instruments and biomedical devices.
Zhao, Jane Y.; Song, Buer; Anand, Edwin; Schwartz, Diane; Panesar, Mandip; Jackson, Gretchen P.; Elkin, Peter L.
2017-01-01
Patient portal and personal health record adoption and usage rates have been suboptimal. A systematic review of the literature was performed to capture all published studies that specifically addressed barriers, facilitators, and solutions to optimal patient portal and personal health record enrollment and use. Consistent themes emerged from the review. Patient attitudes were critical as either barrier or facilitator. Institutional buy-in, information technology support, and aggressive tailored marketing were important facilitators. Interface redesign was a popular solution. Quantitative studies identified many barriers to optimal patient portal and personal health record enrollment and use, and qualitative and mixed methods research revealed thoughtful explanations for why they existed. Our study demonstrated the value of qualitative and mixed research methodologies in understanding the adoption of consumer health technologies. Results from the systematic review should be used to guide the design and implementation of future patient portals and personal health records, and ultimately, close the digital divide. PMID:29854263
Zhang, Litao; Cvijic, Mary Ellen; Lippy, Jonathan; Myslik, James; Brenner, Stephen L; Binnie, Alastair; Houston, John G
2012-07-01
In this paper, we review the key solutions that enabled evolution of the lead optimization screening support process at Bristol-Myers Squibb (BMS) between 2004 and 2009. During this time, technology infrastructure investment and scientific expertise integration laid the foundations to build and tailor lead optimization screening support models across all therapeutic groups at BMS. Together, harnessing advanced screening technology platforms and expanding panel screening strategy led to a paradigm shift at BMS in supporting lead optimization screening capability. Parallel SAR and structure liability relationship (SLR) screening approaches were first and broadly introduced to empower more-rapid and -informed decisions about chemical synthesis strategy and to broaden options for identifying high-quality drug candidates during lead optimization. Copyright © 2012 Elsevier Ltd. All rights reserved.
Lightweight Steel Solutions for Automotive Industry
NASA Astrophysics Data System (ADS)
Lee, Hong Woo; Kim, Gyosung; Park, Sung Ho
2010-06-01
Recently, improvement in fuel efficiency and safety has become the biggest issue in the worldwide automotive industry. Although environmental and safety regulations have been tightened more and more, the majority of vehicle bodies are still manufactured from stamped steel components. This means that optimized steel solutions can reduce body weight while maintaining high crashworthiness performance, in place of expensive lightweight materials such as Al, Mg, and composites. To provide innovative steel solutions for the automotive industry, POSCO has developed AHSS and its application technologies, which are directly connected to EVI activities. EVI is a technical cooperation program with customers covering all stages of a new car project from design to mass production. Integrated lightweight solutions based on new forming technologies such as TWB, hydroforming, and HPF are continuously developed and provided through EVI activities. This paper discusses the detailed status of these technologies, especially lightweight steel solutions based on innovative technologies.
HOMER® Micropower Optimization Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lilienthal, P.
2005-01-01
NREL has developed the HOMER micropower optimization model. The model can analyze all of the available small power technologies individually and in hybrid configurations to identify least-cost solutions to energy requirements. This capability is valuable to a diverse set of energy professionals and applications. NREL has actively supported its growing user base and developed training programs around the model. These activities are helping to grow the global market for solar technologies.
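HOMER's core loop — enumerate candidate configurations, cost each one over a year, and rank by lifecycle cost — can be sketched in a few lines. All sizes, prices, and the crude feasibility rules below are invented; this is a caricature of the idea, not HOMER's actual model.

```python
import itertools

pv_kw  = [0, 5, 10, 15]            # candidate PV sizes (kW)
gen_kw = [0, 5, 10, 15, 20]        # candidate genset sizes (kW)
CAP_PV, CAP_GEN = 1000, 300        # $/kW installed (hypothetical)
FUEL, HOURS = 0.50, 8760           # $/kWh of genset output, hours per year
avg_load, peak, CF_PV = 10.0, 15.0, 0.25

def lifecycle_cost(pv, gen):
    if pv + gen < peak:                  # cannot cover peak demand
        return float("inf")
    pv_avg = min(avg_load, pv * CF_PV)   # average load served by solar
    if avg_load - pv_avg > gen:          # genset too small for the rest
        return float("inf")
    fuel = (avg_load - pv_avg) * HOURS * FUEL
    return CAP_PV * pv + CAP_GEN * gen + fuel

best = min(itertools.product(pv_kw, gen_kw),
           key=lambda cfg: lifecycle_cost(*cfg))
```

With these numbers the least-cost design is a hybrid (15 kW PV plus a 10 kW genset) rather than a generator-only system — the kind of ranking HOMER automates across thousands of configurations.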
Research on optimal path planning algorithm of task-oriented optical remote sensing satellites
NASA Astrophysics Data System (ADS)
Liu, Yunhe; Xu, Shengli; Liu, Fengjing; Yuan, Jingpeng
2015-08-01
A GEO task-oriented optical remote sensing satellite is well suited to long-term continuous monitoring and rapid imaging access. With the development of high-resolution optical payload technology and satellite attitude control technology, GEO optical remote sensing satellites will become an important development direction for aerospace remote sensing in the near future. In this paper, we focus on the plane-array staring imaging characteristics of GEO optical remote sensing satellites and the real-time earth observation mission mode, aim to satisfy user needs at minimum maneuver cost, and put forward an optimal path planning algorithm centered on the transformation from geographic coordinate space to the field-of-view plane, ultimately reducing the burden on the control system. In this algorithm, a bounded irregular closed area on the ground is transformed, via the coordinate transformation relations, into the reference plane for the field of view of the satellite payload; the branch and bound method is then used to search for feasible solutions, cutting off non-feasible solutions in the solution space with a pruning strategy; finally, suboptimal feasible solutions are trimmed according to the optimization index until the globally optimal feasible solution remains. Simulation and visualization software testing results verified the feasibility and effectiveness of the strategy.
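The branch-and-bound search at the heart of the algorithm can be illustrated on a toy instance: visit a set of imaging targets in the field plane with minimum total slew, pruning any partial path whose accumulated cost already exceeds the best complete tour. The geometry is invented, and the bound used here (cost so far) is the simplest possible; real implementations add tighter lower bounds.

```python
import math

targets = [(0, 0), (3, 0), (3, 4), (0, 4)]   # hypothetical field-plane points

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

best = [float("inf"), None]                  # [cost, path]

def branch(path, cost, remaining):
    if cost >= best[0]:                      # bound: prune this branch
        return
    if not remaining:                        # leaf: complete feasible tour
        best[0], best[1] = cost, path
        return
    for i, t in enumerate(remaining):        # branch on the next target
        branch(path + [t], cost + dist(path[-1], t),
               remaining[:i] + remaining[i + 1:])

branch([targets[0]], 0.0, targets[1:])
```

For the 3 × 4 rectangle the optimal open tour from the corner has length 3 + 4 + 3 = 10, and every permutation that starts badly is cut off before being fully expanded.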
Process optimization electrospinning fibrous material based on polyhydroxybutyrate
NASA Astrophysics Data System (ADS)
Olkhov, A. A.; Tyubaeva, P. M.; Staroverova, O. V.; Mastalygina, E. E.; Popov, A. A.; Ischenko, A. A.; Iordanskii, A. L.
2016-05-01
The article analyzes the influence of the main technological parameters of electrospinning on the morphology and properties of ultrathin fibers based on polyhydroxybutyrate (PHB). It is found that the electrical conductivity and viscosity of the spinning solution affect the process of forming the fiber macrostructure. Adjusting the viscosity and conductivity of the spinning solution makes it possible to control the geometry of the PHB-based fiber materials. The resulting fibers have found use in medicine, particularly in structural elements for the musculoskeletal system.
Sun, J; Wang, T; Li, Z D; Shao, Y; Zhang, Z Y; Feng, H; Zou, D H; Chen, Y J
2017-12-01
To reconstruct a vehicle-bicycle-cyclist crash accident and analyse the injuries using 3D laser scanning technology, multi-rigid-body dynamics, and an optimized genetic algorithm, and to provide a biomechanical basis for the forensic identification of the cause of death. The vehicle was measured by 3D laser scanning technology. Multi-rigid-body models of the cyclist, bicycle, and vehicle were developed based on the measurements. The value range of the optimization variables was set. A multi-objective genetic algorithm and the nondominated sorting genetic algorithm were used to find the optimal solutions, which were compared to the recording of the surveillance video near the accident scene. The reconstruction result of laser scanning on the vehicle was satisfactory. In the optimal solutions found by the genetic algorithm, the dynamical behaviours of the dummy, bicycle, and vehicle corresponded to those recorded by the surveillance video. The injury parameters of the dummy were consistent with the location and nature of the real injuries on the cyclist in the accident. The motion status before the accident, the damage process in the crash, and a mechanical analysis of the victim's injuries can be reconstructed using 3D laser scanning technology, multi-rigid-body dynamics, and an optimized genetic algorithm, which have application value in the identification of injury manner and analysis of the cause of death in traffic accidents. Copyright© by the Editorial Department of Journal of Forensic Medicine
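The nondominated sorting step that gives the NSGA algorithm its name reduces to a small function. The two objectives below (both minimized) are hypothetical stand-ins for, say, deviation from the video timing and deviation from the documented injuries:

```python
def dominates(a, b):
    # a dominates b if it is no worse in every objective and better in one.
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def first_front(points):
    # The first Pareto front: points that no other point dominates.
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
front = first_front(pts)
```

Here (3.0, 3.0) is dominated by (2.0, 2.0) and drops out; the surviving front is what the algorithm carries forward for comparison against the surveillance-video evidence.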
Hybrid Nested Partitions and Math Programming Framework for Large-scale Combinatorial Optimization
2010-03-31
optimization problems: 1) exact algorithms and 2) metaheuristic algorithms. This project will integrate concepts from these two technologies to develop... optimal solutions within an acceptable amount of computation time, and 2) metaheuristic algorithms such as genetic algorithms, tabu search, and the... integer programming decomposition approaches, such as Dantzig-Wolfe decomposition and Lagrangian relaxation, and metaheuristics such as the Nested
I-STEM Ed Exemplar: Implementation of the PIRPOSAL Model
ERIC Educational Resources Information Center
Wells, John G.
2016-01-01
The opening pages of the first PIRPOSAL (Problem Identification, Ideation, Research, Potential Solutions, Optimization, Solution Evaluation, Alterations, and Learned Outcomes) article make the case that the instructional models currently used in K-12 Science, Technology, Engineering, and Mathematics (STEM) Education fall short of conveying their…
Zhao, Hui-ru; Ren, Zao; Liu, Chun-ye
2015-04-01
To compare the purification effect of saponins from Ziziphi Spinosae Semen with different types of macroporous adsorption resin, and to optimize the purification technology. The type of macroporous resin was selected by the static adsorption method. The optimum technological conditions for saponins from Ziziphi Spinosae Semen were screened by single-factor tests and Box-Behnken design-response surface methodology. AB-8 macroporous resin had a better purification effect on total saponins than the other resins; the optimum technological parameters were as follows: column height-diameter ratio 5:1, sample solution concentration 2.52 mg/mL, resin adsorption capacity 8.915 mg/g, elution with 3 BV of water, adsorption and elution flow rate 2 BV/h, elution solvent 75% ethanol, and elution solvent volume 5 BV. AB-8 macroporous resin has a good purification effect on jujuboside A. The optimized technology is stable and feasible.
The technological raw material heating furnaces operation efficiency improving issue
NASA Astrophysics Data System (ADS)
Paramonov, A. M.
2017-08-01
The issue of improving the efficiency of fuel oil use in technological raw material heating furnaces by means of combustion intensification is considered in the paper. The technical and economic optimization problem of heating the fuel oil before combustion is solved. A method and algorithm for determining the optimal fuel oil preheating temperature were developed, analytically accounting for the correlation between thermal and operating parameters and the discounted costs of the heating furnace. The obtained optimization functional ensures that the heating furnace achieves the appropriate thermal indices at minimum discounted cost. The results of the research prove the expediency of the proposed solutions.
Dynamic modeling and optimization for space logistics using time-expanded networks
NASA Astrophysics Data System (ADS)
Ho, Koki; de Weck, Olivier L.; Hoffman, Jeffrey A.; Shishko, Robert
2014-12-01
This research develops a dynamic logistics network formulation for lifecycle optimization of mission sequences as a system-level integrated method to find an optimal combination of technologies to be used at each stage of the campaign. This formulation can find the optimal transportation architecture while considering its technology trades over time. The proposed methodologies are inspired by ground logistics analysis techniques based on linear programming network optimization. In particular, the time-expanded network and its extension are developed for dynamic space logistics network optimization, trading the quality of the solution against the computational load. In this paper, the methodologies are applied to a human Mars exploration architecture design problem. The results reveal multiple dynamic system-level trades over time and lead to recommendations for the optimal strategy for the human Mars exploration architecture. The considered trades include those between In-Situ Resource Utilization (ISRU) and propulsion technologies as well as the orbit and depot location selections over time. This research serves as a precursor for eventual permanent settlement and colonization of other planets by humans and for humanity becoming a multi-planet species.
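A time-expanded network can be sketched directly: nodes are (location, time) pairs, arcs either wait one period or move between locations, and a shortest-path search over the expanded graph picks the cheapest campaign. The locations, horizon, and "cheap launch window" cost structure below are all invented for illustration.

```python
import heapq

HORIZON = 4                                   # number of time periods

def arcs(node):
    loc, t = node
    if t + 1 > HORIZON:
        return
    yield (loc, t + 1), 0.1                   # waiting is cheap
    if loc == "Earth":
        yield ("Orbit", t + 1), 5.0           # launch to orbit
    if loc == "Orbit":                        # cheap Mars window at t == 1
        yield ("Mars", t + 1), 3.0 if t == 1 else 8.0

def cheapest(src, dst):
    # Dijkstra over the time-expanded graph.
    pq, seen = [(0.0, src)], set()
    while pq:
        cost, node = heapq.heappop(pq)
        if node[0] == dst:
            return cost
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in arcs(node):
            heapq.heappush(pq, (cost + c, nxt))
    return float("inf")

total = cheapest(("Earth", 0), "Mars")
```

The search times the launch so the transfer departs orbit exactly at the cheap window, which is precisely the dynamic, time-dependent trade the formulation is built to capture.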
NASA Astrophysics Data System (ADS)
Bilal, Bisma; Ahmed, Suhaib; Kakkar, Vipan
2018-02-01
The challenges which CMOS technology is facing toward the end of the technology roadmap call for an investigation of various logical and technological alternatives to CMOS at the nanoscale. Two such paradigms considered in this paper are reversible logic and quantum-dot cellular automata (QCA) nanotechnology. First, a new 3 × 3 reversible and universal gate, RG-QCA, is proposed and implemented in QCA technology using conventional 3-input majority voter based logic. The gate is further optimized by using explicit interaction of cells, and this optimized gate is then used to design an optimized modular full adder in QCA. Another configuration of the RG-QCA gate, CRG-QCA, is then proposed: a 4 × 4 gate that includes fault-tolerant characteristics and a parity-preserving nature. The proposed CRG-QCA gate is then tested in the design of a fault-tolerant full adder circuit. Extensive comparisons of the gate and adder circuits are drawn with the existing literature, and it is envisaged that the proposed designs perform better and are cost-efficient in QCA technology.
Mallam, Steven C; Lundh, Monica; MacKinnon, Scott N
2017-03-01
Computer-aided solutions are essential for naval architects to manage and optimize technical complexities when developing a ship's design. Although there is an array of software solutions aimed at optimizing the human element in design, practical ergonomics methodologies and technological solutions have struggled to gain widespread application in ship design processes. This paper explores how a new ergonomics technology is perceived by naval architecture students using a mixed-methods framework. Thirteen Naval Architecture and Ocean Engineering Masters students participated in the study. Overall, results found participants perceived the software and its embedded ergonomics tools to benefit their design work, increasing their empathy and ability to understand the work environment and work demands end-users face. However, participants questioned whether ergonomics could be practically and efficiently implemented under real-world project constraints. This revealed underlying social biases and a fundamental lack of understanding in engineering postgraduate students regarding applied ergonomics in naval architecture. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Koneva, M. S.; Rudenko, O. V.; Usatikov, S. V.; Bugaets, N. A.; Tereshchenko, I. V.
2018-05-01
To reduce the duration of the process and to ensure the microbiological purity of the germinated material, an improved germination method has been developed based on the combined use of physical factors: electrochemically activated water (ECHA-water) and an electromagnetic field of extremely low frequency (EMF ELF), with round-the-clock artificial illumination by LED lamps. The paper considers increasing the efficiency of the numerical technology for solving the computational problems of parametric optimization of the hydroponic germination of wheat grains, a setting in which the quality criteria are contradictory and some of them are given by implicit functions of many variables. A solution algorithm is offered that avoids constructing the Pareto set: a relatively small number of elements of the set of alternatives is used to form a linear convolution of the criteria with given weights, each criterion normalized to its "ideal" value obtained from a single-criterion optimization. The use of the proposed mathematical models describing hydroponic wheat germination made it possible to intensify the germination process and to shorten the time of obtaining "Altayskaya 105" wheat sprouts by 27 hours.
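The linear convolution with ideal-point normalization described in the abstract can be sketched in a few lines. All numbers, names, and the candidate regimes below are invented for illustration; only the scalarization scheme follows the abstract.

```python
# Weighted-sum scalarization of contradictory minimization criteria, each
# normalized to its "ideal" value from a single-criterion optimization.

def convolution(criteria, ideals, weights):
    """Scalarize minimization criteria f_i as sum(w_i * f_i / f_i_ideal)."""
    return sum(w * f / f_star for f, f_star, w in zip(criteria, ideals, weights))

# Hypothetical germination regimes: (duration in hours, microbial index)
candidates = {
    "regime_A": (27.0, 0.12),
    "regime_B": (30.0, 0.05),
    "regime_C": (25.0, 0.20),
}
ideals = (25.0, 0.05)   # best value of each criterion taken alone
weights = (0.5, 0.5)

best = min(candidates, key=lambda k: convolution(candidates[k], ideals, weights))
print(best)  # the regime with the smallest weighted normalized score
```

Normalizing by the ideal values keeps criteria of very different scales (hours versus a dimensionless index) commensurable before the weights are applied.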
Simplify to survive: prescriptive layouts ensure profitable scaling to 32nm and beyond
NASA Astrophysics Data System (ADS)
Liebmann, Lars; Pileggi, Larry; Hibbeler, Jason; Rovner, Vyacheslav; Jhaveri, Tejas; Northrop, Greg
2009-03-01
The time-to-market-driven need to maintain concurrent process-design co-development, even amid discontinuous patterning, process, and device innovation, is reiterated. The escalating design rule complexity resulting from increasing layout sensitivities in physical and electrical yield, and the resulting risk to profitable technology scaling, is reviewed. Shortcomings in traditional Design for Manufacturability (DfM) solutions are identified and contrasted with the highly successful integrated design-technology co-optimization used for SRAM and other memory arrays. The feasibility of extending memory-style design-technology co-optimization, based on a highly simplified layout environment, to logic chips is demonstrated. Layout density benefits, modeled patterning and electrical yield improvements, as well as substantially improved layout simplicity, are quantified in a conventional versus template-based design comparison on a 65nm IBM PowerPC 405 microprocessor core. The adaptability of this highly regularized template-based design solution to different yield concerns and design styles is shown in the extension of this work to 32nm with an increased focus on interconnect redundancy. In closing, the work not covered in this paper, focused on the process side of the integrated process-design co-optimization, is introduced.
Basic Principles of Electrical Network Reliability Optimization in Liberalised Electricity Market
NASA Astrophysics Data System (ADS)
Oleinikova, I.; Krishans, Z.; Mutule, A.
2008-01-01
The authors propose to select long-term solutions to the reliability problems of electrical networks at the development planning stage. The guidelines, or basic principles, of such optimization are: 1) its dynamical nature; 2) development sustainability; 3) integrated solution of the problems of network development and electricity supply reliability; 4) consideration of information uncertainty; 5) concurrent consideration of network and generation development problems; 6) application of specialized information technologies; 7) definition of requirements for independent electricity producers. In the article, the major aspects of the liberalized electricity market, its functions and tasks are reviewed, with emphasis placed on the optimization of electrical network development as a significant component of sustainable power system management.
Optimization education after project implementation: sharing "lessons learned" with staff.
Vaughn, Susan
2011-01-01
Implementations involving healthcare technology solutions focus on providing end-user education prior to the application going "live" in the organization. Benefits to postimplementation education for staff should be included when planning these projects. This author describes the traditional training provided during the implementation of a bar-coding medication project and then the optimization training 8 weeks later.
A New Model for a Carpool Matching Service.
Xia, Jizhe; Curtin, Kevin M; Li, Weihong; Zhao, Yonglong
2015-01-01
Carpooling is an effective means of reducing traffic. A carpool team shares a vehicle for their commute, which reduces the number of vehicles on the road during rush hour periods. Carpooling is officially sanctioned by most governments, and is supported by the construction of high-occupancy vehicle lanes. A number of carpooling services have been designed in order to match commuters into carpool teams, but it is known that the determination of optimal carpool teams is a combinatorially complex problem, and therefore technological solutions are difficult to achieve. In this paper, a model for carpool matching services is proposed, and both optimal and heuristic approaches are tested to find solutions for that model. The results show that different solution approaches are preferred over different ranges of problem instances. Most importantly, it is demonstrated that a new formulation and associated solution procedures can permit the determination of optimal carpool teams and routes. An instantiation of the model is presented (using the street network of Guangzhou city, China) to demonstrate how carpool teams can be determined.
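The heuristic side of such a matching service can be illustrated with a minimal sketch. This is our toy example, not the paper's formulation: it greedily pairs the two unmatched commuters whose home locations are closest, whereas the optimal approach in the paper solves a combinatorial program over teams and routes.

```python
# Greedy nearest-pair carpool matching (illustrative heuristic only).
import itertools
import math

def greedy_pairs(homes: dict[str, tuple[float, float]]) -> list[tuple[str, str]]:
    unmatched = set(homes)
    pairs = []
    while len(unmatched) >= 2:
        # sort for deterministic tie-breaking, then take the closest pair
        a, b = min(itertools.combinations(sorted(unmatched), 2),
                   key=lambda p: math.dist(homes[p[0]], homes[p[1]]))
        pairs.append((a, b))
        unmatched -= {a, b}
    return pairs

# Hypothetical commuter home coordinates
commuters = {"ann": (0, 0), "bob": (0, 1), "cai": (5, 5), "dee": (5, 6)}
print(greedy_pairs(commuters))  # pairs nearby commuters together
```

Greedy pairing runs in polynomial time but can be arbitrarily far from optimal on adversarial instances, which is exactly the gap between heuristic and exact approaches the abstract discusses.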
Nonreciprocal signal routing in an active quantum network
NASA Astrophysics Data System (ADS)
Metelmann, A.; Türeci, H. E.
2018-04-01
As superconducting quantum technologies move towards large-scale integrated circuits, a robust and flexible approach to routing photons at the quantum level becomes a critical problem. Active circuits, which contain parametrically driven elements selectively embedded in the circuit, offer a viable solution. Here, we present a general strategy for nonreciprocally routing quantum signals between two sites of a given lattice of oscillators, implementable with existing superconducting circuit components. Our approach makes use of a dual lattice of overdamped oscillators linking the nodes of the main lattice. Solutions for spatially selective driving of the lattice elements can be found, which optimally balance coherent and dissipative hopping of microwave photons to nonreciprocally route signals between two given nodes. In certain lattices these optimal solutions are obtained at the exceptional point of the dynamical matrix of the network. We also demonstrate that signal and noise transmission characteristics can be separately optimized.
NASA Astrophysics Data System (ADS)
Ivanov, V. V.; Popov, S. I.; Kirichek, A. V.
2018-03-01
The article proposes a technology for vibration finishing of aluminum alloys with simultaneous coating. On the basis of experimental studies, the cast alloys, working media, equipment operating modes, and activating solutions were chosen. The practical application of the developed technology to real parts is shown.
OPUS: Optimal Projection for Uncertain Systems. Volume 1
1991-09-01
unified control-design methodology that directly addresses these technology issues. In particular, optimal projection theory addresses the need for...effects, and limited identification accuracy in a 1-g environment. The principal contribution of OPUS is a unified design methodology that...characterizing solutions to constrained control-design problems. Transforming OPUS into a practical design methodology requires the development of
2011-12-01
Excerpt from the report's lists of figures and tables: Figure 156 – Abaqus thermal model attempting to characterize the thermal profile seen in the test data; ...optimization process; Figure 159 – Thermal profile for optimized Abaqus thermal solution; Figure 160 – LVDT...; Table 11 – Coefficients of thermal expansion results; Table 12 – LVDT correlation results
NASA Astrophysics Data System (ADS)
Morioka, Yasuki; Nakata, Toshihiko
In order to design an optimal biomass utilization system for rural areas, OMNIBUS (The Optimization Model for Neo-Integrated Biomass Utilization System) has been developed. OMNIBUS can derive the optimal system configuration for different objective functions, such as current account balance, amount of biomass energy supply, and CO2 emission. Most biomass resources in the focus region, e.g. wood biomass, livestock biomass, and crop residues, are considered in the model. The conversion technologies considered are energy utilization technologies, e.g. direct combustion and methane fermentation, and material utilization technologies, e.g. composting and carbonization. A case study in Miyakojima, Okinawa prefecture, has been carried out for several objective functions and constraint conditions. When the economics of the utilization system is the priority requirement, composting and combustion heat utilization are mainly chosen in the optimal system configuration. Gasification power plants and methane fermentation are included in optimal solutions only when both biomass energy utilization and CO2 reduction are set as higher priorities. The external benefit of CO2 reduction has a large impact on the system configuration: provided a marginal external benefit of more than 50,000 JPY/t-C, the external benefit becomes greater than the revenue from electricity, compost, etc. Considering technological learning in the future, expensive technologies such as gasification power plants and methane fermentation will gain economic feasibility as well as market competitiveness.
Rehabilitative and Assistive Technology: Overview
... to study how people with disabilities function in society. It includes studying barriers to optimal function and designing solutions so that people with disabilities can interact successfully in their environments. The NICHD houses the National Center for Medical ...
Technology, energy and the environment
NASA Astrophysics Data System (ADS)
Mitchell, Glenn Terry
This dissertation consists of three distinct papers concerned with technology, energy and the environment. The first paper is an empirical analysis of production under uncertainty, using agricultural production data from the central United States. Unlike previous work, this analysis identifies the effect of actual realizations of weather as well as farmers' expectations about weather. The results indicate that both of these are significant factors explaining short run profits in agriculture. Expectations about weather, called climate, affect production choices, and actual weather affects realized output. These results provide better understanding of the effect of climate change in agriculture. The second paper examines how emissions taxes induce innovation that reduces pollution. A polluting firm chooses technical improvement to minimize cost over an infinite horizon, given an emission tax set by a planner. This leads to a solution path for technical change. Changes in the tax rate affect the path for innovation. Setting the tax at equal to the marginal damage (which is optimal in a static setting with no technical change) is not optimal in the presence of technical change. When abatement is also available as an alternative to technical change, changes in the tax can have mixed effects, due to substitution effects. The third paper extends the theoretical framework for exploring the diffusion of new technologies. Information about new technologies spreads through the economy by means of a network. The pattern of diffusion will depend on the structure of this network. Observed networks are the result of an evolutionary process. This paper identifies how these evolutionary outcomes compare with optimal solutions. The conditions guaranteeing convergence to an optimal outcome are quite stringent. It is useful to determine the set of initial population states that do converge to an optimal outcome. 
The distribution of costs and benefits among the agents within an information processing structure plays a critical role in defining this set. These distributional arrangements represent alternative institutional regimes. Institutional changes can improve outcomes, free the flow of information, and encourage the diffusion of profitable new technologies.
Wang, Xiao-liang; Zhang, Yu-jie; Chen, Ming-xia; Wang, Ze-feng
2005-05-01
To optimize the extraction technology for the seed of Ziziphus jujuba var. spinosa, with total saponins, total jujubosides A and B, and total flavonoids as the targets. Using one-way and orthogonal tests, ethanol concentration, amount of ethanol, extraction time, and number of extractions were taken as the factors in the orthogonal test, each at three levels. Ethanol concentration and number of extractions had a significant effect on all targets; the other factors should be selected in accordance with production practice. The best extraction technology is to extract three times with an 8-fold volume of 60% ethanol solution, 1.5 h each time.
Pre-Hardware Optimization of Spacecraft Image Processing Algorithms and Hardware Implementation
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Petrick, David J.; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Day, John H. (Technical Monitor)
2002-01-01
Spacecraft telemetry rates and telemetry product complexity have steadily increased over the last decade, presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Satellite (GOES-8) image data processing and color picture generation application. Although large supercomputer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The proposed solution is based on a Personal Computer (PC) platform and a synergy of optimized software algorithms and reconfigurable computing (RC) hardware technologies, such as Field Programmable Gate Arrays (FPGAs) and Digital Signal Processors (DSPs). It has been shown that this approach can provide superior, inexpensive performance for a chosen application on the ground station or on board a spacecraft.
NASA Astrophysics Data System (ADS)
Mattii, Luca; Milojevic, Dragomir; Debacker, Peter; Berekovic, Mladen; Sherazi, Syed Muhammad Yasser; Chava, Bharani; Bardon, Marie Garcia; Schuddinck, Pieter; Rodopoulos, Dimitrios; Baert, Rogier; Gerousis, Vassilios; Ryckaert, Julien; Raghavan, Praveen
2018-01-01
Standard-cell design, technology choices, and place and route (P&R) efficiency are deeply interrelated in CMOS technology nodes below 10 nm, where cells with fewer tracks and higher pin densities pose increasingly challenging problems to the router in terms of congestion and pin accessibility. To evaluate and downselect the best solutions, a holistic design-technology co-optimization approach leveraging state-of-the-art P&R tools is thus necessary. We adopt such an approach using the imec N7 technology platform, with a contacted poly pitch of 42 nm and a tightest metal pitch of 32 nm, by comparing post-P&R area of an IP block for different standard cell configurations, technology options, and cell heights. Keeping the technology node and the set of ground rules unchanged, we demonstrate that a careful combination of these solutions can enable area gains of up to 50%, comparable with the area benefits of migrating to another node. We further demonstrate that these area benefits can be achieved at iso-performance with >20% reduced power. As conventional scaling through pitch reduction becomes ever more challenging at the end of the CMOS roadmap, constrained by lithography limits, material resistivity, manufacturability, and ultimately wafer cost, the approach shown herein offers a valid, attractive, and low-cost alternative.
Design technology co-optimization for 14/10nm metal1 double patterning layer
NASA Astrophysics Data System (ADS)
Duan, Yingli; Su, Xiaojing; Chen, Ying; Su, Yajuan; Shao, Feng; Zhang, Recco; Lei, Junjiang; Wei, Yayi
2016-03-01
Design and technology co-optimization (DTCO) can satisfy the needs of the design, generate robust design rules, and avoid unfriendly patterns at an early design stage, ensuring a high level of manufacturability of the product within the technical capability of the present process. The DTCO methodology in this paper mainly includes design rule translation, layout analysis, model validation, hotspot classification, and design rule optimization. Combining DTCO with double patterning technology (DPT) can optimize the related design rules and generate a friendlier layout that meets the requirements of the 14/10nm technology node. The experiment demonstrates the DPT-compliant DTCO methodology applied to a metal1 layer of the 14/10nm node. The proposed DTCO workflow is an efficient solution for optimizing the design rules of the 14/10nm metal1 layer. The paper also discusses and verifies how to tune the design rules for U-shape and L-shape structures in a DPT-aware metal layer.
Study on Silicon Microstructure Processing Technology Based on Porous Silicon
NASA Astrophysics Data System (ADS)
Shang, Yingqi; Zhang, Linchao; Qi, Hong; Wu, Yalin; Zhang, Yan; Chen, Jing
2018-03-01
Aiming at the heterogeneity problem of micro-sealed cavities in silicon microstructure processing, a technique for preparing micro-sealed cavities from porous silicon is proposed. The effects of different solutions, substrate doping concentrations, current densities, and etching times on the rate, porosity, thickness, and morphology of the prepared porous silicon were studied. Porous silicon was prepared under different process parameters, and the resulting samples were tested and analyzed; based on these results, the process parameters and experiments were optimized. The experimental results show that porous silicon formation can be controlled by optimizing the etching solution parameters and the substrate doping concentration, and porous silicon of different porosities can be obtained at different doping concentrations. This enables the preparation of silicon micro-sealed cavities, solves the heterogeneity problem of the sensor's sensitive micro-sealed cavity structure, and greatly broadens the sensor's range of applications.
NASA Astrophysics Data System (ADS)
Huang, C.; Hsu, N.
2013-12-01
This study incorporates Low-Impact Development (LID) rainwater catchment technology into the Storm Water Management Model (SWMM) to design the spatial capacity and quantity of rain barrels for urban flood mitigation, and proposes a simulation-optimization model for effectively searching for the optimal design. In the simulation method, a series of regular spatial distributions of the capacity and quantity of rainwater catchment facilities is designed, and the flood reduction achieved by each design is simulated with SWMM. The net benefit is then calculated as the decrease in inundation loss minus the facility cost, and the best solution from the simulation method serves as the initial solution for the optimization model. In the optimization method, the simulation results and a Back-Propagation Neural Network (BPNN) are first used to develop a water-level simulation model of the urban drainage system, replacing SWMM, which operates through a graphical user interface and is hard to couple with an optimization model. The BPNN-based simulation model is then embedded into the optimization model, whose objective function minimizes the negative net benefit. Finally, a tabu-search-based algorithm is established to optimize the planning solution. The developed method is applied to Zhonghe Dist., Taiwan. Results show that combining tabu search and the BPNN-based simulation model in the optimization not only finds solutions better than the simulation method by 12.75%, but also resolves the limitations of previous studies. Furthermore, the optimized spatial rain barrel design can reduce inundation loss by 72% in historical flood events.
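The tabu-search step described above can be sketched compactly. Everything below is a toy: the five candidate sites, the linear net-benefit model, and the parameter values are invented; the study's real objective comes from a BPNN surrogate of SWMM, not a closed-form function.

```python
# Toy tabu search over binary placement vectors: x[i] = 1 means a rain barrel
# is installed at site i; we maximize (avoided flood loss - facility cost).

def tabu_search(n, net_benefit, iters=100, tenure=5):
    x = [0] * n                              # start with no barrels placed
    best, best_val = x[:], net_benefit(x)
    tabu = {}                                # site -> iteration until which flipping it is forbidden
    for it in range(iters):
        # admissible single-site flips; fall back to all sites if everything is tabu
        moves = [i for i in range(n) if tabu.get(i, -1) < it] or list(range(n))
        i = max(moves, key=lambda j: net_benefit(x[:j] + [1 - x[j]] + x[j + 1:]))
        x[i] = 1 - x[i]                      # apply the best admissible flip
        tabu[i] = it + tenure                # forbid undoing it for `tenure` iterations
        if net_benefit(x) > best_val:        # keep the best solution ever visited
            best, best_val = x[:], net_benefit(x)
    return best, best_val

# Hypothetical model: site i avoids loss[i] of damage; each barrel costs 2 units
loss = [5, 1, 4, 1, 3]
benefit = lambda x: sum((l - 2) * xi for l, xi in zip(loss, x))
print(tabu_search(5, benefit))
```

The tabu list is what lets the search walk through worsening moves without immediately cycling back, which a plain hill climber cannot do.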
NASA Technical Reports Server (NTRS)
2000-01-01
Reality Capture Technologies, Inc. is a spinoff company from Ames Research Center. Offering e-business solutions for optimizing management, design, and production processes, RCT uses visual collaboration environments (VCEs) such as those used to prepare the Mars Pathfinder mission. The product, 4-D Reality Framework, allows multiple users from different locations to manage and share data. The insurance industry is one targeted commercial application for this technology.
Novel approach to investigation of semiconductor MOCVD by microreactor technology
NASA Astrophysics Data System (ADS)
Konakov, S. A.; Krzhizhanovskaya, V. V.
2017-11-01
Metal-Organic Chemical Vapour Deposition is a very complex technology that requires further investigation and optimization. We propose to apply microreactors to (1) replace multiple expensive, time-consuming macroscale experiments with a single microreactor deposition yielding many measurement points on one substrate, and (2) derive chemical reaction rates from individual deposition profiles using a theoretical analytical solution. In this paper we also present the analytical solution of a simplified equation describing the dependence of the deposition rate on temperature. It makes it possible to solve an inverse problem and to obtain detailed information about the chemical reaction mechanism of the MOCVD process.
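The inverse problem has a familiar flavor: in the kinetically limited regime, deposition rates often follow an Arrhenius law, so rates measured at two temperatures suffice to recover the activation energy. The sketch below is our simplified illustration, with made-up numbers; the paper's actual model equation is not reproduced here.

```python
# Two-point Arrhenius inversion: k = A * exp(-Ea / (R * T)) implies
# Ea = R * ln(k1/k2) / (1/T2 - 1/T1).
import math

R = 8.314  # gas constant, J/(mol*K)

def activation_energy(k1, T1, k2, T2):
    """Solve the two-point inverse problem for the activation energy Ea."""
    return R * math.log(k1 / k2) / (1 / T2 - 1 / T1)

# Round-trip check with synthetic data generated from a known Ea
A, Ea = 1e6, 150e3                     # prefactor and Ea = 150 kJ/mol (made up)
rate = lambda T: A * math.exp(-Ea / (R * T))
est = activation_energy(rate(900.0), 900.0, rate(1000.0), 1000.0)
print(est)  # recovers Ea up to floating-point error
```

With many deposition points on one substrate (the microreactor's advantage), the same idea generalizes to a least-squares fit of ln k against 1/T rather than a two-point formula.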
On-Chip Hardware for Cell Monitoring: Contact Imaging and Notch Filtering
2005-07-07
a polymer carrier. Spectrophotometer chosen and purchased for testing optical filters and materials. Characterization and comparison of fabricated...reproducibility of behavior. Multi-level SU8 process developed. Optimization of actuator for closing vial lids and development of lid-sealing technology is...bending angles characterized as a function of temperature in NaDBS solution. Photopatternable polymers are a viable interim packaging solution; through
NASA Astrophysics Data System (ADS)
Vecherin, Sergey N.; Wilson, D. Keith; Pettit, Chris L.
2010-04-01
Determination of an optimal configuration (numbers, types, and locations) of a sensor network is an important practical problem. In most applications, complex signal propagation effects and inhomogeneous coverage preferences lead to an optimal solution that is highly irregular and nonintuitive. The general optimization problem can be strictly formulated as a binary linear programming problem. Due to the combinatorial nature of this problem, however, its strict solution requires significant computational resources (NP-complete class of complexity) and is unobtainable for large spatial grids of candidate sensor locations. For this reason, a greedy algorithm for approximate solution was recently introduced [S. N. Vecherin, D. K. Wilson, and C. L. Pettit, "Optimal sensor placement with terrain-based constraints and signal propagation effects," Unattended Ground, Sea, and Air Sensor Technologies and Applications XI, SPIE Proc. Vol. 7333, paper 73330S (2009)]. Here further extensions to the developed algorithm are presented to include such practical needs and constraints as sensor availability, coverage by multiple sensors, and wireless communication of the sensor information. Both communication and detection are considered in a probabilistic framework. Communication signal and signature propagation effects are taken into account when calculating probabilities of communication and detection. Comparison of approximate and strict solutions on reduced-size problems suggests that the approximate algorithm yields quick and good solutions, which thus justifies using that algorithm for full-size problems. Examples of three-dimensional outdoor sensor placement are provided using a terrain-based software analysis tool.
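The greedy algorithm referenced in the abstract can be sketched in its simplest deterministic form. This is our simplification: the cited work optimizes over probabilistic detection and communication on terrain, whereas the toy below just does greedy maximum coverage over hypothetical coverage sets.

```python
# Greedy sensor placement: repeatedly add the candidate location that covers
# the most still-uncovered grid points, until the sensor budget is spent.

def greedy_placement(candidates: dict[str, set], budget: int) -> list[str]:
    covered: set = set()
    chosen = []
    for _ in range(budget):
        best = max(candidates, key=lambda c: len(candidates[c] - covered))
        if not candidates[best] - covered:
            break                       # no candidate adds coverage; stop early
        chosen.append(best)
        covered |= candidates[best]
    return chosen

# Hypothetical coverage sets: grid points each candidate location can monitor
cands = {
    "s1": {1, 2, 3},
    "s2": {3, 4},
    "s3": {4, 5, 6, 7},
    "s4": {1, 7},
}
print(greedy_placement(cands, budget=2))
```

Greedy maximum coverage carries a classic (1 - 1/e) approximation guarantee, which is one reason greedy solutions compare well against exact binary-programming solutions on reduced-size problems, as the abstract reports.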
Technology-design-manufacturing co-optimization for advanced mobile SoCs
NASA Astrophysics Data System (ADS)
Yang, Da; Gan, Chock; Chidambaram, P. R.; Nallapadi, Giri; Zhu, John; Song, S. C.; Xu, Jeff; Yeap, Geoffrey
2014-03-01
How to maintain Moore's-Law scaling beyond the 193 nm immersion resolution limit is the key question the semiconductor industry needs to answer in the near future. Process complexity will undoubtedly increase at the 14nm node and beyond, which brings both challenges and opportunities for technology development. A vertically integrated design-technology-manufacturing co-optimization flow is desired to better address the complicated issues that new process changes bring. In recent years, smart mobile wireless devices have been the fastest-growing consumer electronics market. Advanced mobile devices such as smartphones are complex systems whose overriding objective is to provide the best user-experience value by harnessing all the technology innovations. The most critical system drivers are better system performance and power efficiency, cost effectiveness, and smaller form factors, which in turn drive the need for system design and solutions with More-than-Moore innovations. Mobile systems-on-chip (SoCs) have become the leading driver for semiconductor technology definition and manufacturing. Here we highlight how the co-optimization strategy influenced architecture, device/circuit, process technology, and packaging, in the face of growing process cost, complexity, and variability as well as design rule restrictions.
Investment in Green Technologies
NASA Astrophysics Data System (ADS)
Das Gupta, Supratim
Since the middle of the 1970s, there has been considerable research about how to deal with exhaustible natural resources which are essential to production. In the absence of substitution possibilities, the finite stock of these resources acts as a limiting factor to continued growth of output and hence consumption possibilities. In our first chapter, we combine a finite natural resource and human capital in the production function and look at the possibility of maintaining a non-declining or sustainable level of consumption over an infinite horizon. Our results show that the return to human capital accumulation plays a key role in ensuring this objective. In our model without physical capital, we obtain a similar result where this return must be such that the fraction of time devoted to acquiring human capital each period is at least as much as the share of natural resources in output. Our second chapter focuses on the transition from a relatively cheap exhaustible natural resource (coal, gasoline) to an expensive alternative technology assumed to be in nearly unlimited supply (wind, solar). Due to significant cost differences between fossil-fuel-based energy sources and these alternative (backstop) technologies, their use is not as widespread. Public subsidies to research can however bring about innovation through cheaper production techniques which would significantly reduce the operating costs of these backstop technologies. But without sufficient incentives for investment and patent protections, individual firms typically underinvest in backstop technologies relative to the socially optimal level. In our paper, we find that this underinvestment in the backstop also leads to an under-extraction of the exhaustible natural resource. This implies that firms would conserve the natural resource for too long and switch later to the alternative technology relative to the socially optimal solution. We extend the chapter to include pollution as a flow variable.
Pollution from aggregate use of the natural resource is seen to not affect the behavior of an individual firm whereas it significantly affects that of the social planner. For relatively low pollution cost values, the socially optimal solution involves less investment in the backstop and conserving the natural resource for a longer period compared to the case without pollution. For higher values of the pollution cost, the social planner invests more in the backstop each period and switches sooner to the backstop compared to the case without pollution. In some situations, this may involve leaving behind some stock of the natural resource in the ground. The third chapter introduces pollution (a stock variable) through a deterioration of environmental quality. The structure of the second chapter is maintained here. Comparing the true pollution cost of the resource (in terms of a poor environmental quality) and the cost of the backstop technology, it is possible for the natural resource to be relatively more expensive. This arises in a situation of a very dirty environmental quality where the additional benefit from a slightly better environment exceeds the cost of the alternative cleaner technology. In this case, the optimal solution involves using the backstop at first for a few periods before making a discrete jump to a constant mix of using both the resource and the backstop technology. Here the economy settles at a steady state of environmental quality. It similarly follows that when the quality of the environment is relatively clean to begin with, the optimal solution involves starting with the cheaper but polluting natural resource before switching to a constant mix of using both the energy sources.
The promise of macromolecular crystallization in microfluidic chips
NASA Technical Reports Server (NTRS)
van der Woerd, Mark; Ferree, Darren; Pusey, Marc
2003-01-01
Microfluidics, or lab-on-a-chip technology, is proving to be a powerful, rapid, and efficient approach to a wide variety of bioanalytical and microscale biopreparative needs. The low materials consumption, combined with the potential for packing a large number of experiments in a few cubic centimeters, makes it an attractive technique for both initial screening and subsequent optimization of macromolecular crystallization conditions. Screening operations, which require a macromolecule solution with a standard set of premixed solutions, are relatively straightforward and have been successfully demonstrated in a microfluidics platform. Optimization methods, in which crystallization solutions are independently formulated from a range of stock solutions, are considerably more complex and have yet to be demonstrated. To be competitive with either approach, a microfluidics system must offer ease of operation, be able to maintain a sealed environment over several weeks to months, and give ready access for the observation and harvesting of crystals as they are grown.
NASA Astrophysics Data System (ADS)
Wu, Dongjun
Network industries have technologies characterized by a spatial hierarchy, the "network," with capital-intensive interconnections and time-dependent, capacity-limited flows of products and services through the network to customers. This dissertation studies service pricing, investment and business operating strategies for the electric power network. First-best solutions for a variety of pricing and investment problems have been studied. Genetic algorithms (GAs, methods based on the idea of natural evolution) have been evaluated as a primary means of solving complicated network problems, with respect to pricing as well as investment and other operating decisions. New constraint-handling techniques in GAs have been studied and tested. The application of such constraint-handling techniques to practical non-linear optimization problems has been tested on several complex network design problems with encouraging initial results. Genetic algorithms provide solutions that are feasible and close to optimal when the optimal solution is known; in some instances, the near-optimal solutions obtained by the proposed GA approach for small problems can only be verified by pushing the limits of currently available non-linear optimization software. The performance is far better than that of several commercially available GA programs, which are generally inadequate for the problems studied in this dissertation, primarily because of their poor handling of constraints. Genetic algorithms, if carefully designed, appear very promising for difficult problems that are intractable by traditional analytic methods.
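The dissertation's own algorithms are not reproduced in the abstract, but penalty functions are one standard constraint-handling technique in GAs. The sketch below evolves a solution to a toy constrained minimization; the penalty weight, population size, and operators are chosen purely for illustration, not taken from the dissertation.

```python
import random

def penalized_fitness(x, objective, constraints, penalty=1e3):
    # Add a penalty proportional to total constraint violation, so
    # infeasible candidates are ranked below comparable feasible ones.
    violation = sum(max(0.0, g(x)) for g in constraints)  # g(x) <= 0 means feasible
    return objective(x) + penalty * violation

def simple_ga(objective, constraints, bounds, pop=40, gens=200, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    population = [[rng.uniform(lo, hi) for _ in range(2)] for _ in range(pop)]
    for _ in range(gens):
        # Elitist selection: keep the better half, refill with offspring.
        population.sort(key=lambda x: penalized_fitness(x, objective, constraints))
        parents = population[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            # Averaging crossover plus Gaussian mutation, clipped to bounds.
            child = [(ai + bi) / 2 + rng.gauss(0, 0.1) for ai, bi in zip(a, b)]
            children.append([min(hi, max(lo, c)) for c in child])
        population = parents + children
    return min(population, key=lambda x: penalized_fitness(x, objective, constraints))

# Toy problem: minimize x^2 + y^2 subject to x + y >= 1 (optimum at x = y = 0.5).
best = simple_ga(lambda x: x[0] ** 2 + x[1] ** 2,
                 [lambda x: 1.0 - x[0] - x[1]],
                 bounds=(-2.0, 2.0))
```

The penalty weight trades off feasibility pressure against objective quality; too small a weight lets infeasible candidates win, which is the failure mode the dissertation attributes to off-the-shelf GA packages.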
Advanced Metalworking Solutions for Naval Systems That Go In Harm’s Way
2013-01-01
A quarter century of projects has included early research and development of technologies such as semi-solid metalworking, powder metallurgy, and process modeling and simulation. More recent projects have focused on friction stir welding, hybrid laser-arc welded metallic sandwich panels, and improved ... The Metalworking Center has optimized a wide variety of manufacturing technologies throughout its 25-year history, including powder metallurgy processing and semi-solid metalworking.
DOT National Transportation Integrated Search
2016-08-01
There is optimism that Automated Vehicles (AVs) can improve the safety of the transportation system, reduce congestion, increase reliability, and offer improved mobility solutions to all segments of the population, including the transportation-disadvantaged...
Kelly, Andrew John; Fausset, Cara Bailey; Rogers, Wendy; Fisk, Arthur D
2014-12-01
This study examined potential issues faced by older adults in managing their homes and their proposed solutions for overcoming hypothetical difficulties. Forty-four diverse, independently living older adults (66-85) participated in structured group interviews in which they discussed potential solutions to manage difficulties presented in four scenarios: perceptual, mobility, physical, and cognitive difficulties. The proposed solutions were classified using the Selection, Optimization, and Compensation (SOC) model. Participants indicated they would continue performing most tasks and reported a range of strategies to manage home maintenance challenges. Most participants reported that they would manage home maintenance challenges using compensation; the most frequently mentioned compensation strategy was using tools and technologies. There were also differences across the scenarios: Optimization was discussed most frequently with perceptual and cognitive difficulty scenarios. These results provide insights into supporting older adults' potential needs for aging-in-place and provide evidence of the value of the SOC model in applied research. © The Author(s) 2012.
NASA Astrophysics Data System (ADS)
Alemany, Kristina
Electric propulsion has recently become a viable technology for spacecraft, enabling shorter flight times, fewer required planetary gravity assists, larger payloads, and/or smaller launch vehicles. With the maturation of this technology, however, comes a new set of challenges in the area of trajectory design. Because low-thrust trajectory optimization has historically required long run-times and significant user-manipulation, mission design has relied on expert-based knowledge for selecting departure and arrival dates, times of flight, and/or target bodies and gravitational swing-bys. These choices are generally based on known configurations that have worked well in previous analyses or simply on trial and error. At the conceptual design level, however, the ability to explore the full extent of the design space is imperative to locating the best solutions in terms of mass and/or flight times. Beginning in 2005, the Global Trajectory Optimization Competition posed a series of difficult mission design problems, all requiring low-thrust propulsion and visiting one or more asteroids. These problems all had large ranges on the continuous variables---launch date, time of flight, and asteroid stay times (when applicable)---as well as being characterized by millions or even billions of possible asteroid sequences. Even with recent advances in low-thrust trajectory optimization, full enumeration of these problems was not possible within the stringent time limits of the competition. This investigation develops a systematic methodology for determining a broad suite of good solutions to the combinatorial, low-thrust, asteroid tour problem. The target application is for conceptual design, where broad exploration of the design space is critical, with the goal being to rapidly identify a reasonable number of promising solutions for future analysis. The proposed methodology has two steps. 
The first step applies a three-level heuristic sequence developed from the physics of the problem, which allows for efficient pruning of the design space. The second phase applies a global optimization scheme to locate a broad suite of good solutions to the reduced problem. The global optimization scheme developed combines a novel branch-and-bound algorithm with a genetic algorithm and an industry-standard low-thrust trajectory optimization program to solve for the following design variables: asteroid sequence, launch date, times of flight, and asteroid stay times. The methodology is developed based on a small sample problem, which is enumerated and solved so that all possible discretized solutions are known. The methodology is then validated by applying it to a larger intermediate sample problem, which also has a known solution. Next, the methodology is applied to several larger combinatorial asteroid rendezvous problems, using previously identified good solutions as validation benchmarks. These problems include the 2nd and 3rd Global Trajectory Optimization Competition problems. The methodology is shown to be capable of achieving a reduction in the number of asteroid sequences of 6-7 orders of magnitude, in terms of the number of sequences that require low-thrust optimization as compared to the number of sequences in the original problem. More than 70% of the previously known good solutions are identified, along with several new solutions that were not previously reported by any of the competitors. Overall, the methodology developed in this investigation provides an organized search technique for the low-thrust mission design of asteroid rendezvous problems.
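The branch-and-bound component above prunes partial asteroid sequences that cannot beat the incumbent. As a generic illustration of that pruning idea (the actual algorithm, cost model, and bounds are specific to the dissertation), the sketch below searches visit sequences over a hypothetical four-body cost table, abandoning any branch whose cost plus an optimistic completion bound already loses.

```python
def branch_and_bound(costs, n_stops):
    """Search visit sequences, pruning any partial sequence whose cost
    plus an optimistic bound on the remaining legs cannot beat the best
    complete sequence found so far."""
    bodies = list(costs)
    cheapest_leg = min(min(row.values()) for row in costs.values())
    best = (float("inf"), None)

    def extend(seq, cost):
        nonlocal best
        if len(seq) == n_stops:
            if cost < best[0]:
                best = (cost, seq)
            return
        # Optimistic completion: every remaining leg at the cheapest price.
        if cost + (n_stops - len(seq)) * cheapest_leg >= best[0]:
            return  # prune this whole branch
        for nxt in bodies:
            if nxt not in seq:
                extend(seq + (nxt,), cost + costs[seq[-1]][nxt])

    for start in bodies:
        extend((start,), 0.0)
    return best

# Hypothetical symmetric transfer costs between four bodies.
costs = {
    "A": {"B": 2.0, "C": 9.0, "D": 3.0},
    "B": {"A": 2.0, "C": 4.0, "D": 7.0},
    "C": {"A": 9.0, "B": 4.0, "D": 1.0},
    "D": {"A": 3.0, "B": 7.0, "C": 1.0},
}
best_cost, best_seq = branch_and_bound(costs, 4)
```

The tighter the lower bound, the earlier branches die; the dissertation's 6-7 order-of-magnitude reduction comes from combining such bounds with physics-based heuristic pruning before any low-thrust optimization is run.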
Method for aqueous gold thiosulfate extraction using copper-cyanide pretreated carbon adsorption
Young, Courtney; Melashvili, Mariam; Gow, Nicholas V
2013-08-06
A gold thiosulfate leaching process uses carbon to remove gold from the leach liquor. The activated carbon is pretreated with copper cyanide. A copper (on the carbon) to gold (in solution) ratio of at least 1.5 optimizes gold recovery from solution. To recover the gold from the carbon, conventional elution technology works but is dependent on the copper to gold ratio on the carbon.
Incentives to create and sustain healthy behaviors: technology solutions and research needs.
Teyhen, Deydre S; Aldag, Matt; Centola, Damon; Edinborough, Elton; Ghannadian, Jason D; Haught, Andrea; Jackson, Theresa; Kinn, Julie; Kunkler, Kevin J; Levine, Betty; Martindale, Valerie E; Neal, David; Snyder, Leslie B; Styn, Mindi A; Thorndike, Frances; Trabosh, Valerie; Parramore, David J
2014-12-01
Health-related technology, its relevance, and its availability are rapidly evolving. Technology offers great potential to minimize and/or mitigate barriers associated with achieving optimal health, performance, and readiness. In support of the U.S. Army Surgeon General's vision for a "System for Health" and its Performance Triad initiative, the U.S. Army Telemedicine and Advanced Technology Research Center hosted a workshop in April 2013 titled "Incentives to Create and Sustain Change for Health." Members of government and academia participated to identify and define the opportunities, gain clarity in leading practices and research gaps, and articulate the characteristics of future technology solutions to create and sustain real change in the health of individuals, the Army, and the nation. The key factors discussed included (1) public health messaging, (2) changing health habits and the environmental influence on health, (3) goal setting and tracking, (4) the role of incentives in behavior change intervention, and (5) the role of peer and social networks in change. This report summarizes the recommendations on how technology solutions could be employed to leverage evidence-based best practices and identifies gaps in research where further investigation is needed. Reprint & Copyright © 2014 Association of Military Surgeons of the U.S.
Method of Optimizing the Construction of Machining, Assembly and Control Devices
NASA Astrophysics Data System (ADS)
Iordache, D. M.; Costea, A.; Niţu, E. L.; Rizea, A. D.; Babă, A.
2017-10-01
Industry dynamics, driven by economic and social requirements, must generate more interest in technological optimization, capable of ensuring steady development of the advanced technical means that equip machining processes. For these reasons, the development of tools, devices, work equipment and control, together with the modernization of machine tools, is a sure route to modernizing production systems, though one that requires considerable time and effort. This type of approach also characterizes our theoretical, experimental and industrial applications of recent years, presented in this paper, whose main objectives are the elaboration and use of mathematical models, new calculation methods, optimization algorithms, new processing and control methods, as well as structures for the construction and configuration of technological equipment with a high level of performance and substantially reduced costs.
RCQ-GA: RDF Chain Query Optimization Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Hogenboom, Alexander; Milea, Viorel; Frasincar, Flavius; Kaymak, Uzay
The application of Semantic Web technologies in an Electronic Commerce environment implies a need for good support tools. Fast query engines are needed for efficient querying of large amounts of data, usually represented using RDF. We focus on optimizing a special class of SPARQL queries, the so-called RDF chain queries. For this purpose, we devise a genetic algorithm called RCQ-GA that determines the order in which joins need to be performed for an efficient evaluation of RDF chain queries. The approach is benchmarked against a two-phase optimization algorithm previously proposed in the literature. The more complex a query is, the more RCQ-GA outperforms the benchmark in solution quality, execution time needed, and consistency of solution quality. When the algorithms are constrained by a time limit, the overall performance of RCQ-GA compared to the benchmark further improves.
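RCQ-GA's actual encoding and cost model are described in the paper itself; as a hedged illustration of the general idea, the sketch below evolves a join order with a simplified evolutionary loop (elitist selection with swap mutation only, no crossover) against a toy left-deep cost model. The relation names, sizes, and fixed selectivity are all invented for the example.

```python
import random

def join_cost(order, sizes, selectivity=0.01):
    # Toy left-deep cost model (an assumption, not the paper's): each join
    # scales the running intermediate result by size * selectivity, and the
    # cost is the total size of all intermediates produced along the way.
    total, inter = 0.0, 1.0
    for rel in order:
        inter *= sizes[rel] * selectivity
        total += inter
    return total

def evolve_join_order(sizes, pop=30, gens=150, seed=1):
    rng = random.Random(seed)
    rels = list(sizes)
    population = [rng.sample(rels, len(rels)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda o: join_cost(o, sizes))
        survivors = population[: pop // 2]  # keep the cheaper half
        offspring = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.sample(range(len(child)), 2)  # swap mutation
            child[i], child[j] = child[j], child[i]
            offspring.append(child)
        population = survivors + offspring
    return min(population, key=lambda o: join_cost(o, sizes))

sizes = {"R1": 1000, "R2": 10, "R3": 500, "R4": 50}
best = evolve_join_order(sizes)
```

Under this cost model the cheapest plan joins small relations first so that intermediates stay small, which mirrors the intuition behind join-order optimization for chain queries.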
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Petrick, David J.; Day, John H. (Technical Monitor)
2001-01-01
Spacecraft telemetry rates have steadily increased over the last decade, presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Satellite (GOES-8) image processing application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The solution is based on a Personal Computer (PC) platform and a synergy of optimized software algorithms and re-configurable computing (RC) hardware technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processing (DSP). It has been shown in [1] and [2] that this configuration can provide superior inexpensive performance for a chosen application on the ground station or on board a spacecraft. However, since this technology is still maturing, intensive pre-hardware steps are necessary to achieve the benefits of hardware implementation. This paper describes these steps for the GOES-8 application, a software project developed using Interactive Data Language (IDL) (trademark of Research Systems, Inc.) on a Workstation/UNIX platform. The solution involves converting the application to a PC/Windows/RC platform, selected mainly for the availability of low-cost, adaptable high-speed RC hardware. In order for the hybrid system to run, the IDL software was modified to account for platform differences. It was interesting to examine the gains and losses in performance on the new platform, as well as unexpected observations, before implementing hardware. After substantial pre-hardware optimization steps, the necessity of hardware implementation for bottleneck code in the PC environment became evident and solvable, beginning with the methodology described in [1] and [2] and implementing a novel methodology for this specific application [6]. 
The PC-RC interface bandwidth problem for the class of applications with moderate input-output data rates but large intermediate multi-thread data streams has been addressed and mitigated. This opens a new class of satellite image processing applications for bottleneck problems solution using RC technologies. The issue of a science algorithm level of abstraction necessary for RC hardware implementation is also described. Selected Matlab functions already implemented in hardware were investigated for their direct applicability to the GOES-8 application with the intent to create a library of Matlab and IDL RC functions for ongoing work. A complete class of spacecraft image processing applications using embedded re-configurable computing technology to meet real-time requirements, including performance results and comparison with the existing system, is described in this paper.
NASA Astrophysics Data System (ADS)
Becerra Lopez, Humberto Ruben
2007-12-01
High expansion of power demand is expected in the Upper Rio Grande region (El Paso, Hudspeth, Culberson, Jeff Davis, Presidio and Brewster counties) as a result of both electrical demand growth and decommissioning of installed capacity. On the supply side, a notable deployment of renewable power technologies can be projected owing to the recent introduction of a new energy policy in Texas, which aims to reach 10,000 MWe of installed renewable capacity by 2025. Power generation fueled by natural gas might expand steadily due to the encouraged use of this fuel. In this context the array of participating technologies can be optimized, which, within a sustainability framework, translates into a multidimensional problem. The solution to the problem is presented in this dissertation in two main parts. The first part solves the thermodynamic-environmental problem by developing a dynamic model to project the maximum allowable expansion of technologies. Predetermined alternatives include diverse renewable energy technologies (wind turbines, photovoltaic conversion, hybrid solar thermal parabolic troughs, and solid oxide fuel cells), a conventional fossil-fuel technology (natural gas combined cycle), and a breakthrough fossil-fuel technology (solid oxide fuel cells). The analysis is based on the concept of cumulative exergy consumption, expanded to include abatement of emissions. A Gompertz sigmoid growth is assumed and constrained by both exergetic self-sustenance and regional energy resource availability. This part of the analysis assumes that power demand expansion is met by full deployment of alternative technologies backed up by conventional technology. Results show that, through a proper allowance for exergy reinvestment, the power demand expansion may be met largely by alternative technologies, minimizing primary resource depletion. 
The second part of the study makes use of the dynamic model to support a multi-objective optimization routine, where the exergetic and economic costs are established as primary competing factors. An optimization algorithm is implemented using the constraint method. The solution is given as Pareto optimality with arrays for minimum cost and possible arrays for the tradeoff front. These arrays are further analyzed in terms of sustainability, cumulative exergy loss (i.e. irreversibilities and waste exergy) and incremental economic cost, and the results are compared with the goals of current legislated energy policy.
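The constraint (ε-constraint) method mentioned above traces a Pareto front by repeatedly minimizing one objective while bounding the other. The sketch below applies it to a toy bi-objective problem standing in for the exergetic/economic tradeoff; the objective functions, candidate grid, and ε levels are illustrative only.

```python
def pareto_by_constraint_method(f1, f2, candidates, eps_levels):
    """ε-constraint sketch: for each bound eps on f2, pick the candidate
    minimizing f1 among those with f2(x) <= eps. Sweeping eps traces
    out the tradeoff (Pareto) front."""
    front = []
    for eps in eps_levels:
        feasible = [x for x in candidates if f2(x) <= eps]
        if feasible:
            chosen = min(feasible, key=f1)
            point = (f1(chosen), f2(chosen))
            if point not in front:
                front.append(point)
    return front

# Toy tradeoff: f1 = x^2 (stand-in for economic cost), f2 = (x - 2)^2
# (stand-in for exergetic cost); candidates on a coarse grid.
xs = [i / 10 for i in range(0, 21)]
front = pareto_by_constraint_method(lambda x: x * x,
                                    lambda x: (x - 2) ** 2,
                                    xs,
                                    [4.0, 2.0, 1.0, 0.5])
```

Tightening the bound on one objective forces the other to rise, so the returned points are ordered along the front: each successive point trades a worse f1 for a better f2.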
Favre, Leonardo Cristian; Dos Santos, Cristina; López-Fernández, María Paula; Mazzobre, María Florencia; Buera, María Del Pilar
2018-11-01
Thyme (Thymus vulgaris) has been demonstrated to extend the shelf-life of food products and is also a potential source of bioactive compounds. The aim of this research was to optimize ultrasound-assisted extraction employing β-cyclodextrin aqueous solutions as a non-contaminating technology, using Response Surface Methodology to obtain thyme extracts with the maximum antioxidant capacity. The optimal extraction conditions were a solution of β-cyclodextrin 15 mM and an ultrasonic treatment time of 5.9 min at a temperature of 36.6 °C. They resulted in an extract with a polyphenolic content of 189.3 mg GAE/mL, an antioxidant activity (DPPH) of 14.8 mg GAE/mL, and a ferric reducing/antioxidant power (FRAP) of 3.3 mg GAE/mL. Interestingly, the extract was shown to inhibit the production of Maillard browning products and can be considered a potential antiglycant agent. These data are important for developing eco-friendly technologies to obtain natural antioxidant extracts with a potential inhibitory capacity against the Maillard glycation reaction. Copyright © 2018 Elsevier Ltd. All rights reserved.
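In response-surface work like this, the optimum of a factor is read off a fitted second-order model. As a minimal stand-in (the study's actual experimental design and fitted model are not reproduced in the abstract), the sketch below fits an exact parabola through three hypothetical (time, response) points and returns its stationary point.

```python
def quadratic_vertex(p0, p1, p2):
    """Fit an exact parabola through three (x, y) points via divided
    differences and return the stationary point -- a minimal stand-in for
    the response-surface step of locating an optimal factor setting."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    d1 = (y1 - y0) / (x1 - x0)
    d2 = (y2 - y1) / (x2 - x1)
    a = (d2 - d1) / (x2 - x0)   # curvature (negative for a maximum)
    b = d1 - a * (x0 + x1)      # linear coefficient
    return -b / (2 * a)         # x at which dy/dx = 0

# Hypothetical sonication times (min) vs. antioxidant response:
t_opt = quadratic_vertex((2.0, 10.0), (6.0, 14.0), (10.0, 10.0))
```

A real RSM fit would regress a full quadratic in several factors over a designed experiment; the stationary-point idea is the same.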
Near-infrared surface-enhanced fluorescence using silver nanoparticles in solution
NASA Astrophysics Data System (ADS)
Furtaw, Michael D.
Fluorescence spectroscopy is a widely used detection technology in many research and clinical assays. Further improvement to assay sensitivity may enable earlier diagnosis of disease, novel biomarker discovery, and ultimately, improved outcomes of clinical care along with reduction in costs. Near-infrared, surface-enhanced fluorescence (NIR-SEF) is a promising approach to improve assay sensitivity via simultaneous increase in signal with a reduction in background. This dissertation describes research conducted with the overall goal to determine the extent to which fluorescence in solution may be enhanced by altering specific variables involved in the formation of plasmon-active nanostructures of dye-labeled protein and silver nanoparticles in solution, with the intent of providing a simple solution that may be readily adopted by current fluorescence users in the life science research community. First, it is shown that inner-filtering, re-absorption of the emitted photons, can red-shift the optimal fluorophore spectrum away from the resonant frequency of the plasmon-active nanostructure. It is also shown that, under certain conditions, the quality factor may be a better indicator of SEF than the commonly accepted overlap of the fluorophore spectrum with the plasmon resonance of the nanostructure. Next, it is determined that streptavidin is the best choice for carrier protein, among the most commonly used dye-labeled detection antibodies, to enable the largest fluorescence enhancement when labeled with IRDye 800CW and used in combination with silver nanoparticles in solution. It is shown that the relatively small and symmetric geometry of streptavidin enables substantial electromagnetic-field confinement when bound to silver nanoparticles, leading to strong and reproducible enhancement. The role of silver nanoparticle aggregation is demonstrated in a droplet-based microfluidic chip and further optimized in a standard microtiter-plate format. 
A NIR-SEF technology based on aggregation with optimized salt concentration demonstrates a fluorescence signal enhancement up to 2530-fold while improving the limit-of-detection over 1000-fold. Finally, the NIR-SEF technology is applied to demonstrate 42-fold improvement in sensitivity of the clinically-relevant biomarker, alpha-fetoprotein, along with a 16-fold improvement in limit-of-detection.
A new methodology for surcharge risk management in urban areas (case study: Gonbad-e-Kavus city).
Hooshyaripor, Farhad; Yazdi, Jafar
2017-02-01
This research presents a simulation-optimization model for urban flood mitigation, integrating the Non-dominated Sorting Genetic Algorithm (NSGA-II) with the Storm Water Management Model (SWMM) hydraulic model under a curve-number-based hydrologic model of low impact development technologies in Gonbad-e-Kavus, a small city in the north of Iran. In the developed model, the best performance of the system relies on the optimal layout and capacity of retention ponds over the study area so as to reduce surcharge from the manholes under a set of storm event loads, while the available investment plays a restricting role. Thus, there is a multi-objective optimization problem with two conflicting objectives, solved successfully by NSGA-II to find a set of optimal solutions known as the Pareto front. In order to analyze the results, a new factor, the investment priority index (IPI), is defined, which shows the risk of surcharging over the network and the priority of mitigation actions. The IPI is calculated using the probability of pond selection for candidate locations and the average depth of the ponds in all Pareto front solutions. The IPI can help decision makers arrange a long-term progressive plan with priority given to high-risk areas once an optimal solution has been selected.
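The abstract gives the ingredients of the IPI but not its exact formula; one plausible reading, sketched below, scores each candidate location by its selection frequency across Pareto solutions times the average depth of the ponds placed there. The junction names and depths are hypothetical.

```python
def investment_priority_index(pareto_solutions, locations):
    """IPI sketch (one plausible reading of the abstract, not the paper's
    exact formula): combine, per candidate location, how often a pond is
    selected there across Pareto solutions with the average depth of
    those ponds."""
    ipi = {}
    for loc in locations:
        depths = [s[loc] for s in pareto_solutions if s.get(loc, 0.0) > 0.0]
        p_select = len(depths) / len(pareto_solutions)
        avg_depth = sum(depths) / len(depths) if depths else 0.0
        ipi[loc] = p_select * avg_depth
    return ipi

# Hypothetical Pareto set: each solution maps a location to a pond depth (m).
solutions = [
    {"J1": 1.2, "J3": 0.8},
    {"J1": 1.0, "J2": 0.5},
    {"J1": 1.4, "J3": 0.6},
    {"J2": 0.4},
]
ipi = investment_priority_index(solutions, ["J1", "J2", "J3"])
```

A location that nearly every Pareto-optimal layout equips with a deep pond scores highest, which matches the abstract's use of the IPI to rank high-risk areas for early investment.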
NASA Astrophysics Data System (ADS)
Sun, Dongya; Gao, Yifan; Hou, Dianxun; Zuo, Kuichang; Chen, Xi; Liang, Peng; Zhang, Xiaoyuan; Ren, Zhiyong Jason; Huang, Xia
2018-04-01
Recovery of nutrient resources from wastewater is now an essential strategy for maintaining the supply of both nutrients and water for a growing population. However, intensive energy consumption remains the bottleneck of conventional nutrient recovery technologies on the path to sustainable nutrient recycling. This study proposed an enlarged microbial nutrient recovery cell (EMNRC) which was powered by the energy contained in wastewater and achieved multi-cycle nutrient recovery incorporated with in situ wastewater treatment. With the optimal recovery solution of 3 g/L NaCl and the optimal volume ratio of wastewater to recovery solution of 10:1, >89% of phosphorus and >62% of ammonium nitrogen were recovered into struvite. An extremely low water input ratio of <1% was required to obtain the recovered fertilizer and the purified water. The EMNRC system thus proved to be a promising technology that uses the chemical energy contained in wastewater itself to recover nutrients energy-neutrally during continuous wastewater purification.
NASA Astrophysics Data System (ADS)
Paek, Seung Weon; Kang, Jae Hyun; Ha, Naya; Kim, Byung-Moo; Jang, Dae-Hyun; Jeon, Junsu; Kim, DaeWook; Chung, Kun Young; Yu, Sung-eun; Park, Joo Hyun; Bae, SangMin; Song, DongSup; Noh, WooYoung; Kim, YoungDuck; Song, HyunSeok; Choi, HungBok; Kim, Kee Sup; Choi, Kyu-Myung; Choi, Woonhyuk; Jeon, JoongWon; Lee, JinWoo; Kim, Ki-Su; Park, SeongHo; Chung, No-Young; Lee, KangDuck; Hong, YoungKi; Kim, BongSeok
2012-03-01
A set of design for manufacturing (DFM) techniques has been developed and applied to 45nm, 32nm and 28nm logic process technologies. A novel methodology combined a number of potentially conflicting DFM techniques into a comprehensive solution. These techniques work in three phases for design optimization and one phase for silicon diagnostics. In the DFM prevention phase, foundation IP such as standard cells, IO, and memory, as well as the P&R tech file, are optimized. In the DFM solution phase, which happens during the ECO step, auto-fixing of process-weak patterns and advanced RC extraction are performed. In the DFM polishing phase, post-layout tuning is done to improve manufacturability. DFM analysis enables prioritization of random and systematic failures. The DFM technique presented in this paper has been silicon-proven with three successful tape-outs in Samsung 32nm processes; about 5% improvement in yield was achieved without any notable side effects. Visual inspection of silicon also confirmed the positive effect of the DFM techniques.
A Data-Driven Solution for Performance Improvement
NASA Technical Reports Server (NTRS)
2002-01-01
Marketed as the "Software of the Future," Optimal Engineering Systems P.I. EXPERT(TM) technology offers statistical process control and optimization techniques that are critical to businesses looking to restructure or accelerate operations in order to gain a competitive edge. Kennedy Space Center granted Optimal Engineering Systems the funding and aid necessary to develop a prototype of the process monitoring and improvement software. Completion of this prototype demonstrated that it was possible to integrate traditional statistical quality assurance tools with robust optimization techniques in a user- friendly format that is visually compelling. Using an expert system knowledge base, the software allows the user to determine objectives, capture constraints and out-of-control processes, predict results, and compute optimal process settings.
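P.I. EXPERT's internals are proprietary, so the sketch below shows only the generic statistical-process-control building block the abstract alludes to: Shewhart-style 3-sigma control limits computed from baseline samples and used to flag out-of-control points. The baseline data are hypothetical.

```python
def control_limits(samples, sigma_multiplier=3.0):
    """Shewhart-style control limits from historical samples: points
    beyond mean +/- 3 standard deviations are flagged out of control."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # sample variance
    sd = var ** 0.5
    return mean - sigma_multiplier * sd, mean + sigma_multiplier * sd

def out_of_control(samples, new_points):
    # Flag new observations falling outside the baseline control limits.
    lcl, ucl = control_limits(samples)
    return [x for x in new_points if x < lcl or x > ucl]

# Hypothetical in-control baseline measurements of a process variable:
baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]
flags = out_of_control(baseline, [10.05, 9.2, 10.6])
```

Flagged points are exactly the "out-of-control processes" an SPC tool would surface; an optimization layer such as the one described would then search for process settings that bring the variable back within limits.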
Real-time locating systems (RTLS) in healthcare: a condensed primer.
Kamel Boulos, Maged N; Berry, Geoff
2012-06-28
Real-time locating systems (RTLS, also known as real-time location systems) have become an important component of many existing ubiquitous location aware systems. While GPS (global positioning system) has been quite successful as an outdoor real-time locating solution, it fails to repeat this success indoors. A number of RTLS technologies have been used to solve indoor tracking problems. The ability to accurately track the location of assets and individuals indoors has many applications in healthcare. This paper provides a condensed primer of RTLS in healthcare, briefly covering the many options and technologies that are involved, as well as the various possible applications of RTLS in healthcare facilities and their potential benefits, including capital expenditure reduction and workflow and patient throughput improvements. The key to a successful RTLS deployment lies in picking the right RTLS option(s) and solution(s) for the application(s) or problem(s) at hand. Where this application-technology match has not been carefully thought of, any technology will be doomed to failure or to achieving less than optimal results. PMID:22741760
NASA Technical Reports Server (NTRS)
Kobler, Benjamin (Editor); Hariharan, P. C. (Editor)
1998-01-01
This document contains copies of those technical papers received in time for publication prior to the Sixth Goddard Conference on Mass Storage Systems and Technologies which is being held in cooperation with the Fifteenth IEEE Symposium on Mass Storage Systems at the University of Maryland-University College Inn and Conference Center March 23-26, 1998. As one of an ongoing series, this Conference continues to provide a forum for discussion of issues relevant to the management of large volumes of data. The Conference encourages all interested organizations to discuss long term mass storage requirements and experiences in fielding solutions. Emphasis is on current and future practical solutions addressing issues in data management, storage systems and media, data acquisition, long term retention of data, and data distribution. This year's discussion topics include architecture, tape optimization, new technology, performance, standards, site reports, vendor solutions. Tutorials will be available on shared file systems, file system backups, data mining, and the dynamics of obsolescence.
Baranauskaite, Juste; Ivanauskas, Liudas; Masteikova, Ruta; Kopustinskiene, Dalia; Baranauskas, Algirdas; Bernatoniene, Jurga
2017-09-01
The aim of this study was optimization of spray-drying process conditions for microencapsulation of Turkish oregano extract. Different concentrations of maltodextrin and gum arabic as encapsulating agents (wall materials), as well as the influence of selected processing variables, were evaluated. The optimal conditions were determined on the basis of the load of the main bioactive compounds (ursolic acid, rosmarinic acid and carvacrol) in the prepared microparticles, after comparison of all significant response variables using a desirability function. Physicomechanical properties of the powders such as flowability, wettability, solubility and moisture content, as well as product yield, encapsulation efficiency (EE), density, morphology and size distribution of the prepared microparticles, were determined. The results demonstrated that the optimal spray-drying mixture consisted of two parts of wall material solution and one part of ethanolic oregano extract, with a feed flow rate of 40 mL/min and an air inlet temperature of 170 °C. The optimal concentration of wall materials in solution was 20%, with a maltodextrin to gum arabic ratio of 8.74:1.26.
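The desirability-function step can be illustrated with the classic Derringer-Suich construction: each response is mapped to a [0, 1] desirability and the settings maximizing their geometric mean are taken as optimal. The response names and ranges below are hypothetical, not the study's values.

```python
def desirability_larger_is_better(y, y_min, y_max):
    # "Larger is better" desirability: 0 below y_min, 1 above y_max,
    # linear in between (the simplest Derringer-Suich shape).
    if y <= y_min:
        return 0.0
    if y >= y_max:
        return 1.0
    return (y - y_min) / (y_max - y_min)

def overall_desirability(responses, specs):
    """Geometric mean of per-response desirabilities; the run that
    maximizes this value wins. Response names and ranges here are
    hypothetical, not the study's."""
    ds = [desirability_larger_is_better(responses[k], lo, hi)
          for k, (lo, hi) in specs.items()]
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Hypothetical compound loads (mg/g) and acceptable ranges per response:
specs = {"ursolic": (0.0, 10.0), "rosmarinic": (0.0, 20.0), "carvacrol": (0.0, 5.0)}
run_a = {"ursolic": 8.0, "rosmarinic": 15.0, "carvacrol": 4.0}
run_b = {"ursolic": 3.0, "rosmarinic": 18.0, "carvacrol": 1.0}
```

Because the geometric mean is zero whenever any single desirability is zero, a run cannot compensate for a completely unacceptable response with excellence elsewhere, which is why this construction suits multi-response optimization of the kind described above.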
Optimization of parameters of special asynchronous electric drives
NASA Astrophysics Data System (ADS)
Karandey, V. Yu; Popov, B. K.; Popova, O. B.; Afanasyev, V. L.
2018-03-01
The article considers the problem of optimizing the parameters of special asynchronous electric drives. Solving this problem makes it possible to design and build special asynchronous electric drives for various industries with optimal mass, dimensional and power parameters, so that the prescribed characteristics of technological process control can be achieved at an optimal level of electric energy expenditure, process completion time, or other specified parameters. The solution obtained not only answers a particular optimization problem but also yields dependences between the optimized parameters of special asynchronous electric drives, for example as power, stator or rotor winding current, or induction in the air gap or in the steel of the magnetic cores changes. From these dependences, the necessary optimal parameter values of special asynchronous electric drives and their components can be chosen without repeated calculations.
Novel technologies for decontamination of fresh and minimally processed fruits and vegetables
USDA-ARS?s Scientific Manuscript database
The complex challenges facing producers and processors of fresh produce require creative applications of conventional treatments and innovative approaches to develop entirely novel treatments. The varied nature of fresh and fresh-cut produce demands solutions that are adapted and optimized for each ...
High performance photovoltaic applications using solution-processed small molecules.
Chen, Yongsheng; Wan, Xiangjian; Long, Guankui
2013-11-19
Energy remains a critical issue for the survival and prosperity of human civilization. Many experts believe that the eventual solution for sustainable energy is the use of direct solar energy as the main energy source. Among the options for renewable energy, photovoltaic technologies that harness solar energy offer an essentially unlimited resource with minimal environmental impact, in contrast with alternatives such as hydro, nuclear, and wind energy. Currently, almost all commercial photovoltaic technologies use Si-based technology, which has a number of disadvantages, including high cost, lack of flexibility, and the serious environmental impact of the Si industry. Other technologies, such as organic photovoltaic (OPV) cells, can overcome some of these issues. Today, polymer-based OPV (P-OPV) devices have achieved power conversion efficiencies (PCEs) that exceed 9%. Compared with P-OPV, small-molecule-based OPV (SM-OPV) offers further advantages, including a defined structure for more reproducible performance, higher mobility and open-circuit voltage, and easier synthetic control that leads to more diversified structures. Therefore, while largely undeveloped, SM-OPV is an important emerging technology with performance comparable to P-OPV. In this Account, we summarize our recent results on solution-processed SM-OPV. We believe that solution processing is essential for taking full advantage of OPV technologies. Our work started with the synthesis of oligothiophene derivatives with an acceptor-donor-acceptor (A-D-A) structure. Both the backbone conjugation length and the electron-withdrawing terminal groups play an important role in the light absorption, energy levels and performance of the devices. Among those molecules, devices using a 7-thiophene-unit backbone and a 3-ethylrhodanine (RD) terminal unit produced a 6.1% PCE. With the optimized conjugation length and terminal unit, we borrowed from the results with P-OPV devices to optimize the backbone.
Thus we selected BDT (benzo[1,2-b:4,5-b']dithiophene) and DTS (dithienosilole) to replace the central thiophene unit, leading to a PCE of 8.12%. In addition to our molecules, Bazan and co-workers have developed another excellent system using DTS as the core unit that has also achieved a PCE greater than 8%.
The Promise of Macromolecular Crystallization in Micro-fluidic Chips
NASA Technical Reports Server (NTRS)
vanderWoerd, Mark; Ferree, Darren; Pusey, Marc
2003-01-01
Micro-fluidics, or lab-on-a-chip technology, is proving to be a powerful, rapid, and efficient approach to a wide variety of bio-analytical and microscale bio-preparative needs. The low materials consumption, combined with the potential for packing a large number of experiments into a few cubic centimeters, makes it an attractive technique for both initial screening and subsequent optimization of macromolecular crystallization conditions. Screening operations, which require equilibrating macromolecule solution with a standard set of premixed solutions, are relatively straightforward and have been successfully demonstrated in a micro-fluidics platform. Optimization methods, in which crystallization solutions are independently formulated from a range of stock solutions, are considerably more complex and have yet to be demonstrated. To be competitive with either approach, a micro-fluidics system must offer ease of operation, be able to maintain a sealed environment over several weeks to months, and give ready access for the observation of crystals as they grow.
Polymer-based composites for aerospace: An overview of IMAST results
NASA Astrophysics Data System (ADS)
Milella, Eva; Cammarano, Aniello
2016-05-01
This paper gives an overview of technological results, achieved by IMAST, the Technological Cluster on Engineering of Polymeric Composite Materials and Structures, in the completed Research Projects in the aerospace field. In this sector, the Cluster developed different solutions: lightweight multifunctional fiber-reinforced polymer composites for aeronautic structures, advanced manufacturing processes (for the optimization of energy consumption and waste reduction) and multifunctional components (e.g., thermal, electrical, acoustic and fire resistance).
Technologies to Increase PV Hosting Capacity in Distribution Feeders: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Fei; Mather, Barry; Gotseff, Peter
This paper studies the distributed photovoltaic (PV) hosting capacity in distribution feeders by using the stochastic analysis approach. Multiple scenario simulations are conducted to analyze several factors that affect PV hosting capacity, including the existence of voltage regulator, PV location, the power factor of PV inverter and Volt/VAR control. Based on the conclusions obtained from simulation results, three approaches are then proposed to increase distributed PV hosting capacity, which can be formulated as the optimization problem to obtain the optimal solution. All technologies investigated in this paper utilize only existing assets in the feeder and therefore are implementable for a low cost. Additionally, the tool developed for these studies is described.
Optimisation multi-objectif des systemes energetiques
NASA Astrophysics Data System (ADS)
Dipama, Jean
The increasing demand for energy and environmental concerns related to greenhouse gas emissions are leading more and more private and public utilities to turn to nuclear energy as an alternative for the future. Nuclear power plants are therefore expected to undergo a large expansion in the coming years, and improved technologies will be put in place to support their development. This thesis considers the optimization of the thermodynamic cycle of the secondary loop of the Gentilly-2 nuclear power plant in terms of output power and thermal efficiency. Investigations are carried out to determine the optimal operating conditions of steam power cycles through the judicious combination of steam extractions at the different stages of the turbines. Whether for superheating or regeneration, we are in all cases confronted with an optimization problem involving two conflicting objectives, since increasing the efficiency implies decreasing the mechanical work and vice versa. Solving this kind of problem does not lead to a unique solution, but to a set of solutions that are trade-offs between the conflicting objectives. To find all of these solutions, called Pareto-optimal solutions, an appropriate optimization algorithm is required. Before starting the optimization of the secondary loop, we developed a thermodynamic model of the loop which includes models of the main thermal components (e.g., turbine, moisture separator-superheater, condenser, feedwater heater and deaerator). This model is used to calculate the thermodynamic state of the steam and water at the different points of the installation. The thermodynamic model was developed in Matlab and validated by comparing its predictions with operating data provided by the engineers of the power plant.
The optimizer, developed in VBA (Visual Basic for Applications), uses an optimization algorithm based on the principle of genetic algorithms, a stochastic optimization method that is very robust and widely used to solve problems usually difficult to handle by traditional methods. Genetic algorithms (GAs) were used in previous research and proved efficient in optimizing heat exchanger networks (HENs) (Dipama et al., 2008), where HENs were synthesized to recover the maximum heat in an industrial process; the optimization problem formulated in that work involved a single objective, namely the maximization of energy recovery. The optimization algorithm developed in this thesis extends the ability of GAs by taking several objectives into account simultaneously. This algorithm introduces an innovation in the method of finding optimal solutions: a technique that partitions the solution space into parallel grids called "watching corridors", which delimit the areas in which the most promising feasible solutions are found and which are used to guide the search towards optimal solutions. A measure of the progress of the search is incorporated into the optimization algorithm to make it self-adaptive, through the use of appropriate genetic operators at each stage of the optimization process. The proposed method allows fast convergence and ensures a diversity of solutions. Moreover, it gives the algorithm the ability to overcome difficulties associated with optimization problems having complex Pareto front landscapes (e.g., discontinuity, disjunction, etc.). The multi-objective optimization algorithm was first validated using numerical test problems from the literature as well as energy systems optimization problems.
Finally, the proposed optimization algorithm was applied to the optimization of the secondary loop of the Gentilly-2 nuclear power plant, and a set of solutions was found that allows the power plant to operate under optimal conditions. (Abstract shortened by UMI.)
Hiraishi, Kunihiko
2014-01-01
One of the significant topics in systems biology is the development of control theory for gene regulatory networks (GRNs). In typical control of GRNs, expression of some genes is inhibited (or activated) by manipulating external stimuli and the expression of other genes. Control theory of GRNs is expected to be applied to gene therapy technologies in the future. In this paper, a control method using a Boolean network (BN) is studied. A BN is widely used as a model of GRNs, in which gene expression is represented by a binary value (ON or OFF). In particular, a context-sensitive probabilistic Boolean network (CS-PBN), one of the extended models of BNs, is used. For CS-PBNs, the verification problem and the optimal control problem are considered. For the verification problem, a solution method using the probabilistic model checker PRISM is proposed. For the optimal control problem, a solution method using polynomial optimization is proposed. Finally, a numerical example on the WNT5A network, which is related to melanoma, is presented. The proposed methods provide useful tools for the control theory of GRNs. PMID:24587766
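A minimal sketch of how a probabilistic Boolean network updates gene states, using a hypothetical 3-gene wiring (not the WNT5A network from the paper):

```python
import random

# Toy 3-gene probabilistic Boolean network (invented wiring). Gene states
# are 0/1; each gene has candidate Boolean update rules chosen with fixed
# probabilities, and all genes update synchronously.
RULES = {
    0: [(0.7, lambda s: s[1] & ~s[2] & 1),   # activated by g1, inhibited by g2
        (0.3, lambda s: s[1])],              # alternative context: follow g1
    1: [(1.0, lambda s: s[0] | s[2])],
    2: [(1.0, lambda s: 1 - s[0])],          # inhibited by g0
}

def step(state, rng):
    nxt = []
    for gene in sorted(RULES):
        r, acc = rng.random(), 0.0
        for p, f in RULES[gene]:             # sample one rule per gene
            acc += p
            if r <= acc:
                nxt.append(f(state) & 1)
                break
    return tuple(nxt)

rng = random.Random(0)
state = (1, 0, 0)
for _ in range(5):
    state = step(state, rng)
print(state)
```

On this particular wiring the two candidate rules for gene 0 agree along the trajectory, so the run settles into a deterministic 2-cycle; in general the sampled rules make the dynamics a Markov chain over the 2^n states, which is what model checkers such as PRISM analyze.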
Aircraft conceptual design - an adaptable parametric sizing methodology
NASA Astrophysics Data System (ADS)
Coleman, Gary John, Jr.
Aerospace is a maturing industry with successful and refined baselines that work well for traditional missions, markets and technologies. However, when new markets (space tourism), new constraints (environmental) or new technologies (composites, natural laminar flow) emerge, the conventional solution is not necessarily best for the new situation, which raises the question: how does a design team quickly screen and compare novel solutions to conventional solutions for new aerospace challenges? The answer is rapid and flexible parametric sizing at the conceptual design stage. In the product design life-cycle, parametric sizing is the first step in screening the total vehicle in terms of mission, configuration and technology to quickly assess first-order design and mission sensitivities. During this phase, various missions and technologies are assessed, and the designer identifies design solutions of concepts and configurations that meet combinations of mission and technology. This research undertaking contributes to the state of the art in aircraft parametric sizing through (1) development of a dedicated conceptual design process and disciplinary methods library, (2) development of a novel and robust parametric sizing process based on 'best-practice' approaches found in the process and disciplinary methods library, and (3) application of the parametric sizing process to a variety of design missions (transonic, supersonic and hypersonic transports), different configurations (tail-aft, blended wing body, strut-braced wing, hypersonic blended bodies, etc.), and different technologies (composites, natural laminar flow, thrust-vectored control, etc.), in order to demonstrate the robustness of the methodology and unearth first-order design sensitivities to current and future aerospace design problems.
This research undertaking demonstrates the importance of this early design step in selecting the correct combination of mission, technologies and configuration to meet current aerospace challenges. The overarching goal is to avoid the recurring situation of optimizing an already ill-fated solution.
Abdullahi, Mohammed; Ngadi, Md Asri
2016-01-01
Cloud computing has attracted significant attention from the research community because of the rapid rate at which Information Technology services are migrating to its domain. Advances in virtualization technology have made cloud computing very popular by easing the deployment of application services. Tasks are submitted to cloud datacenters to be processed on a pay-as-you-go basis. Task scheduling is one of the significant research challenges in cloud computing environments. The current formulation of the task scheduling problem has been shown to be NP-complete, hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic nature of cloud resources makes optimal task scheduling non-trivial, so efficient task scheduling algorithms are required for optimal resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment through a proposed Simulated Annealing (SA) based SOS (SASOS), in order to improve the convergence rate and solution quality of SOS. The SOS algorithm has a strong global exploration capability and uses few parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploration ability to SOS. A fitness function is also proposed which takes into account the utilization level of virtual machines (VMs), reducing the makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan. PMID:27348127
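The SA ingredient of such a hybrid can be illustrated by a plain simulated-annealing scheduler that minimizes makespan over task-to-VM assignments; the task lengths, VM speeds and cooling schedule below are arbitrary illustration values, not the paper's setup:

```python
import math
import random

def makespan(assign, task_len, vm_speed):
    # Completion time of the busiest VM under a task -> VM assignment.
    load = [0.0] * len(vm_speed)
    for t, vm in enumerate(assign):
        load[vm] += task_len[t] / vm_speed[vm]
    return max(load)

def sa_schedule(task_len, vm_speed, iters=2000, t0=10.0, cool=0.995, seed=1):
    rng = random.Random(seed)
    n, m = len(task_len), len(vm_speed)
    cur = [rng.randrange(m) for _ in range(n)]
    cur_cost = makespan(cur, task_len, vm_speed)
    best, best_cost, temp = list(cur), cur_cost, t0
    for _ in range(iters):
        cand = list(cur)
        cand[rng.randrange(n)] = rng.randrange(m)      # move one task
        cost = makespan(cand, task_len, vm_speed)
        # Metropolis acceptance: always take improvements, sometimes
        # accept worse moves to escape local optima.
        if cost < cur_cost or rng.random() < math.exp(-(cost - cur_cost) / temp):
            cur, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = list(cand), cost
        temp *= cool
    return best, best_cost

tasks = [4, 8, 3, 7, 5, 2, 6, 1]   # task lengths (arbitrary units)
vms = [1.0, 2.0, 1.5]              # relative VM speeds
sched, span = sa_schedule(tasks, vms)
print(span)
```

The ideal lower bound here is total work divided by total speed (36 / 4.5 = 8), which the annealer approaches; in SASOS this acceptance rule refines candidate solutions produced by the SOS population search.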
Bernoulli substitution in the Ramsey model: Optimal trajectories under control constraints
NASA Astrophysics Data System (ADS)
Krasovskii, A. A.; Lebedev, P. D.; Tarasyev, A. M.
2017-05-01
We consider a neoclassical (economic) growth model. In the case of a Cobb-Douglas production function, the nonlinear Ramsey equation modeling capital dynamics is reduced to a linear differential equation via a Bernoulli substitution. This considerably facilitates the search for a solution to the optimal growth problem with logarithmic preferences. The study deals with solving the corresponding infinite-horizon optimal control problem. We consider a vector field of the Hamiltonian system in the Pontryagin maximum principle, taking into account control constraints. We prove the existence of two alternative steady states, depending on the constraints. A proposed algorithm for constructing growth trajectories combines methods of open-loop control and closed-loop regulatory control. For some levels of constraints and initial conditions, a closed-form solution is obtained. We also demonstrate the impact of technological change on the economic equilibrium dynamics. Results are supported by computer calculations.
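The Bernoulli substitution can be made concrete for capital dynamics of the form k' = s·k^α − δ·k under a Cobb-Douglas production function; substituting z = k^(1−α) yields the linear equation z' = (1−α)(s − δz). The parameter values below are illustrative, not those of the study:

```python
import math

# Capital accumulation k' = s*k**alpha - delta*k (Cobb-Douglas output,
# constant saving rate). Illustrative parameters, not the paper's.
alpha, s, delta = 0.3, 0.2, 0.05
k0, T, dt = 1.0, 50.0, 0.001

def k_closed(t):
    # Bernoulli substitution z = k**(1 - alpha) linearizes the ODE to
    # z' = (1 - alpha) * (s - delta * z), which has a closed-form solution.
    z_star = s / delta                       # steady state of z
    z0 = k0 ** (1 - alpha)
    z = z_star + (z0 - z_star) * math.exp(-(1 - alpha) * delta * t)
    return z ** (1 / (1 - alpha))

# Forward-Euler check against the original nonlinear ODE.
k, t = k0, 0.0
while t < T:
    k += dt * (s * k ** alpha - delta * k)
    t += dt
print(abs(k - k_closed(T)))   # small discretization error
```

The closed form also exposes the steady-state capital stock k* = (s/δ)^(1/(1−α)) directly, which is what makes the substitution so convenient for the optimal growth analysis.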
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Supinski, B.; Caliga, D.
2017-09-28
The primary objective of this project was to develop memory optimization technology to efficiently deliver data to, and distribute data within, the SRC-6's Field-Programmable Gate Array ("FPGA") based Multi-Adaptive Processors (MAPs). The hardware/software approach was to explore efficient MAP configurations and generate the compiler technology to exploit those configurations. This memory accessing technology represents an important step toward making reconfigurable symmetric multi-processor (SMP) architectures a cost-effective solution for large-scale scientific computing.
Li, Yufang; Zhao, Gang; Hossain, S M Chapal; Panhwar, Fazil; Sun, Wenyu; Kong, Fei; Zang, Chuanbao; Jiang, Zhendong
2017-06-01
Biobanking of organs by cryopreservation is an enabling technology for organ transplantation. Compared with the conventional slow-freezing method, vitreous cryopreservation is regarded as a more promising approach for long-term storage of organs. The major challenges to vitrification are devitrification and recrystallization during the warming process, and the metabolic and osmotic injuries induced by high concentrations of cryoprotective agents (CPAs). For a theoretical, model-based optimization of vitrification, the thermal properties of CPA solutions are indispensable. In this study, the thermal conductivities of M22 and of a vitrification solution containing ethylene glycol and dimethyl sulfoxide (two commonly used vitrification solutions) were measured using a self-made microscale hot probe with enameled copper wire over the temperature range 77-300 K. The data obtained by this study will further enrich knowledge of the thermal properties of CPA solutions at low temperatures, which is of primary importance for the optimization of vitrification.
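A common idealization of hot-probe conductivity measurement is the transient line-source model (assumed here for illustration, rather than taken from the paper): after an initial transient, the probe temperature rise follows ΔT = (q / 4πk)·ln(t) + C, so k is recovered from the slope of ΔT versus ln(t). With synthetic noiseless data for an assumed k:

```python
import math

# Transient line-source idealization of a hot-probe measurement.
# Synthetic data for an assumed conductivity, not the paper's measurements.
q = 2.0            # heater power per unit length, W/m
k_true = 0.55      # W/(m K)
times = [0.5 * 1.25 ** i for i in range(20)]                       # s
dT = [q / (4 * math.pi * k_true) * math.log(t) + 1.3 for t in times]

# Least-squares slope of dT against ln(t), then invert for k.
x = [math.log(t) for t in times]
n = len(x)
xm, ym = sum(x) / n, sum(dT) / n
slope = (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, dT))
         / sum((xi - xm) ** 2 for xi in x))
k_est = q / (4 * math.pi * slope)
print(round(k_est, 3))   # recovers 0.55 on noiseless data
```

With real probe data the early-time points (probe heat capacity, contact resistance) are discarded before fitting; only the log-linear portion is used.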
NASA Astrophysics Data System (ADS)
Luo, Qiankun; Wu, Jianfeng; Yang, Yun; Qian, Jiazhong; Wu, Jichun
2014-11-01
This study develops a new probabilistic multi-objective fast harmony search algorithm (PMOFHS) for optimal design of groundwater remediation systems under uncertainty associated with the hydraulic conductivity (K) of aquifers. The PMOFHS integrates the previously developed deterministic multi-objective optimization method, namely multi-objective fast harmony search algorithm (MOFHS) with a probabilistic sorting technique to search for Pareto-optimal solutions to multi-objective optimization problems in a noisy hydrogeological environment arising from insufficient K data. The PMOFHS is then coupled with the commonly used flow and transport codes, MODFLOW and MT3DMS, to identify the optimal design of groundwater remediation systems for a two-dimensional hypothetical test problem and a three-dimensional Indiana field application involving two objectives: (i) minimization of the total remediation cost through the engineering planning horizon, and (ii) minimization of the mass remaining in the aquifer at the end of the operational period, whereby the pump-and-treat (PAT) technology is used to clean up contaminated groundwater. Also, Monte Carlo (MC) analysis is employed to evaluate the effectiveness of the proposed methodology. Comprehensive analysis indicates that the proposed PMOFHS can find Pareto-optimal solutions with low variability and high reliability and is a potentially effective tool for optimizing multi-objective groundwater remediation problems under uncertainty.
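The Pareto-optimality criterion at the heart of such multi-objective searches can be sketched with a plain non-dominated filter over hypothetical (cost, residual-mass) design outcomes; the numbers are invented for illustration:

```python
def pareto_front(points):
    # A design dominates another if it is no worse in every objective and
    # differs somewhere (both objectives minimized here, e.g. remediation
    # cost and contaminant mass remaining at the end of operation).
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (cost, mass-remaining) outcomes of candidate remediation designs.
designs = [(5.0, 9.0), (6.0, 4.0), (4.0, 12.0),
           (7.0, 4.0), (8.0, 2.0), (6.5, 8.0)]
print(pareto_front(designs))
```

The probabilistic sorting in PMOFHS extends exactly this comparison to noisy objective values, ranking designs by how reliably they dominate across hydraulic-conductivity realizations.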
Technology Maturity for the Habitable-zone Exoplanet Imaging Mission (HabEx) Concept
NASA Astrophysics Data System (ADS)
Morgan, Rhonda; Warfield, Keith R.; Stahl, H. Philip; Mennesson, Bertrand; Nikzad, Shouleh; nissen, joel; Balasubramanian, Kunjithapatham; Krist, John; Mawet, Dimitri; Stapelfeldt, Karl; warwick, Steve
2018-01-01
HabEx Architecture A is a 4m unobscured telescope optimized for direct imaging and spectroscopy of potentially habitable exoplanets, and also enables a wide range of general astrophysics science. The exoplanet detection and characterization drives the enabling core technologies. A hybrid starlight suppression approach of a starshade and coronagraph diversifies technology maturation risk. In this poster we assess these exoplanet-driven technologies, including elements of coronagraphs, starshades, mirrors, jitter mitigation, wavefront control, and detectors. By utilizing high technology readiness solutions where feasible, and identifying required technology development that can begin early, HabEx will be well positioned for assessment by the community in 2020 Astrophysics Decadal Survey.
Spacecraft Attitude Maneuver Planning Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Kornfeld, Richard P.
2004-01-01
A key enabling technology that leads to greater spacecraft autonomy is the capability to autonomously and optimally slew the spacecraft from and to different attitudes while operating under a number of celestial and dynamic constraints. The task of finding an attitude trajectory that meets all the constraints is a formidable one, in particular for orbiting or fly-by spacecraft where the constraints and the initial and final conditions are time-varying. The approach for attitude path planning presented here makes full use of a priori constraint knowledge and is computationally tractable enough to be executed onboard a spacecraft. It is based on incorporating the constraints into a cost function and using a Genetic Algorithm to iteratively search for and optimize the solution. This results in a directed random search that explores a large part of the solution space while maintaining the knowledge of good solutions from iteration to iteration. A solution obtained this way may be used as is or as an initial guess for additional deterministic optimization algorithms. A number of representative case examples for time-fixed and time-varying conditions yielded search times that are typically on the order of minutes, thus demonstrating the viability of this method. The approach is applicable to all deep space and planet Earth missions requiring greater spacecraft autonomy, and greatly facilitates navigation and science observation planning.
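A toy version of the constraint-penalized GA search might look like the following; the 2-D attitude proxy, keep-out geometry, penalty weight and GA settings are all invented for illustration, and real slews would operate on quaternions with dynamic constraints:

```python
import math
import random

# GA sketch for a constrained slew: steer a boresight (2-D az/el proxy)
# from START to GOAL while penalizing passage through a keep-out cone
# around a bright object. Toy geometry, not a flight algorithm.
START, GOAL = (0.0, 0.0), (90.0, 30.0)
KEEP_OUT, RADIUS = (45.0, 15.0), 12.0
N_WAY = 4                               # intermediate waypoints per chromosome

def cost(way):
    pts = [START] + way + [GOAL]
    length = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    # Sample along each leg so segments crossing the cone are penalized too.
    penalty = 0.0
    for a, b in zip(pts, pts[1:]):
        for i in range(11):
            t = i / 10
            p = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
            penalty += max(0.0, RADIUS - math.dist(p, KEEP_OUT))
    return length + 50.0 * penalty      # constraint folded into the cost

def evolve(pop_size=60, gens=120, seed=3):
    rng = random.Random(seed)
    rand_pt = lambda: (rng.uniform(-10, 100), rng.uniform(-20, 50))
    pop = [[rand_pt() for _ in range(N_WAY)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        next_pop = pop[:10]                      # elitism
        while len(next_pop) < pop_size:
            a, b = rng.sample(pop[:30], 2)       # truncation selection
            cut = rng.randrange(1, N_WAY)
            child = a[:cut] + b[cut:]            # one-point crossover
            if rng.random() < 0.3:               # gaussian mutation
                i = rng.randrange(N_WAY)
                child[i] = (child[i][0] + rng.gauss(0, 5),
                            child[i][1] + rng.gauss(0, 5))
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=cost)

best = evolve()
print(round(cost(best), 1))
```

The GA output can then seed a deterministic local optimizer, mirroring the paper's use of the evolved solution as an initial guess.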
[Study on the extraction process and macroporous resin for purification of Timosaponin B II].
Liu, Yan-Ping; Ding, Yue; Zhang, Tong; Wang, Bing; Cai, Zhen-Zhen; Tao, Jian-Sheng
2013-06-01
To optimize the extraction process and macroporous resin purification of Timosaponin B II from Anemarrhena asphodeloides, an orthogonal design L9(3^4) was employed to optimize the reflux extraction conditions, taking the extraction yield of Timosaponin B II as the index. The adsorption-desorption characteristics of eight kinds of macroporous resins were evaluated, and the best resin was then chosen to optimize the purification process conditions. The optimum extraction conditions were as follows: the herb was extracted twice (2 hours each time), with 8.5-fold 50% ethanol the first time and 6-fold 50% ethanol the second time. HPD100 resin showed good adsorption-desorption properties for Timosaponin B II. The optimum technological conditions for the HPD100 resin were as follows: the loading solution concentration was 0.23 mg/mL; saturated adsorption was reached at 4/5 bed volume (BV) of resin; the resin was washed with 3 BV of water and 6 BV of 20% ethanol solution to remove impurities; and Timosaponin B II was then desorbed with 5 BV of ethanol solution. The purity of Timosaponin B II was about 50%. The optimized extraction and purification process is stable, efficient and suitable for industrial production.
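The range analysis typically paired with an L9(3^4) design can be sketched as follows; the yields are invented stand-ins for the measured extraction yields:

```python
# Range analysis of a hypothetical L9(3^4) orthogonal experiment: each row
# gives the level (0-2) of each of four factors; the yields are invented
# illustration values, not the paper's measured extraction yields.
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]
yields = [1.2, 1.5, 1.4, 1.8, 1.6, 1.3, 1.7, 1.9, 1.6]

best_levels, ranges = [], []
for f in range(4):
    # Mean yield at each level of factor f (3 runs per level by design).
    means = []
    for lvl in range(3):
        vals = [y for row, y in zip(L9, yields) if row[f] == lvl]
        means.append(sum(vals) / len(vals))
    best_levels.append(means.index(max(means)))
    ranges.append(max(means) - min(means))   # larger range = stronger factor

print(best_levels)   # best level of each factor
print(ranges)        # factor influence ordering
```

The orthogonality of the array is what lets nine runs stand in for all 81 factor-level combinations: each level of each factor appears with every level of every other factor equally often, so the per-level means are unconfounded.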
Enabling Controlling Complex Networks with Local Topological Information.
Li, Guoqi; Deng, Lei; Xiao, Gaoxi; Tang, Pei; Wen, Changyun; Hu, Wuhua; Pei, Jing; Shi, Luping; Stanley, H Eugene
2018-03-15
Complex networks characterize the nature of internal/external interactions in real-world systems, including social, economic, biological, ecological, and technological networks. Two issues remain obstacles to achieving control of large-scale networks: structural controllability, which describes the ability to guide a dynamical system from any initial state to any desired final state in finite time with a suitable choice of inputs; and optimal control, which is a typical control approach to minimize the cost of driving the network to a predefined state with a given number of control inputs. For large complex networks without global information of network topology, both problems remain essentially open. Here we combine graph theory and control theory to tackle the two problems in one go, using only local network topology information. For the structural controllability problem, a distributed local-game matching method is proposed, where every node plays a simple Bayesian game with local information and local interactions with adjacent nodes, ensuring a suboptimal solution at linear complexity. Starting from any structural controllability solution, a minimizing-longest-control-path method can efficiently reach a good solution for optimal control in large networks. Our results provide solutions for distributed complex network control and demonstrate a way to link structural controllability and optimal control together.
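The link between structural controllability and matchings can be illustrated with the classic maximum-matching criterion: unmatched nodes must be driven directly, so the minimum number of driver nodes is N_D = max(N − |M|, 1). The digraph below is a toy example, and this centralized augmenting-path matching is the standard global approach, not the paper's distributed local-game method:

```python
# Minimum driver-node count for structural controllability via the
# maximum-matching criterion: N_D = max(N - |maximum matching|, 1).
# Toy 5-node digraph, not from the paper.
EDGES = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
N = 5

def max_matching(edges, n):
    # Bipartite view of the digraph: out-copies of nodes on the left,
    # in-copies on the right; each edge (u, v) links left-u to right-v.
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    match_r = [-1] * n           # right node -> matched left node

    def augment(u, seen):
        # Try to match left node u, recursively re-matching conflicts.
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match_r[v] == -1 or augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    return sum(augment(u, set()) for u in range(n))

drivers = max(N - max_matching(EDGES, N), 1)
print(drivers)
```

Here node 0 has no incoming edge and nodes 1 and 2 share the single source 0, so one of them is necessarily unmatched: two driver nodes suffice.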
Some Ideas on the Microcomputer and the Information/Knowledge Workstation.
ERIC Educational Resources Information Center
Boon, J. A.; Pienaar, H.
1989-01-01
Identifies the optimal goal of knowledge workstations as the harmony of technology and human decision-making behaviors. Two types of decision-making processes are described and the application of each type to experimental and/or operational situations is discussed. Suggestions for technical solutions to machine-user interfaces are then offered.…
NASA Astrophysics Data System (ADS)
Bella, P.; Buček, P.; Ridzoň, M.; Mojžiš, M.; Parilák, L.'
2017-02-01
Production of multi-rifled seamless steel tubes is quite a new technology in Železiarne Podbrezová, so many technological questions emerge (process technology, input feedstock dimensions, material flow during drawing, etc.). Pilot experiments to fine-tune the process cost a lot of time and energy, and numerical simulation is an alternative way of achieving optimal parameters in the production technology; it reduces the number of experiments needed, lowering the overall development costs. However, for the numerical results to be relevant it is necessary to verify them against actual plant trials. The search for the optimal input feedstock dimension for drawing a multi-rifled tube with dimensions Ø28.6 mm × 6.3 mm is the main topic of this paper. As a secondary task, the effective position of the plug-die couple was determined via numerical simulation. Comparing the calculated results with actual numbers from plant trials, good agreement was observed.
Synthesis of Trigeneration Systems: Sensitivity Analyses and Resilience
Carvalho, Monica; Lozano, Miguel A.; Ramos, José; Serra, Luis M.
2013-01-01
This paper presents sensitivity and resilience analyses for a trigeneration system designed for a hospital. The following information is utilized to formulate an integer linear programming model: (1) energy service demands of the hospital, (2) technical and economic characteristics of the potential technologies for installation, (3) prices of the available utilities interchanged, and (4) financial parameters of the project. The solution of the model, minimizing the annual total cost, provides the optimal configuration of the system (technologies installed and number of pieces of equipment) and the optimal operation mode (operational load of equipment, interchange of utilities with the environment, convenience of wasting cogenerated heat, etc.) at each temporal interval defining the demand. The broad range of technical, economic, and institutional uncertainties throughout the life cycle of energy supply systems for buildings makes it necessary to delve more deeply into the fundamental properties of resilient systems: feasibility, flexibility and robustness. The resilience of the obtained solution is tested by varying, within reasonable limits, selected parameters: energy demand, amortization and maintenance factor, natural gas price, self-consumption of electricity, and time-of-delivery feed-in tariffs. PMID:24453881
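The configuration-selection idea can be sketched as a brute-force stand-in for the integer linear program: enumerate unit counts per candidate technology and pick the feasible configuration with minimum annual cost. All technologies, capacities and prices below are invented, and real models add operational detail (hourly demand periods, heat recovery, utility interchange):

```python
import itertools

# Toy equipment-selection problem: cover a fixed peak demand at minimum
# annual cost (annualized capital + fuel). All numbers are invented.
TECH = {   # name: (capacity kW, annualized capital cost, fuel cost per kWh)
    "gas_engine_CHP": (500, 40000, 0.06),
    "boiler":         (800, 12000, 0.09),
    "elec_chiller":   (400, 15000, 0.11),
}
DEMAND_KW = 1200          # peak demand to be covered
HOURS = 4000              # equivalent full-load hours per year

best = None
for counts in itertools.product(range(4), repeat=len(TECH)):
    cap = sum(c * TECH[t][0] for c, t in zip(counts, TECH))
    if cap < DEMAND_KW:
        continue          # infeasible: installed capacity below demand
    capital = sum(c * TECH[t][1] for c, t in zip(counts, TECH))
    # Dispatch cheapest-fuel units first up to the demand (merit order).
    units = sorted(
        (TECH[t][2], TECH[t][0]) for c, t in zip(counts, TECH) for _ in range(c)
    )
    rest, fuel = DEMAND_KW, 0.0
    for cost_kwh, kw in units:
        used = min(kw, rest)
        fuel += used * HOURS * cost_kwh
        rest -= used
    total = capital + fuel
    if best is None or total < best[0]:
        best = (total, counts)

print(best)   # (minimum annual cost, units installed per technology)
```

Enumeration works only because the toy search space is tiny (4^3 configurations); the ILP formulation in the paper scales this same trade-off between capital and operating cost to realistic technology sets and temporal demand profiles.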
Pareto joint inversion of 2D magnetotelluric and gravity data
NASA Astrophysics Data System (ADS)
Miernik, Katarzyna; Bogacz, Adrian; Kozubal, Adam; Danek, Tomasz; Wojdyła, Marek
2015-04-01
In this contribution, the first results of the project "Innovative technology of petrophysical parameters estimation of geological media using joint inversion algorithms" are described. At this stage of development, a Pareto joint inversion scheme for 2D MT and gravity data was used. Additionally, seismic data were used to set constraints for the inversion. A Sharp Boundary Interface (SBI) approach and a model description based on a set of polygons were used to limit the dimensionality of the solution space. The main engine was based on a modified Particle Swarm Optimization (PSO), adapted to handle two or more target functions at once. An additional algorithm was used to eliminate unrealistic solution proposals. Because PSO is a stochastic global optimization method, it requires many proposals to be evaluated to find a single Pareto solution and then compose a Pareto front. To optimize this stage, parallel computing was used for both the inversion engine and the 2D MT forward solver. The proposed approach to joint inversion has several advantages. First, the Pareto scheme eliminates the cumbersome rescaling of the target functions, which can strongly affect the final solution. Second, the whole set of solutions is created in one optimization run, providing a choice of the final solution; this choice can be based on qualitative data, which are usually very hard to incorporate into a regular inversion scheme. The SBI parameterisation not only limits the dimensionality of the problem, but also makes constraining the solution easier. At this stage of the work, the approach was tested using MT and gravity data because this combination is often used in practice. It is important to note that the general solution is not limited to these two methods and is flexible enough to be used with more than two sources of data.
The presented results were obtained for synthetic models imitating real geological conditions, in which the density distributions of interest are relatively shallow and the resistivity changes relate to deeper parts. Such conditions are well suited to joint inversion of MT and gravity data. The next stage of development will cover further code optimization and extensive tests on real data. The presented work was supported by the Polish National Centre for Research and Development under contract number POIG.01.04.00-12-279/13.
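A Pareto scheme like the one above must repeatedly filter candidate models down to the non-dominated set. A minimal Python sketch of that bookkeeping, with made-up misfit pairs standing in for the MT and gravity target functions (both minimized):

```python
# Sketch of Pareto-front bookkeeping: given candidate solutions scored on
# two misfit functions (e.g. MT misfit and gravity misfit, both to be
# minimized), keep only those not dominated by any other candidate.
# The numbers below are illustrative, not inversion output.

def pareto_front(points):
    """Return the points not dominated by any other point (minimization)."""
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

candidates = [(1.0, 5.0), (2.0, 2.0), (3.0, 1.0), (2.5, 2.5), (4.0, 4.0)]
print(pareto_front(candidates))   # -> [(1.0, 5.0), (2.0, 2.0), (3.0, 1.0)]
```

The O(n²) check is fine for the small proposal sets a sketch handles; production PSO variants use faster non-dominated sorting, but the dominance test itself is exactly this comparison.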
NASA Astrophysics Data System (ADS)
Karl, Florian; Zink, Roland
2016-04-01
The transformation of the energy sector towards decentralized renewable energies (RE) also requires storage systems to ensure security of supply. The new "Power to Mobility" (PtM) technology is one potential way to use electrical overproduction to produce methane, e.g. for gas vehicles. Motivated by this, the paper presents a methodology for GIS-based temporal modelling of the power grid to optimize the site planning process for the new PtM technology. The modelling approach combines the QuantumGIS software for the geographical and topological energy supply structure with OpenDSS for the network modelling. For a case study (work in progress) of the city of Straubing (Lower Bavaria), the parameters of the model are quantified. The presentation will discuss the methodology as well as first results with a view to application on a regional scale.
NASA Astrophysics Data System (ADS)
Wang, Dengfeng; Cai, Kefang
2018-04-01
This article presents a hybrid method combining a modified non-dominated sorting genetic algorithm (MNSGA-II) with grey relational analysis (GRA) to improve the static and dynamic performance of a body-in-white (BIW). First, an implicit parametric model of the BIW was built using SFE-CONCEPT software, and the validity of the implicit parametric model was verified by physical testing. Eight shape design variables were defined for the BIW beam structures based on the implicit parametric technology. Subsequently, MNSGA-II was used to determine the combination of design parameters that improves the bending stiffness, torsion stiffness, and low-order natural frequencies of the BIW without a considerable increase in mass, yielding a set of non-dominated solutions for the multi-objective optimization design. Finally, grey entropy theory and GRA were applied to rank all non-dominated solutions from best to worst and determine the best trade-off solution. A comparison between GRA and the technique for order of preference by similarity to ideal solution (TOPSIS) illustrated the reliability and rationality of GRA. Moreover, the effectiveness of the hybrid method was verified by the optimal results: the bending stiffness, torsion stiffness, first-order bending and first-order torsion natural frequencies were improved by 5.46%, 9.30%, 7.32% and 5.73%, respectively, with the mass of the BIW increasing by only 1.30%.
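The GRA ranking step above can be sketched in a few lines: normalize each objective, measure each solution's distance to the ideal sequence, convert distances to grey relational coefficients, and average them into a grade. The solution values, objective choices, and weights below are illustrative assumptions, not the BIW results:

```python
# Minimal grey relational analysis (GRA) ranking sketch for a set of
# non-dominated solutions. Each row: (bending stiffness, torsion
# stiffness, mass); stiffnesses are larger-the-better, mass is
# smaller-the-better. All numbers are made up for illustration.

RHO = 0.5  # distinguishing coefficient, conventionally 0.5

def normalize(col, larger_better):
    lo, hi = min(col), max(col)
    return [(v - lo) / (hi - lo) if larger_better else (hi - v) / (hi - lo)
            for v in col]

def gra_grades(rows, larger_better):
    cols = list(zip(*rows))
    norm = [normalize(c, lb) for c, lb in zip(cols, larger_better)]
    seqs = list(zip(*norm))                              # normalized solutions
    deltas = [[abs(1.0 - v) for v in s] for s in seqs]   # distance to ideal (=1)
    dmin = min(min(d) for d in deltas)
    dmax = max(max(d) for d in deltas)
    coeff = [[(dmin + RHO * dmax) / (d + RHO * dmax) for d in row]
             for row in deltas]
    return [sum(row) / len(row) for row in coeff]        # grey relational grade

solutions = [(12.0, 9.0, 310.0), (11.5, 9.8, 305.0), (12.4, 8.7, 330.0)]
grades = gra_grades(solutions, larger_better=(True, True, False))
best = max(range(len(grades)), key=grades.__getitem__)
```

Here the second solution wins because it sits closest to the ideal on two of the three criteria; weighting the criteria unevenly (as grey entropy theory does) only changes the final averaging step.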
2015-12-01
…actors. The study team canvassed commercial, academic, and research institutions for a possible surrogate solution and introduced wall-based avatars … (barrel, tent, or market stand); other technologies such as wall-projected avatars are not used (instead, role players are used). CACTF/MOUT must be … sites at each home station are fitted with mounts or docks to enable enhanced site technology (MILES-enabled wall-projected avatars, non-pyro effects)
Advances and trends in computational structural mechanics
NASA Technical Reports Server (NTRS)
Noor, A. K.
1986-01-01
Recent developments in computational structural mechanics are reviewed with reference to computational needs for future structures technology, advances in computational models for material behavior, discrete element technology, assessment and control of numerical simulations of structural response, hybrid analysis, and techniques for large-scale optimization. Research areas in computational structural mechanics which have high potential for meeting future technological needs are identified. These include prediction and analysis of the failure of structural components made of new materials, development of computational strategies and solution methodologies for large-scale structural calculations, and assessment of reliability and adaptive improvement of response predictions.
Parameters modelling of amaranth grain processing technology
NASA Astrophysics Data System (ADS)
Derkanosova, N. M.; Shelamova, S. A.; Ponomareva, I. N.; Shurshikova, G. V.; Vasilenko, O. A.
2018-03-01
The article presents a technique for calculating the composition of a multicomponent bakery mixture for the production of enriched products, taking into account the instability of nutrient content while ensuring the fulfilment of technological requirements and, at the same time, considering consumer preferences. The results of modelling and analysis of optimal solutions are illustrated by calculating the composition of a three-component mixture of wheat and rye flour with an enriching component (whole-hulled amaranth flour), applied to the technology of bread made from a mixture of rye and wheat flour on a liquid leaven.
Topology optimization of a gas-turbine engine part
NASA Astrophysics Data System (ADS)
Faskhutdinov, R. N.; Dubrovskaya, A. S.; Dongauzer, K. A.; Maksimov, P. V.; Trufanov, N. A.
2017-02-01
One of the key goals of the aerospace industry is reduction of gas turbine engine weight. This task consists in designing gas turbine engine components with reduced weight that retain their functional capabilities. Topology optimization of the part geometry leads to an efficient weight reduction, and the resulting complex geometry can be produced in a single operation with Selective Laser Melting technology; notably, the complexity of the structural features does not affect the product cost in this case. The paper considers a step-by-step procedure of topology optimization using the example of a gas turbine engine part.
Industrial Photogrammetry - Accepted Metrology Tool or Exotic Niche
NASA Astrophysics Data System (ADS)
Bösemann, Werner
2016-06-01
New production technologies like 3D printing and other additive manufacturing technologies have changed the industrial manufacturing process, a shift often referred to as the next industrial revolution, or Industry 4.0 for short. Such cyber-physical production systems combine the virtual and real worlds through digitization, model building, process simulation, and optimization. It is commonly understood that measurement technologies are the key to combining the real and virtual worlds (e.g. [Schmitt 2014]). This change from measurement as a quality-control tool to a fully integrated step in the production process has also changed the requirements for 3D metrology solutions. Terms like MAA (Measurement Assisted Assembly) illustrate the new position of metrology in the industrial production process. At the same time, it is obvious that these processes not only require more measurements but also systems that deliver the required information at high density in a short time. Here, optical solutions including photogrammetry for 3D measurements have big advantages over traditional mechanical CMMs. The paper describes the relevance of different photogrammetric solutions, including the state of the art, industry requirements, and application examples.
Optimizing Teleportation Cost in Distributed Quantum Circuits
NASA Astrophysics Data System (ADS)
Zomorodi-Moghadam, Mariam; Houshmand, Mahboobeh; Houshmand, Monireh
2018-03-01
The presented work provides a procedure for optimizing the communication cost of a distributed quantum circuit (DQC) in terms of the number of qubit teleportations. Because technology limitations do not allow large quantum computers to work as a single processing element, distributed quantum computation is an appropriate way to overcome this difficulty. Previous studies have applied ad-hoc solutions to distribute a quantum system for special cases and applications. In this study, a general approach is proposed to optimize the number of teleportations for a DQC consisting of two spatially separated, long-distance quantum subsystems. To this end, different configurations of locations for executing gates whose qubits lie in distinct subsystems are considered, and for each configuration the proposed algorithm is run to find the minimum number of required teleportations. Finally, the configuration which leads to the minimum number of teleportations is reported. The proposed method can be used as an automated procedure to find the configuration with the optimal communication cost for the DQC; this cost can serve as a basic measure of communication cost in future work on distributed quantum circuits.
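The configuration search described above can be illustrated with a toy enumeration: each "global" gate (one qubit per site) may execute at either site, and a teleportation is counted whenever a qubit is not where its gate runs. The circuit, qubit names, and partition below are hypothetical, and the state model (a qubit stays where it was last teleported) is a simplification of the paper's procedure:

```python
from itertools import product

# Hypothetical sketch: a circuit split over two sites "A" and "B".
# Each global 2-qubit gate can execute at either site; a qubit is
# teleported when it is not at the gate's execution site, and it then
# remains there until moved again.

home = {"q0": "A", "q1": "A", "q2": "B", "q3": "B"}   # initial partition
gates = [("q0", "q2"), ("q1", "q3"), ("q0", "q2")]    # global 2-qubit gates

def teleportations(sites):
    """Count qubit moves for one execution-site configuration (one site per gate)."""
    loc = dict(home)
    moves = 0
    for (a, b), site in zip(gates, sites):
        for q in (a, b):
            if loc[q] != site:
                loc[q] = site
                moves += 1
    return moves

# Exhaustively try every assignment of gates to sites.
best = min(product("AB", repeat=len(gates)), key=teleportations)
print(best, teleportations(best))
```

Note how the third gate costs nothing under the best configurations: after the first gate its qubits are already co-located, which is exactly the locality the exhaustive search exploits.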
Optimization of wireless Bluetooth sensor systems.
Lonnblad, J; Castano, J; Ekstrom, M; Linden, M; Backlund, Y
2004-01-01
Within this study, three different Bluetooth sensor systems, replacing cables for the transmission of biomedical sensor data, have been designed and evaluated. The three sensor architectures are built on 1-, 2- and 3-chip solutions; depending on the monitoring situation and signal character, different solutions are optimal. Essential parameters for all systems have been low physical weight and small size, resistance to interference, and interoperability with other technologies such as global or local networks, PCs, and mobile phones. Two different biomedical input signals, ECG and PPG (photoplethysmography), have been used to evaluate the three solutions. The study shows that it is possible to continuously transmit an analogue signal. At low sampling rates and for slowly varying parameters, as when monitoring heart rate with PPG, the 1-chip solution is the most suitable, offering low power consumption and thus a longer battery lifetime or a smaller battery, minimizing the weight of the sensor system. On the other hand, when a higher sampling rate is required, as for an ECG, the 3-chip architecture, with an FPGA or micro-controller, offers the best solution and performance. Our conclusion is that Bluetooth might be useful in replacing the cables of medical monitoring systems.
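The link between signal character and architecture choice comes down to required throughput. A back-of-envelope sketch, using typical textbook sampling rates and resolutions rather than the paper's measured figures:

```python
# Back-of-envelope sketch of why signal character drives the architecture
# choice: required throughput = sampling rate x resolution x channels.
# The rates and bit depths below are typical textbook values (assumed),
# not figures from the study.

def throughput_bps(sample_hz, bits, channels=1):
    """Raw data rate in bits per second for a sampled sensor signal."""
    return sample_hz * bits * channels

ppg = throughput_bps(sample_hz=100, bits=12)   # slowly varying pulse wave
ecg = throughput_bps(sample_hz=500, bits=12)   # needs higher bandwidth
print(ppg, ecg)   # 1200 vs 6000 bit/s
```

Both rates are far below Bluetooth's capacity, so the constraint is not the radio but the processing and buffering in front of it, which is where the 1-chip versus 3-chip trade-off appears.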
Yang, Gang; Zhao, Yaping; Zhang, Yongtai; Dang, Beilei; Liu, Ying; Feng, Nianping
2015-01-01
The aim of this investigation was to develop a procedure to improve the dissolution and bioavailability of silymarin (SM) by using bile salt-containing liposomes that were prepared by supercritical fluid technology (ie, solution-enhanced dispersion by supercritical fluids [SEDS]). The process for the preparation of SM-loaded liposomes containing a bile salt (SM-Lip-SEDS) was optimized using a central composite design of response surface methodology with the ratio of SM to phospholipids (w/w), flow rate of solution (mL/min), and pressure (MPa) as independent variables. Particle size, entrapment efficiency (EE), and drug loading (DL) were dependent variables for optimization of the process and formulation variables. The particle size, zeta potential, EE, and DL of the optimized SM-Lip-SEDS were 160.5 nm, −62.3 mV, 91.4%, and 4.73%, respectively. Two other methods to produce SM liposomes were compared to the SEDS method. The liposomes obtained by the SEDS method exhibited the highest EE and DL, smallest particle size, and best stability compared to liposomes produced by the thin-film dispersion and reversed-phase evaporation methods. Compared to the SM powder, SM-Lip-SEDS showed increased in vitro drug release. The in vivo AUC0−t of SM-Lip-SEDS was 4.8-fold higher than that of the SM powder. These results illustrate that liposomes containing a bile salt can be used to enhance the oral bioavailability of SM and that supercritical fluid technology is suitable for the preparation of liposomes. PMID:26543366
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gingerich, Daniel B; Bartholomew, Timothy V; Mauter, Meagan S
With the Environmental Protection Agency's recent Effluent Limitation Guidelines for Steam Electric Generators, power plants must install and operate new wastewater treatment technologies. Many plants are evaluating desalination technologies as possible compliance options. However, the desalination technologies under review that can reduce wastewater volume or treat to a zero-liquid-discharge standard impose a significant energy penalty on the plant. Waste heat, available from the exhaust gas or cooling water of coal-fired power plants, offers an opportunity to drive wastewater treatment using thermal desalination technologies. One such technology is forward osmosis (FO). Forward osmosis utilizes an osmotic pressure gradient to passively pull water from a saline or wastewater stream across a semi-permeable membrane and into a more concentrated draw solution. This diluted draw solution is then fed into a distillation column, where the addition of low-temperature waste heat drives the separation to produce a reconcentrated draw solution and treated water for internal plant reuse. The use of low-temperature waste heat decouples water treatment from electricity production and eliminates the link between reducing water pollution and increasing air emissions from auxiliary electricity generation. To evaluate the feasibility of waste-heat-driven FO, we first build a model of an FO system for flue gas desulfurization (FGD) wastewater treatment at coal-fired power plants. This model includes the FO membrane module, the distillation column for draw solution recovery, and waste heat recovery from the exhaust gas. We then add a costing model to account for capital and operating costs of the forward osmosis system, and use this techno-economic model to optimize waste-heat-driven FO for the treatment of FGD wastewater.
We apply this model to three case studies: the National Energy Technology Laboratory (NETL) 550 MW model coal-fired power plant without carbon capture and sequestration, the NETL 550 MW model coal-fired power plant with carbon capture and sequestration, and Plant Bowen in Euharlee, Georgia. For each case, we identify the design that minimizes the cost of wastewater treatment given the safely recoverable waste heat. We benchmark the cost-minimizing waste-heat forward osmosis solutions against two conventional options that rely on electricity: reverse osmosis and mechanical vapor recompression. Furthermore, we quantify the environmental damages from the emissions of carbon dioxide and criteria air pollutants for each treatment option. With this information we can assess the trade-offs in treatment costs, energy consumption, and air emissions among the treatment options.
SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Z; Folkert, M; Wang, J
2016-06-15
Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously, and a Pareto solution set with many feasible solutions will result from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. SMOLER uses the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules; the solution with the highest utility is chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize the model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: A total of 126 Pareto solution sets were generated by adjusting the predictive model parameters; each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER; the selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, and 80.00%, respectively. Conclusion: An optimal solution selection methodology for a multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.
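The selection step above, scoring each Pareto solution against pre-set rules and picking the highest utility, can be caricatured with a simple rule-based utility. The weights, thresholds, and solution values below are assumptions for illustration, not SMOLER's calibrated evidential reasoning parameters:

```python
# Illustrative stand-in for rule-based selection from a Pareto set:
# score each solution (sensitivity, specificity) with a utility built
# from a weighted average plus preset bonus rules, then pick the best.
# All weights, thresholds, and values are assumed, not the paper's.

RULES = [                           # (predicate on a solution, bonus weight)
    (lambda s: s[0] >= 0.8, 0.5),   # reward sensitivity of at least 0.8
    (lambda s: s[1] >= 0.7, 0.3),   # reward specificity of at least 0.7
]

def utility(sol):
    base = 0.6 * sol[0] + 0.4 * sol[1]              # weighted average
    bonus = sum(w for rule, w in RULES if rule(sol))
    return base + bonus

pareto = [(0.95, 0.55), (0.85, 0.75), (0.70, 0.90)]  # hypothetical Pareto set
best = max(pareto, key=utility)
```

The balanced solution wins here because it satisfies both rules; a solution that maximizes one objective alone collects fewer bonuses, which is the qualitative behavior an evidential reasoning aggregation is designed to produce.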
Research on NGN network control technology
NASA Astrophysics Data System (ADS)
Li, WenYao; Zhou, Fang; Wu, JianXue; Li, ZhiGuang
2004-04-01
Nowadays, NGN (Next Generation Network) is a hot topic for discussion and research in the IT sector. The core NGN technology is network control technology, and the key goal of NGN is to realize network convergence and evolution. With an overlay network model centered on Softswitch technology, convergence of the circuit-switched network and the IP network is realized; with an optical transport network centered on ASTN/ASON, convergence of the service layer (i.e., the IP layer) and optical transport is realized. Given the distributed character of NGN network control technology, and considering the combination of Softswitch and ASTN/ASON control technologies on an NGN platform, the question of whether IP should be the core NGN carrier platform attracts general attention; this is also an end-to-end QoS problem for NGN. Its resolution has significant practical implications for equipment development, network deployment, network design and optimization, and especially for the smooth evolution of present networks toward the NGN. This is why this paper takes up NGN network control technology as its research topic. The paper introduces the basics of NGN network control technology, proposes an NGN network control reference model, and describes a realizable NGN network structure. On this basis, NGN network control technology is discussed from the viewpoint of function realization, and its working mechanism is analyzed.
Human body motion tracking based on quantum-inspired immune cloning algorithm
NASA Astrophysics Data System (ADS)
Han, Hong; Yue, Lichuan; Jiao, Licheng; Wu, Xing
2009-10-01
In a static monocular camera system, recovering an accurate 3D human body posture remains a great challenge for computer vision. This paper presents human posture recognition from video sequences using the Quantum-Inspired Immune Cloning Algorithm (QICA). The algorithm comprises three parts. First, using prior knowledge of human beings, the key joint points are detected automatically from the human contours and from the skeletons thinned from those contours. Second, owing to the complexity of human movement, a forecasting mechanism for occluded joint points is introduced to obtain optimal 2D key joint points of the human body. Finally, the pose estimate is recovered by optimizing the match between the 2D projection of the 3D human key joint points and the 2D detected key joint points using QICA. This recovers the movement of the human body well, because the algorithm can escape local optima and approach the globally optimal solution.
A multi-criteria evaluation of high efficiency clothes dryers: Gas and electric
DOE Office of Scientific and Technical Information (OSTI.GOV)
deMonsabert, S.; LaFrance, P.M.
1999-11-01
The results of an in-depth analysis addressing possible solutions to save energy and mitigate the environmental damage caused by clothes dryers are presented in this paper. The analysis includes an environmental evaluation of gas and electric dryers. Various dryer technologies, such as microwave, heat pump, heat recovery, and other designs, are analyzed. Highly efficient clothes washers with increased moisture extraction, which may reduce dryer impacts, are also included in the analysis. The analysis includes the development of a multi-objective decision model that is solved for the short and long term to provide optimal courses of action. The results of the analysis revealed that fuel switching from electricity to natural gas was the optimal short-term solution; this measure could save a projected 2.5 MMT of carbon emissions annually by the year 2010. The optimal long-term alternative was not clear: the results showed that the option to research and develop a new high-efficiency dryer was marginally better than fuel switching.
NASA Technical Reports Server (NTRS)
Rasmussen, John
1990-01-01
Structural optimization has attracted attention since the days of Galileo. Olhoff and Taylor have produced an excellent overview of the classical research within this field. However, interest in structural optimization has increased greatly during the last decade due to the advent of reliable general numerical analysis methods and the computer power necessary to use them efficiently. This has created the possibility of developing general numerical systems for shape optimization. Several authors, e.g., Esping; Braibant and Fleury; Bennet and Botkin; Botkin, Yang, and Bennet; and Stanton, have published practical and successful applications of general optimization systems. Ding and Homlein have produced extensive overviews of available systems. Furthermore, a number of commercial optimization systems based on well-established finite element codes have been introduced; systems like ANSYS, IDEAS, OASIS, and NISAOPT are widely known examples. In parallel with this development, computer-aided design (CAD) technology has gained a large influence on the design process in mechanical engineering. CAD technology has already lived through a rapid development driven by the drastically growing capabilities of digital computers. However, the systems of today are still considered only the first generation of a long line of computer-integrated manufacturing (CIM) systems. The systems to come will offer an integrated environment for design, analysis, and fabrication of products of almost any character. Thus, the CAD system can be regarded as a database for geometrical information equipped with a number of tools to help the user in the design process. Among these tools are facilities for structural analysis and optimization, as well as standard CAD features like drawing, modeling, and visualization tools.
The state of the art of structural optimization is that a large number of mathematical and mechanical techniques are available for solving individual problems. By implementing collections of the available techniques in general software systems, operational environments for structural optimization have been created. The forthcoming years must bring solutions to the problem of integrating such systems into more general design environments. The result of this work should be CAD systems for rational design in which structural optimization is one important design tool among many others.
Optical solutions for unbundled access network
NASA Astrophysics Data System (ADS)
Bacîş Vasile, Irina Bristena
2015-02-01
The unbundling technique requires solutions that guarantee the economic and technical performance imposed by the nature of the services to be offered. One possible solution is the optical one, and choosing it is justified for the following reasons: it optimizes the use of the access network, which is the most expensive part of a network (about 50% of the total investment in telecommunications networks) while also being the least used (telephone traffic on the lines has a low cost); it increases the distance between the master station/central office and the subscriber terminal; and the development of the services offered to subscribers is conditioned by the subscriber network. Broadband services require support for the introduction of high-speed transport. Proper identification of the factors that must be satisfied, and a comprehensive financial evaluation of all resources involved, both those being purchased and extensions, are the main conditions leading to a correct choice. As there is no single optimal technology for all development scenarios that can take into account all access systems, a successful implementation is always achieved through individual, particularized scenarios. The method used today for selecting an optimal solution is based on statistics and analysis of the various solutions already implemented, and on the experience already gained; the main and most unbiased evaluation criterion is the ratio between the cost of the investment and the quality of service, while serving as large a number of customers as possible.
SAIL--stereo-array isotope labeling.
Kainosho, Masatsune; Güntert, Peter
2009-11-01
Optimal stereospecific and regiospecific labeling of proteins with stable isotopes enhances the nuclear magnetic resonance (NMR) method for the determination of the three-dimensional protein structures in solution. Stereo-array isotope labeling (SAIL) offers sharpened lines, spectral simplification without loss of information and the ability to rapidly collect and automatically evaluate the structural restraints required to solve a high-quality solution structure for proteins up to twice as large as before. This review gives an overview of stable isotope labeling methods for NMR spectroscopy with proteins and provides an in-depth treatment of the SAIL technology.
Highly reflective polymeric substrates functionalized utilizing atomic layer deposition
NASA Astrophysics Data System (ADS)
Zuzuarregui, Ana; Coto, Borja; Rodríguez, Jorge; Gregorczyk, Keith E.; Ruiz de Gopegui, Unai; Barriga, Javier; Knez, Mato
2015-08-01
Reflective surfaces are one of the key elements of solar plants, concentrating energy in the receivers of solar thermal electricity plants. Polymeric substrates are being considered as an alternative to the widely used glass mirrors due to their intrinsic and processing advantages, but optimizing both the reflectance and the physical stability of polymeric mirrors still poses technological difficulties. In this work, polymeric surfaces have been functionalized with ceramic thin films by atomic layer deposition. The characterization and optimization of the process parameters resulted in surfaces with a reflectance of 97%, turning polymers into a real alternative to glass substrates. The solution presented here can easily be applied in further technological areas where seemingly incompatible combinations of polymeric substrates and ceramic coatings occur.
Xiao, Dong; Peng, Su-Ping; Wang, En-Yuan
2015-01-01
Microbially enhanced coalbed methane technology must be used to increase the methane content in mining and generate secondary biogenic gas. In this technology, the metabolic processes of methanogenic consortia are the basis for the production of biomethane from some of the organic compounds in coal. Thus, culture nutrition plays an important role in remediating the nutritional deficiency of a coal seam. To enhance the methane production rates for microorganism consortia, different types of nutrition solutions were examined in this study. Emulsion nutrition solutions containing a novel nutritional supplement, called dystrophy optional modification latex, increased the methane yield for methanogenic consortia. This new nutritional supplement can help methanogenic consortia form an enhanced anaerobic environment, optimize the microbial balance in the consortia, and improve the methane biosynthesis rate. PMID:25884952
NASA Technical Reports Server (NTRS)
Longwell, J. P.; Grobman, J.
1978-01-01
Given that liquid fuels derived from petroleum are not expected to be available over the long term, an investigation was conducted to assess the suitability of jet fuels made from oil shale and coal and to develop a data base allowing optimization of future fuel characteristics, taking into account the energy efficiency of manufacture and the trade-offs in aircraft and engine design. The properties of future aviation fuels are examined and proposed solutions to the problems of alternative fuels are discussed. Attention is given to the refining of jet fuel to current specifications, the control of fuel thermal stability, and combustor technology for the use of broad-specification fuels. The first solution is to continue to develop the necessary refinery technology to produce specification jet fuels regardless of the crude source.
NASA Astrophysics Data System (ADS)
Guo, Bing; Zhang, Yu; Documet, Jorge; Liu, Brent; Lee, Jasper; Shrestha, Rasu; Wang, Kevin; Huang, H. K.
2007-03-01
As clinical imaging and informatics systems continue to integrate the healthcare enterprise, the need to prevent patient mis-identification and unauthorized access to clinical data becomes more apparent, especially under the Health Insurance Portability and Accountability Act (HIPAA) mandate. Last year, we presented a system to track and verify patients and staff within a clinical environment. This year, we further address the biometric verification component in order to determine which biometric system is the optimal solution for given applications in the complex clinical environment. We installed two biometric identification systems, fingerprint and facial recognition, at an outpatient imaging facility, Healthcare Consultation Center II (HCCII), evaluated each solution, and documented the advantages and pitfalls of each biometric technology in this clinical environment.
NASA Astrophysics Data System (ADS)
Rousis, Damon A.
The expected growth of civil aviation over the next twenty years places significant emphasis on revolutionary technology development aimed at mitigating the environmental impact of commercial aircraft. As the number of technology alternatives grows along with model complexity, current methods for Pareto finding and multiobjective optimization quickly become computationally infeasible. Coupled with the large uncertainty in the early stages of design, optimal designs are sought while avoiding the computational burden of excessive function calls when a single design change or technology assumption could alter the results. This motivates the need for a robust and efficient evaluation methodology for quantitative assessment of competing concepts. This research presents a novel approach that combines Bayesian adaptive sampling with surrogate-based optimization to efficiently place designs near Pareto frontier intersections of competing concepts. Efficiency is increased over sequential multiobjective optimization by focusing computational resources specifically on the location in the design space where optimality shifts between concepts. At the intersection of Pareto frontiers, the selection decisions are most sensitive to preferences placed on the objectives, and small perturbations can lead to vastly different final designs. These concepts are incorporated into an evaluation methodology that ultimately reduces the number of failed cases, infeasible designs, and Pareto-dominated solutions across all concepts. A set of algebraic sample problems along with a truss design problem are presented as canonical examples for the proposed approach. The methodology is applied to the design of ultra-high bypass ratio turbofans to guide NASA's technology development efforts for future aircraft. Geared-drive and variable geometry bypass nozzle concepts are explored as enablers for increased bypass ratio and potential alternatives to traditional configurations.
The method is shown to improve sampling efficiency and provide clusters of feasible designs that motivate a shift towards revolutionary technologies that reduce fuel burn, emissions, and noise on future aircraft.
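The Pareto-dominance relation on which these competing-concept comparisons rest can be sketched in a few lines. This is a generic illustration, not the dissertation's algorithm, and the two-objective design points below are invented:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Toy two-objective designs (e.g. flight time vs. propellant mass):
designs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
front = pareto_front(designs)  # (3.0, 4.0) is dominated by (2.0, 3.0)
```

Near an intersection of two concepts' fronts, small changes in objective preferences flip which concept's points survive this filter, which is why the methodology concentrates samples there.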
Gouge, Brian; Dowlatabadi, Hadi; Ries, Francis J
2013-04-16
In contrast to capital control strategies (i.e., investments in new technology), the potential of operational control strategies (e.g., vehicle scheduling optimization) to reduce the health and climate impacts of the emissions from public transportation bus fleets has not been widely considered. This case study demonstrates that heterogeneity in the emission levels of different bus technologies and the exposure potential of bus routes can be exploited through optimization (e.g., of how vehicles are assigned to routes) to minimize these impacts as well as operating costs. The magnitude of the benefits of the optimization depends on the specific transit system and region. Health impacts were found to be particularly sensitive to different vehicle assignments and ranged from worst- to best-case assignment by more than a factor of 2, suggesting there is significant potential to reduce health impacts. Trade-offs between climate, health, and cost objectives were also found. Transit agencies that do not consider these objectives in an integrated framework and, for example, optimize for costs and/or climate impacts alone risk inadvertently increasing health impacts by as much as 49%. Cost-benefit analysis was used to evaluate trade-offs between objectives, but large uncertainties make identifying an optimal solution challenging.
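The vehicle-to-route assignment idea can be illustrated with a toy exhaustive search. The emission rates and exposure weights below are invented for illustration, and a real transit problem would use a proper assignment solver rather than brute force:

```python
from itertools import permutations

# Hypothetical per-km emission rates for three buses (g pollutant/km)
emission = [0.8, 0.3, 0.5]
# Hypothetical exposure weights for three routes (population intake per g emitted)
exposure = [2.0, 5.0, 1.0]

def best_assignment(emission, exposure):
    """Exhaustively assign one bus per route, minimizing total health impact
    (sum over routes of the assigned bus's emission rate times route exposure)."""
    best = min(permutations(range(len(emission))),
               key=lambda p: sum(emission[b] * exposure[r] for r, b in enumerate(p)))
    return list(best)

# The cleanest bus ends up on the highest-exposure route:
print(best_assignment(emission, exposure))  # [2, 1, 0]
```

With these numbers, the worst assignment costs over 1.6 times the best one, mirroring the study's finding that assignment choices alone can change health impacts by large factors.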
NASA Astrophysics Data System (ADS)
Lu, Yuan-Yuan; Wang, Ji-Bo; Ji, Ping; He, Hongyu
2017-09-01
In this article, single-machine group scheduling with learning effects and convex resource allocation is studied. The goal is to find the optimal job schedule, the optimal group schedule, and the resource allocations of jobs and groups. For the problem of minimizing the makespan subject to limited resource availability, it is proved that the problem can be solved in polynomial time when the setup times of the groups are independent. For general group setup times, a heuristic algorithm and a branch-and-bound algorithm are proposed. Computational experiments show that the heuristic algorithm obtains near-optimal solutions with good accuracy.
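One common formulation of a learning effect (not necessarily the exact model of this article) makes the job in sequence position r take p·r^a time units with a learning index a < 0. A minimal sketch, with invented job times and exponent:

```python
def makespan(processing_times, a=-0.3):
    """Makespan on one machine under a position-based learning effect:
    the job in position r (1-indexed) takes p * r**a time units, a < 0."""
    return sum(p * (r + 1) ** a for r, p in enumerate(processing_times))

# Illustrative job times; the exponent a is an assumed learning index.
jobs = [4.0, 2.0, 6.0]
# By the rearrangement inequality, sequencing jobs in non-decreasing order
# (shortest processing time first) minimizes makespan under this model:
# the largest position coefficients r**a occur early, so they should
# multiply the smallest processing times.
seq = sorted(jobs)
assert makespan(seq) <= makespan(jobs)
```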
SENER molten salt tower technology. Ouarzazate NOOR III case
NASA Astrophysics Data System (ADS)
Relloso, Sergio; Gutiérrez, Yolanda
2017-06-01
The NOOR III 150 MWe project is the evolution of Gemasolar (19.9 MWe) to large-scale molten salt tower plants. With more than 5 years of operational experience, lessons learned from Gemasolar have been the starting point for the optimization of this technology, considered the leading candidate for cost reduction in CSP. In addition, prototypes of key plant components (heliostat and receiver) were manufactured and thoroughly tested before project launch in order to prove the new engineering solutions adopted. The SENER proprietary technology of NOOR III will be applied in the molten salt tower plants that follow in other countries, such as South Africa, Chile and Australia.
Liu, Xiaoqian; Tong, Yan; Wang, Jinyu; Wang, Ruizhen; Zhang, Yanxia; Wang, Zhimin
2011-11-01
Fufang Kushen injection was selected as the model drug in order to optimize its alcohol-purification process, characterize the particle sedimentation process, and investigate the feasibility of applying process analytical technology (PAT) to traditional Chinese medicine (TCM) manufacturing. Total alkaloids (calculated as matrine, oxymatrine, sophoridine and oxysophoridine) and macrozamin were selected as quality evaluation markers for optimizing the alcohol purification of Fufang Kushen injection. Process parameters of the particulates formed during alcohol purification, such as their number, density and sedimentation velocity, were also determined to define the sedimentation time and to better understand the process. In the optimized process, alcohol is added to the concentrated extract (drug material) in two stages: the solution is first brought to 60% alcohol and left to settle for 36 hours, filtered, and then brought to 80%-90% alcohol and left to settle for 6 hours. The content of total alkaloids decreased slightly during the deposition process. The average settling times of particles with diameters of 10 and 25 μm were 157.7 and 25.2 h in the first alcohol-purification step, and 84.2 and 13.5 h in the second, respectively. The optimized alcohol-purification process retains the marker components better than the initial process and is faster and more economical. The manufacturing quality of a TCM injection can be controlled through the process itself; a PAT scheme must be designed on the basis of a thorough understanding of the TCM production process.
NASA Astrophysics Data System (ADS)
Migneco, E.; Aiello, S.; Amato, E.; Ambriola, M.; Ameli, F.; Andronico, G.; Anghinolfi, M.; Battaglieri, M.; Bellotti, R.; Bersani, A.; Boldrin, A.; Bonori, M.; Cafagna, F.; Capone, A.; Caponnetto, L.; Chiarusi, T.; Circella, M.; Cocimano, R.; Coniglione, R.; Cordelli, M.; Costa, M.; Cuneo, S.; D'Amico, A.; D'Amico, V.; De Marzo, C.; De Vita, R.; Distefano, C.; Gabrielli, A.; Gandolfi, E.; Grimaldi, A.; Habel, R.; Italiano, A.; Leonardi, M.; Lo Nigro, L.; Lo Presti, D.; Margiotta, A.; Martini, A.; Masetti, M.; Masullo, R.; Montaruli, T.; Mosetti, R.; Musumeci, M.; Nicolau, C. A.; Occhipinti, R.; Papaleo, R.; Petta, C.; Piattelli, P.; Raia, G.; Randazzo, N.; Reito, S.; Ricco, G.; Riccobene, G.; Ripani, M.; Romita, M.; Rovelli, A.; Ruppi, M.; Russo, G. V.; Russo, M.; Sapienza, P.; Schuller, J. P.; Sedita, M.; Sokalski, I.; Spurio, M.; Taiuti, M.; Trasatti, L.; Ursella, L.; Valente, V.; Vicini, P.; Zanarini, G.
2004-11-01
The activities of the NEMO Collaboration towards the realisation of a km3 Cherenkov neutrino detector are described. Long-term exploration of a 3500 m deep site close to the Sicilian coast has shown that it is optimal for the installation of the detector. A complete feasibility study, covering all components of the detector as well as its deployment, has been carried out, demonstrating that technological solutions exist for the realisation of an underwater km3 detector. The realisation of a technological demonstrator (the NEMO Phase 1 project) is under way.
Precedent approach to the formation of programs for cyclic objects control
NASA Astrophysics Data System (ADS)
Kulakov, S. M.; Trofimov, V. B.; Dobrynin, A. S.; Taraborina, E. N.
2018-05-01
The idea and procedure for formalizing the precedent method of forming complex control solutions (complex control programs) are discussed with respect to technological or organizational objects whose operation is organized cyclically. A typical functional structure of a precedent-based control system for a complex technological unit is developed, including a subsystem for retrospective optimization of the control programs actually implemented. As an example, the problem of constructing shift planograms for the operation of a heading-and-winning machine unit on the basis of precedents is considered.
Addressing the minimum fleet problem in on-demand urban mobility.
Vazifeh, M M; Santi, P; Resta, G; Strogatz, S H; Ratti, C
2018-05-01
Information and communication technologies have opened the way to new solutions for urban mobility that provide better ways to match individuals with on-demand vehicles. However, a fundamental unsolved problem is how best to size and operate a fleet of vehicles, given a certain demand for personal mobility. Previous studies [1-5] either do not provide a scalable solution or require changes in human attitudes towards mobility. Here we provide a network-based solution to the 'minimum fleet problem': given a collection of trips (specified by origin, destination and start time), determine the minimum number of vehicles needed to serve all the trips without incurring any delay to the passengers. By introducing the notion of a 'vehicle-sharing network', we present an optimal computationally efficient solution to the problem, as well as a nearly optimal solution amenable to real-time implementation. We test both solutions on a dataset of 150 million taxi trips taken in the city of New York over one year [6]. The real-time implementation of the method with near-optimal service levels allows a 30 per cent reduction in fleet size compared to current taxi operation. Although constraints on driver availability and the existence of abnormal trip demands may lead to a relatively larger optimal fleet size than that predicted here, the fleet size remains robust for a wide range of variations in historical trip demand. These predicted reductions in fleet size follow directly from a reorganization of taxi dispatching that could be implemented with a simple urban app; they do not assume ride sharing [7-9], nor require changes to regulations, business models, or human attitudes towards mobility to become effective. Our results could become even more relevant in the years ahead as fleets of networked, self-driving cars become commonplace [10-14].
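A vehicle-sharing network of this kind reduces the minimum fleet problem to a minimum path cover on a directed acyclic graph, solvable as a bipartite matching (fleet size = trips − maximum matching). A minimal sketch with invented trips; the paper's actual algorithm and data are far larger:

```python
def min_fleet(trips, shareable):
    """Minimum vehicles = trips - maximum matching in the bipartite graph
    linking trip i to trip j when one vehicle can serve i and then j
    (Kuhn's augmenting-path algorithm)."""
    n = len(trips)
    match_to = [-1] * n  # match_to[j] = trip whose vehicle continues into j

    def try_assign(i, seen):
        for j in range(n):
            if shareable(trips[i], trips[j]) and j not in seen:
                seen.add(j)
                if match_to[j] == -1 or try_assign(match_to[j], seen):
                    match_to[j] = i
                    return True
        return False

    matched = sum(try_assign(i, set()) for i in range(n))
    return n - matched

# Hypothetical trips: (origin, destination, start_minute, end_minute).
trips = [("A", "B", 0, 10), ("B", "C", 15, 25), ("C", "A", 30, 40), ("A", "C", 5, 20)]
# One vehicle can chain trip t1 into t2 if t1 ends where t2 starts, in time.
can_chain = lambda t1, t2: t1[1] == t2[0] and t1[3] <= t2[2]
# Two vehicles suffice: e.g. one serves A->B then B->C, another A->C then C->A.
print(min_fleet(trips, can_chain))  # 2
```

Positive-duration trips can never chain into themselves, so the shareability graph is acyclic and the matching bound is exact.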
Gomez-Pulido, Juan A; Cerrada-Barrios, Jose L; Trinidad-Amado, Sebastian; Lanza-Gutierrez, Jose M; Fernandez-Diaz, Ramon A; Crawford, Broderick; Soto, Ricardo
2016-08-31
Metaheuristics are widely used to solve large combinatorial optimization problems in bioinformatics because of the huge set of possible solutions. Two representative problems are gene selection for cancer classification and biclustering of gene expression data. In most cases, these metaheuristics, like other non-linear techniques, apply a fitness function to each possible solution in a size-limited population, and that step involves higher latencies than other parts of the algorithms, which is why the execution time of the application depends mainly on the execution time of the fitness function. In addition, fitness functions are usually formulated in floating-point arithmetic. Careful parallelization of these functions using reconfigurable hardware technology can therefore accelerate the computation, especially when they are applied in parallel to several solutions of the population. A fine-grained parallelization of two floating-point fitness functions of different complexities and features, used in biclustering of gene expression data and gene selection for cancer classification, achieved higher speedups and lower-power computation than conventional microprocessors. The results show better performance, in terms of computing time and power consumption, using reconfigurable hardware technology instead of conventional microprocessors, not only because of the parallelization of the arithmetic operations but also thanks to the concurrent fitness evaluation of several individuals of the population in the metaheuristic. This is a good basis for building accelerated and low-energy solutions for intensive computing scenarios.
Fan, Dong-Dong; Kuang, Yan-Hui; Dong, Li-Hua; Ye, Xiao; Chen, Liang-Mian; Zhang, Dong; Ma, Zhen-Shan; Wang, Jin-Yu; Zhu, Jing-Jing; Wang, Zhi-Min; Wang, De-Qin; Li, Chu-Yuan
2017-04-01
To optimize the purification process of gynostemma pentaphyllum saponins (GPS) based on "adjoint marker" online control technology, with GPS as the testing index, UPLC-QTOF-MS technology was used for qualitative analysis. The "adjoint marker" online control results showed that the end point of sample loading was reached when the UV absorbance of the effluent equaled half that of the loaded sample solution, and that the absorbance was essentially stable at this end point. In the UPLC-QTOF-MS qualitative analysis, 16 saponins were identified in GPS, including 13 known gynostemma saponins and 3 new saponins. The optimized method proved simple, scientific, reasonable, and easy to use for online determination and real-time recording, and can be applied to mass production and the automation of production. The qualitative analysis indicated that the "adjoint marker" online control technology retains the main efficacious components of the medicinal materials well and provides analysis tools for process control and quality traceability. Copyright© by the Chinese Pharmaceutical Association.
Optimization of controlled processes in combined-cycle plant (new developments and researches)
NASA Astrophysics Data System (ADS)
Tverskoy, Yu S.; Muravev, I. K.
2017-11-01
All modern complex technical systems, including the power units of TPPs and nuclear power plants, operate within the system-forming structure of a multifunctional APCS. Advances in the mathematical support of modern APCSs make it possible to extend automation to the solution of complex optimization problems of equipment heat and mass exchange in real time. The difficulty of efficiently managing a binary power unit stems from the need to solve at least three problems jointly. The first problem relates to the physics of combined-cycle technologies. The second is determined by the sensitivity of CCGT operation to changes in regime and climatic factors. The third concerns a precise description of the vector of controlled coordinates of a complex technological object. To obtain a joint solution of this complex of interconnected problems, the methodology of generalized thermodynamic analysis, methods of automatic control theory, and mathematical modeling are used. The results of the new developments and studies presented in this report improve the principles of process control and the structural synthesis of automatic control systems for power units with combined-cycle plants, providing attainable technical and economic efficiency and operational reliability of equipment.
Cost and surface optimization of a remote photovoltaic system for two kinds of panels' technologies
NASA Astrophysics Data System (ADS)
Avril, S.; Arnaud, G.; Colin, H.; Montignac, F.; Mansilla, C.; Vinard, M.
2011-10-01
Stand-alone photovoltaic (PV) systems are one of the promising electrification solutions for covering the demand of remote consumers, especially when coupled with a storage solution that would both increase the productivity of power plants and reduce the areas dedicated to energy production. This short communication presents a multi-objective design of a remote PV system coupled to battery and hydrogen storage systems, simultaneously minimizing the total levelized cost and the occupied area while fulfilling a constraint of consumer satisfaction. For this task, a multi-objective code based on particle swarm optimization has been used to find the best combination of different energy devices. Both short- and mid-term scenarios, based on forecast assumptions, have been investigated. An application for the site of La Nouvelle on the French overseas island of La Réunion is proposed. It points to a strong cost advantage of Heterojunction with Intrinsic Thin layer (HIT) cells over crystalline silicon (c-Si) cells in the short term. However, the discrimination between these two PV cell technologies is less obvious for the mid term: a strong constraint on the occupied area will favor HIT, whereas a strong constraint on cost will favor c-Si.
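The study uses a multi-objective particle swarm variant; the underlying velocity-and-position update rule can be sketched in single-objective form. This is a generic textbook sketch with standard but arbitrary parameter values, not the authors' code:

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal single-objective particle swarm optimization sketch.
    Each particle is pulled toward its personal best and the global best."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

# Toy cost: the sphere function, whose minimum is at the origin.
best = pso(lambda x: sum(v * v for v in x), dim=2, bounds=(-5.0, 5.0))
```

In the sizing problem, each particle would instead encode a candidate mix of PV, battery, and hydrogen capacities, with cost and occupied area as the two objectives.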
Novel technical solutions for wireless ECG transmission & analysis in the age of the internet cloud.
Al-Zaiti, Salah S; Shusterman, Vladimir; Carey, Mary G
2013-01-01
Current guidelines recommend early reperfusion therapy for ST-elevation myocardial infarction (STEMI) within 90 min of first medical encounter. Telecardiology entails the use of advanced communication technologies to transmit the prehospital 12-lead electrocardiogram (ECG) to offsite cardiologists for early triage to the cath lab, which has been shown to dramatically reduce door-to-balloon time and total mortality. However, hospitals often find adopting ECG transmission technologies very challenging. The current review identifies seven major technical challenges of prehospital ECG transmission: paramedic inconvenience and transport delay; signal noise and interpretation errors; equipment malfunction and transmission failure; reliability of mobile phone networks; lack of compliance with the standards of digital ECG formats; poor integration with electronic medical records; and costly prerequisite hardware and software installation. Current and potential solutions to each of these technical challenges are discussed in detail and include automated ECG transmission protocols, annotatable waveform-based ECGs, optimal routing solutions, and the use of cloud computing systems rather than vendor-specific processing stations. Nevertheless, strategies to monitor transmission effectiveness and patient outcomes are essential to sustain the initial gains of implementing ECG transmission technologies. © 2013.
Genetic Algorithm Optimizes Q-LAW Control Parameters
NASA Technical Reports Server (NTRS)
Lee, Seungwon; von Allmen, Paul; Petropoulos, Anastassios; Terrile, Richard
2008-01-01
A document discusses a multi-objective genetic algorithm designed to optimize Lyapunov feedback control law (Q-law) parameters in order to efficiently find Pareto-optimal solutions for low-thrust trajectories for electric propulsion systems. These would be propellant-optimal solutions for a given flight time, or flight-time-optimal solutions for a given propellant requirement. The approximate solutions are used as good initial solutions for high-fidelity optimization tools. When good initial solutions are used, the high-fidelity optimization tools quickly converge to a locally optimal solution near the initial solution. Q-law control parameters are represented as real-valued genes in the genetic algorithm. The performance of the Q-law control parameters is evaluated in the multi-objective space (flight time vs. propellant mass) and sorted by the non-dominated sorting method, which assigns a better fitness value to solutions that are dominated by fewer other solutions. With this ranking, the genetic algorithm encourages the solutions with higher fitness values to participate in the reproduction process, improving the solutions over the course of evolution. The population of solutions converges to the Pareto front that is permitted within the Q-law control parameter space.
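The domination-count fitness described here (fewer dominators means better fitness) can be sketched directly; the objective vectors below are invented stand-ins for (flight time, propellant mass) evaluations of candidate Q-law parameter sets:

```python
def dominates(a, b):
    """a Pareto-dominates b when a is no worse in every objective (minimization)
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def domination_counts(population):
    """Rank each objective vector by how many others dominate it;
    rank 0 marks the non-dominated (Pareto) set favored for reproduction."""
    return [sum(dominates(q, p) for q in population) for p in population]

# Hypothetical (flight time, propellant mass) for four candidate parameter sets:
pop = [(10, 4), (12, 3), (11, 5), (13, 6)]
print(domination_counts(pop))  # [0, 0, 1, 3]
```

Selection pressure toward low counts drives the population onto the Pareto front over successive generations.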
Song, Shang; Roy, Shuvo
2018-01-01
Macroencapsulation technology has been an attractive topic in the treatment of Type 1 diabetes due to the mechanical stability, versatility, and retrievability of the macrocapsule design. Macrocapsules can be categorized into extravascular and intravascular devices, in which solute transport relies on diffusion or convection, respectively. Failure of macroencapsulation strategies can be due to the limited regenerative capacity of the encased insulin-producing cells, sub-optimal performance of encapsulation biomaterials, insufficient immunoisolation, excessive blood thrombosis for vascular perfusion devices, and inadequate modes of mass transfer to support cell viability and function. However, significant technical advancements have been achieved in macroencapsulation technology, namely reducing the diffusion distance for oxygen and nutrients, using pro-angiogenic factors to increase vascularization for islet engraftment, and optimizing membrane permeability and selectivity to prevent immune attack by the host. This review presents an overview of existing macroencapsulation devices and discusses advances based on tissue-engineering approaches that will stimulate future research and development of macroencapsulation technology. PMID:26615050
A novel harmony search-K means hybrid algorithm for clustering gene expression data
Nazeer, KA Abdul; Sebastian, MP; Kumar, SD Madhu
2013-01-01
Recent progress in bioinformatics research has led to the accumulation of huge quantities of biological data at various data sources. DNA microarray technology makes it possible to simultaneously analyze a large number of genes across different samples. Clustering of microarray data can reveal hidden gene expression patterns in large quantities of expression data, which in turn offers tremendous possibilities in functional genomics, comparative genomics, disease diagnosis and drug development. The k-means clustering algorithm is widely used for many practical applications, but the original k-means algorithm has several drawbacks: it is computationally expensive and generates locally optimal solutions that depend on the random choice of the initial centroids. Several methods have been proposed in the literature for improving the performance of the k-means algorithm. A meta-heuristic optimization algorithm named harmony search helps find near-global optimal solutions by searching the entire solution space. The low clustering accuracy of existing algorithms limits their use in many crucial applications of the life sciences. In this paper we propose a novel Harmony Search-K means Hybrid (HSKH) algorithm for clustering gene expression data. Experimental results show that the proposed algorithm produces clusters with better accuracy in comparison with existing algorithms. PMID:23390351
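The sensitivity to initial centroids that motivates hybridizing k-means with harmony search is visible in plain Lloyd's algorithm. The sketch below is standard k-means, not the HSKH algorithm, and the two-blob data are invented:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means sketch: random initial centroids, Lloyd iterations.
    Different seeds can converge to different locally optimal clusterings."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # Move each centroid to the mean of its cluster (keep it if empty).
        centroids = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids

# Two well-separated toy blobs, standing in for expression profiles:
data = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
centers = kmeans(data, k=2)
```

A harmony-search wrapper would treat each candidate set of centroids as a "harmony" and refine the memory of good candidates instead of trusting one random initialization.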
Beating the limits with initial correlations
NASA Astrophysics Data System (ADS)
Basilewitsch, Daniel; Schmidt, Rebecca; Sugny, Dominique; Maniscalco, Sabrina; Koch, Christiane P.
2017-11-01
Fast and reliable reset of a qubit is a key prerequisite for any quantum technology. For real-world open quantum systems undergoing non-Markovian dynamics, reset implies not only purification, but in particular erasure of initial correlations between qubit and environment. Here, we derive optimal reset protocols using a combination of geometric and numerical control theory. For factorizing initial states, we find a lower limit for the entropy reduction of the qubit as well as a speed limit. The time-optimal solution is determined by the maximum coupling strength. Initial correlations, remarkably, allow for faster reset and smaller errors. Entanglement is not necessary.
On-board autonomous attitude maneuver planning for planetary spacecraft using genetic algorithms
NASA Technical Reports Server (NTRS)
Kornfeld, Richard P.
2003-01-01
A key enabling technology that leads to greater spacecraft autonomy is the capability to autonomously and optimally slew the spacecraft from and to different attitudes while operating under a number of celestial and dynamic constraints. The task of finding an attitude trajectory that meets all the constraints is a formidable one, in particular for orbiting or fly-by spacecraft, where the constraints and the initial and final conditions are time-varying. This paper presents an approach for attitude path planning that makes full use of a priori constraint knowledge and is computationally tractable enough to be executed on board a spacecraft. The approach is based on incorporating the constraints into a cost function and using a Genetic Algorithm to iteratively search for and optimize the solution. This results in a directed random search that explores a large part of the solution space while maintaining the knowledge of good solutions from iteration to iteration. A solution obtained this way may be used 'as is' or to initialize additional deterministic optimization algorithms. A number of example simulations are presented, including a generic Europa Orbiter spacecraft in cruise as well as in orbit around Europa. The search times are typically on the order of minutes, demonstrating the viability of the presented approach. The results are applicable to all future deep space missions where greater spacecraft autonomy is required. In addition, onboard autonomous attitude planning greatly facilitates navigation and science observation planning, thus benefiting missions to planet Earth as well.
Forward osmosis: a new approach to water purification and desalination.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, James Edward; Evans, Lindsey R.
2006-07-01
Fresh, potable water is an essential human need, and thus looming water shortages threaten the world's peace and prosperity. Waste water, brackish water, and seawater have great potential to fill the coming requirements. Unfortunately, the ability to exploit these resources is currently limited in many parts of the world by both the cost of the energy and the investment in equipment required for purification/desalination. Forward (or direct) osmosis is an emerging process for dewatering aqueous streams that might one day help resolve this problem. In FO, water from one solution selectively passes through a membrane to a second solution based solely on the difference in the chemical potential (concentration) of the two solutions. The process is spontaneous, and can be accomplished with very little energy expenditure. Thus, FO can be used, in effect, to exchange one solute for a different solute, specifically chosen for its chemical or physical properties. For desalination applications, the salts in the feed stream could be exchanged for an osmotic agent specifically chosen for its ease of removal, e.g. by precipitation. This report summarizes work performed at Sandia National Laboratories in the area of FO and reviews the status of the technology for desalination applications. At its current state of development, FO will not replace reverse osmosis (RO) as the most favored desalination technology, particularly for routine waters. However, a future role for FO is not out of the question. The ability to treat waters with high solids content or fouling potential is particularly attractive. Although our analysis indicates that FO is not cost effective as a pretreatment for conventional BWRO, water scarcity will likely drive societies to recover potable water from increasingly marginal resources, for example gray water and then sewage. In this context, FO may be an attractive pretreatment alternative.
To move the technology forward, continued improvement and optimization of membranes is recommended. The identification of optimal osmotic agents for different applications is also suggested, as it is clear that the space of potential agents and recovery processes has not been fully explored.
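The osmotic driving force that FO exploits can be estimated, for dilute solutions, with the van 't Hoff relation; the seawater figures below are rough textbook values, not results from this report:

```latex
% Van 't Hoff estimate of osmotic pressure for a dilute solution:
%   i = van 't Hoff factor, c = molar concentration, R = gas constant, T = temperature
\pi = i\,c\,R\,T
% Seawater approximated as 0.6 M NaCl (i \approx 2):
% \pi \approx 2 \times 0.6~\mathrm{mol\,L^{-1}}
%      \times 0.08314~\mathrm{L\,bar\,mol^{-1}\,K^{-1}}
%      \times 298~\mathrm{K} \approx 30~\mathrm{bar}
```

Water flows spontaneously toward the solution with the higher osmotic pressure, which is why a draw (osmotic) agent more concentrated than the feed can dewater it without applied pressure.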
NASA Astrophysics Data System (ADS)
Mueller, Jenna L.; Harmany, Zachary T.; Mito, Jeffrey K.; Kennedy, Stephanie A.; Kim, Yongbaek; Dodd, Leslie; Geradts, Joseph; Kirsch, David G.; Willett, Rebecca M.; Brown, J. Quincy; Ramanujam, Nimmi
2013-02-01
The combination of fluorescent contrast agents with microscopy is a powerful technique to obtain real time images of tissue histology without the need for fixing, sectioning, and staining. The potential of this technology lies in the identification of robust methods for image segmentation and quantitation, particularly in heterogeneous tissues. Our solution is to apply sparse decomposition (SD) to monochrome images of fluorescently-stained microanatomy to segment and quantify distinct tissue types. The clinical utility of our approach is demonstrated by imaging excised margins in a cohort of mice after surgical resection of a sarcoma. Representative images of excised margins were used to optimize the formulation of SD and tune parameters associated with the algorithm. Our results demonstrate that SD is a robust solution that can advance vital fluorescence microscopy as a clinically significant technology.
NASA Astrophysics Data System (ADS)
Kumarapperuma, Lakshitha; Premaratne, Malin; Jha, Pankaj K.; Stockman, Mark I.; Agrawal, Govind P.
2018-05-01
We demonstrate that it is possible to derive an approximate analytical expression characterizing the spasing (L-L) curve of a coherently enhanced spaser with 3-level gain-medium chromophores. The utility of this solution stems from the fact that it enables optimization over the large parameter space associated with spaser design, a functionality not offered by the methods currently available in the literature. This is vital for the advancement of spaser technology towards the level of device realization. Owing to the compact nature of the analytical expressions, our solution also facilitates the grouping and identification of key processes responsible for the spasing action, while providing significant physical insight. Furthermore, we show that our expression generates results within 0.1% error compared to numerically obtained results for pumping rates higher than the spasing threshold, drastically reducing the computational cost associated with spaser design.
Sensitivity Analysis and Optimization of the Nuclear Fuel Cycle: A Systematic Approach
NASA Astrophysics Data System (ADS)
Passerini, Stefano
For decades, nuclear energy development was based on the expectation that recycling of the fissionable materials in the used fuel from today's light water reactors into advanced (fast) reactors would be implemented as soon as technically feasible, in order to extend nuclear fuel resources. More recently, arguments have been made for deploying fast reactors to reduce the amount of higher actinides, and hence the longevity of radioactivity, in the materials destined for a geologic repository. The cost of fast reactors, together with concerns about proliferation of the technology for extracting plutonium from used LWR fuel and the large investments required to construct reprocessing facilities, has been the basis for arguments to defer the introduction of recycling technologies in many countries, including the US. In this thesis, the impacts of alternative reactor technologies on the fuel cycle are assessed. Additionally, metrics to characterize fuel cycles, and systematic approaches to using them to optimize the fuel cycle, are presented. The fuel cycle options of the 2010 MIT fuel cycle study are re-examined in light of the slower rate of growth expected in nuclear energy today, using CAFCA (Code for Advanced Fuel Cycle Analysis). The Once-Through Cycle (OTC) is considered the baseline case, while advanced technologies with fuel recycling characterize the alternative fuel cycle options available in the future. The options include limited recycling in LWRs and full recycling in fast reactors and in high-conversion LWRs. The fast reactor technologies studied include both oxide- and metal-fueled reactors. Additional fuel cycle scenarios presented for the first time in this work assume the deployment of innovative recycling reactor technologies such as Reduced Moderation Boiling Water Reactors and Uranium-235-initiated Fast Reactors.
A sensitivity study focused on system and technology parameters of interest was conducted to test the robustness of the conclusions of the MIT Fuel Cycle Study. These conclusions are found to still hold, even when considering alternative technologies and different sets of simulation assumptions. Additionally, a first-of-a-kind optimization scheme for nuclear fuel cycle analysis is proposed and its applications are discussed. Optimization metrics of interest to different stakeholders in the fuel cycle (economics, fuel resource utilization, high-level waste, transuranics/proliferation management, and environmental impact) are utilized in two optimization techniques: one linear and one stochastic. Stakeholder elicitation provided sets of relative weights for the identified metrics appropriate to each stakeholder group, which were then successfully used to arrive at optimum fuel cycle configurations for recycling technologies. The stochastic optimization tool, based on a genetic algorithm, was used to identify non-inferior solutions according to Pareto's dominance approach to optimization. The main tradeoff for fuel cycle optimization was found to be between economics and most of the other identified metrics. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs - docs@mit.edu)
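The Pareto-dominance screening used by the stochastic (genetic-algorithm) optimizer can be sketched generically. The candidate objective tuples below are hypothetical (e.g. cost vs. waste, both minimized), not fuel-cycle results.

```python
# Minimal sketch of Pareto non-dominance filtering (minimization).
# The candidate tuples are hypothetical two-objective values.

def dominates(a, b):
    """True if a is at least as good as b in every objective and
    strictly better in at least one (minimization convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective tuples."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

candidates = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
front = pareto_front(candidates)
# (3.0, 4.0) is dominated by (2.0, 3.0); the other three are non-inferior.
```

A genetic algorithm would apply such a filter each generation to retain the non-inferior set.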
NASA Astrophysics Data System (ADS)
Yu, Jonas C. P.; Wee, H. M.; Yang, P. C.; Wu, Simon
2016-06-01
One of the supply chain risks for hi-tech products results from rapid technological innovation, which causes a significant decline in selling price and demand after the initial launch period. Hi-tech products include computers and consumer communication products. From a practical standpoint, a more realistic replenishment policy needs to consider the impact of such risks, especially when some portion of shortages is lost. In this paper, suboptimal and optimal order policies with partial backordering are developed for a buyer when the component cost, the selling price, and the demand rate decline at a continuous rate. Two mathematical models are derived and discussed: one yields a suboptimal solution with a fixed replenishment interval and a simpler computational process; the other yields the optimal solution with a varying replenishment interval and a more complicated computational process. The second model results in more profit. Numerical examples illustrate the two replenishment models, and a sensitivity analysis investigates the relationship between the parameters and the net profit.
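The fixed-interval trade-off behind the first model can be illustrated with a classic EOQ-style cost sketch: ordering more often raises setup costs, ordering less often raises holding costs. This is not the paper's partial-backordering model with declining price and demand; the cost function and all parameter values (K, h, D) are assumed for illustration only.

```python
import math

# Classic fixed-interval cost rate: K/T ordering cost per day plus
# h*D*T/2 average holding cost per day; minimized at T* = sqrt(2K/(hD)).
# Parameter values are illustrative assumptions.

K, h, D = 200.0, 0.1, 50.0   # order cost, holding cost/unit/day, demand/day

def cost_rate(T):
    """Total cost per day for a fixed replenishment interval T (days)."""
    return K / T + h * D * T / 2.0

T_star = math.sqrt(2 * K / (h * D))                      # closed-form optimum
T_grid = min((t / 10 for t in range(1, 300)), key=cost_rate)  # grid search check
```

The grid search lands next to the closed-form optimum, which is how one can sanity-check a more elaborate model with varying intervals.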
NASA Astrophysics Data System (ADS)
Li, Zheng-Yan; Xie, Zheng-Wei; Chen, Tong; Ouyang, Qi
2009-12-01
Constraint-based models such as flux balance analysis (FBA) are a powerful tool for studying biological metabolic networks. Under the hypothesis that cells operate at an optimal growth rate as a result of evolution and natural selection, this model successfully predicts most growth-rate behaviours of cells. However, the model ignores the fact that cells can change their metabolic states during evolution, leaving optimal metabolic states unstable. Here, we lump all the cellular processes that change metabolic states into a single term, 'noise', and assume that cells change metabolic states by randomly walking in the feasible solution space. By simulating a cell state randomly walking in the constrained solution space of metabolic networks, we found that in a noisy environment cells in optimal states tend to travel away from these points. On considering the competition between the noise effect and the growth effect in cell evolution, we found that there exists a trade-off between these two effects. As a result, the population of cells contains different metabolic states, and the population growth rate settles at suboptimal values.
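The "random walk in feasible solution space" idea can be illustrated with a one-dimensional toy: the interval [0, 1] stands in for the solution space and the optimum sits on its boundary. These are illustrative assumptions, not an actual metabolic network or the paper's simulation.

```python
import random

# Toy sketch (not actual FBA): a state random-walks inside the feasible
# region 0 <= x <= 1, where the growth optimum is the boundary x = 1.
# Unbiased noise pushes a state that starts at the optimum back into
# the interior, illustrating why optimal states are unstable under noise.

def noisy_walk(x0, steps, step_size, seed=0):
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        x += rng.uniform(-step_size, step_size)   # random perturbation
        x = min(1.0, max(0.0, x))                 # project back into the region
    return x

final = noisy_walk(x0=1.0, steps=1000, step_size=0.05, seed=1)
```

Starting at the optimum x = 1, the walk typically drifts into the interior, mirroring the drift away from optimal metabolic states described above.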
Optimal Electric Vehicle Scheduling: A Co-Optimized System and Customer Perspective
NASA Astrophysics Data System (ADS)
Maigha
Electric vehicles provide a two-pronged solution to the problems faced by the electricity and transportation sectors. First, they provide a green, highly efficient alternative to internal combustion engine vehicles, thus reducing our dependence on fossil fuels. Second, they bear the potential of supporting the grid as energy storage devices while incentivising customers through their participation in energy markets. Despite these advantages, widespread adoption of electric vehicles faces socio-technical and economic bottlenecks. This dissertation seeks to provide solutions that balance system and customer objectives under present technological capabilities. The research uses electric vehicles as controllable loads and resources, the idea being to give customers the tools required to make an informed decision while considering system conditions. First, a genetic algorithm based optimal charging strategy to reduce the impact of aggregated electric vehicle load is presented; a Monte Carlo based solution strategy studies how the solution changes under different objective functions. This day-ahead scheduling is then extended to real-time coordination using a moving-horizon approach. Further, battery degradation costs are explored with vehicle-to-grid implementations, thus accounting for customer net revenue and vehicle utility for grid support. A Pareto front, thus obtained, provides the nexus between customer- and system-desired operating points. Finally, we propose a transactive business model for a smart airport parking facility. This model identifies various revenue streams and satisfaction indices that benefit the parking lot owner and the customer, thus adding value to the electric vehicle.
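A minimal genetic-algorithm charging scheduler of the kind described can be sketched as follows. The load profile, fleet size, and GA constants are all illustrative assumptions, not the dissertation's data or its exact objective.

```python
import random

# Hypothetical sketch: a GA chooses charging start slots for a small EV
# fleet to minimize the peak of the aggregated load (base + charging).

BASE = [3, 3, 4, 6, 8, 9, 8, 6, 4, 3, 3, 3]   # assumed base load per slot
N_EV, DUR, POWER = 6, 2, 2                     # vehicles, charge slots, kW each

def peak_load(starts):
    """Peak aggregated load for a list of per-vehicle start slots."""
    load = BASE[:]
    for s in starts:
        for t in range(s, s + DUR):
            load[t % len(BASE)] += POWER
    return max(load)

def ga(pop_size=30, gens=60, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randrange(len(BASE)) for _ in range(N_EV)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=peak_load)                 # fitness = lower peak
        elite = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, N_EV)
            child = a[:cut] + b[cut:]           # one-point crossover
            if rng.random() < 0.2:              # mutation
                child[rng.randrange(N_EV)] = rng.randrange(len(BASE))
            children.append(child)
        pop = elite + children
    return min(pop, key=peak_load)

best = ga()
```

With everyone charging at the evening peak (slot 5) the peak would be 21; the GA spreads charging into low-load slots and keeps the peak far lower.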
2005-06-01
Four separate NIIRS scales have been developed to meet system requirements: Radar NIIRS, Visible NIIRS, Infrared (IR) NIIRS, and Multispectral (MS) NIIRS.
Development of multi-touch panel backlight system
NASA Astrophysics Data System (ADS)
Chomiczewski, J.; Długosz, M.; Godlewski, G.; Kochanowicz, M.
2013-10-01
The paper presents the design, simulation analysis, and parameter measurements of an optical multi-touch panel backlight system. The optical technology was also compared with commercially available solutions. A numerical simulation of the laser-based backlight system was carried out, and the influence of laser power, beam divergence, and the placement of reflective surfaces on illumination uniformity was examined. The optimal illumination system was used for further studies.
Pressure leaching of chalcopyrite concentrate
NASA Astrophysics Data System (ADS)
Aleksei, Kritskii; Kirill, Karimov; Stanislav, Naboichenko
2018-05-01
The results of chalcopyrite concentrate processing using low-temperature and high-temperature sulfuric acid pressure leaching are presented. A material of the following composition was used: 21.5 Cu, 0.1 Zn, 0.05 Pb, 0.04 Ni, 26.59 S, 24.52 Fe, 16.28 SiO2 (in wt.%). The influence of technological parameters on the degree of copper and iron extraction into the leach solution was studied over a wide range of values. The following conditions were suggested as optimal for high-temperature pressure leaching: t = 190 °C, PO2 = 0.5 MPa, CH2SO4 = 15 g/L, L:S = 6:1. At these parameters, at least 98% of the Cu can be extracted from the concentrate into the leach solution within 100 minutes. The following conditions were suggested as optimal for low-temperature pressure leaching: t = 105 °C, PO2 = 1.3-1.5 MPa, CH2SO4 = 90 g/L, L:S = 10:1. At these parameters, up to 83% of the Cu can be extracted from the concentrate into the leach solution within 300-360 minutes.
Adaptive building skin structures
NASA Astrophysics Data System (ADS)
Del Grosso, A. E.; Basso, P.
2010-12-01
The concept of adaptive and morphing structures has gained considerable attention in recent years in many fields of engineering. In civil engineering, however, very few practical applications have been reported to date. Non-conventional structural concepts such as deployable, inflatable, and morphing structures may indeed provide innovative solutions to some of the problems the construction industry is being called upon to face; the search for low-energy-consumption or even energy-harvesting green buildings is among them. This paper first presents a review of these problems and technologies, showing that their solution requires a multidisciplinary approach involving the integration of architectural and engineering disciplines. The discussion continues with a possible application of two adaptive, dynamically morphing structures proposed for the realization of an acoustic envelope. The core of the two applications is a novel optimization process that guides the search for optimal solutions by means of an evolutionary technique, while the compatibility of the resulting configurations of the adaptive envelope is ensured by the virtual force density method.
Process Approach for Modeling of Machine and Tractor Fleet Structure
NASA Astrophysics Data System (ADS)
Dokin, B. D.; Aletdinova, A. A.; Kravchenko, M. S.; Tsybina, Y. S.
2018-05-01
The existing software packages for modelling machine and tractor fleet structure are mostly aimed at solving an optimization task. However, their creators choose a single optimization criterion, build it into the software, and argue that it is the best, without giving the decision maker the opportunity to choose a criterion suited to their own enterprise. To analyze the “bottlenecks” of machine and tractor fleet modelling, the authors of this article created a process model that includes adjusting the machinery-use plan by searching through alternative technologies. As a result, the following recommendations for software development have been worked out: introduce a database of alternative technologies; allow the user to change the timing of operations, even beyond the allowable limits, in which case the incurred loss is calculated; allow the optimization task to be skipped or, when it is needed, allow the optimization criterion to be chosen; and introduce a graphical display of the annual complex of works sufficient for developing and adjusting a business strategy.
Zhang, Yong-Feng; Chiang, Hsiao-Dong
2017-09-01
A novel three-stage methodology, termed the "consensus-based particle swarm optimization (PSO)-assisted Trust-Tech methodology," to find global optimal solutions for nonlinear optimization problems is presented. It is composed of Trust-Tech methods, consensus-based PSO, and local optimization methods that are integrated to compute a set of high-quality local optimal solutions that can contain the global optimal solution. The proposed methodology compares very favorably with several recently developed PSO algorithms based on a set of small-dimension benchmark optimization problems and 20 large-dimension test functions from the CEC 2010 competition. The analytical basis for the proposed methodology is also provided. Experimental results demonstrate that the proposed methodology can rapidly obtain high-quality optimal solutions that can contain the global optimal solution. The scalability of the proposed methodology is promising.
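The plain PSO core on which such methodologies build can be sketched on the sphere benchmark function. This is not the consensus-based variant or the Trust-Tech stage described above; the inertia and acceleration constants are conventional illustrative choices.

```python
import random

# Minimal particle swarm optimization on f(x) = sum(x_i^2), whose global
# minimum is the origin. Each particle is pulled toward its personal best
# and the swarm's global best.

def sphere(x):
    return sum(v * v for v in x)

def pso(dim=5, n_particles=20, iters=200, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal bests
    gbest = min(pbest, key=sphere)[:]              # global best
    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if sphere(pos[i]) < sphere(pbest[i]):
                pbest[i] = pos[i][:]
                if sphere(pbest[i]) < sphere(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso()
```

In a Trust-Tech-style pipeline, each high-quality swarm solution would then seed a local optimizer to refine it into a local optimum.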
Topology optimization of hyperelastic structures using a level set method
NASA Astrophysics Data System (ADS)
Chen, Feifei; Wang, Yiqiang; Wang, Michael Yu; Zhang, Y. F.
2017-12-01
Soft rubberlike materials, due to their inherent compliance, are finding widespread use in applications ranging from assistive wearable technologies to soft material robots. Structural design of such soft and rubbery materials necessitates the consideration of large nonlinear deformations and hyperelastic material models to accurately predict their mechanical behaviour. In this paper, we present an effective level set-based topology optimization method for the design of hyperelastic structures that undergo large deformations. The method incorporates both geometric and material nonlinearities, where the strain and stress measures are defined within the total Lagrangian framework and the hyperelasticity is characterized by the widely adopted Mooney-Rivlin material model. A shape sensitivity analysis is carried out, in the strict sense of the material derivative, where the high-order terms involving the displacement gradient are retained to ensure a descent direction. As the design velocity enters the shape derivative through its gradient and divergence terms, we develop a discrete velocity selection strategy. The optimization proceeds in two steps: a linear optimization is performed first, and its optimized solution serves as the initial design for the subsequent nonlinear optimization. This operation efficiently alleviates numerical instability and facilitates the optimization process. To demonstrate the validity and effectiveness of the proposed method, three compliance minimization problems are studied; their optimized solutions show the significant mechanical benefits of incorporating the nonlinearities, with remarkable enhancement not only in structural stiffness but also in critical buckling load.
Model-based assist feature insertion for sub-40nm memory device
NASA Astrophysics Data System (ADS)
Suh, Sungsoo; Lee, Suk-joo; Choi, Seong-woon; Lee, Sung-Woo; Park, Chan-hoon
2009-04-01
Many issues need to be resolved to establish a production-worthy model-based assist feature insertion flow for single and double exposure patterning processes that extends low-k1 processing at 193 nm immersion technology. Model-based assist feature insertion is not trivial to implement for either single or double exposure patterning compared to rule-based methods. As shown in Fig. 1, pixel-based mask inversion technology has inherent difficulties in mask writing and inspection, although it is one of the key technologies for extending single exposure on the contact layer. Thus far, inversion technology has been applied as a co-optimization of the target mask to simultaneously generate optimized main and sub-resolution assist features for a desired process window. Alternatively, it can be used to optimize for a target feature after assist features have been inserted, in order to reduce mask complexity. Simplification of the inversion mask is a major issue when applying inversion technology to device development: even if smaller mask features can be fabricated, mask writing time remains a major factor. As shown in Figure 2, mask writing time may be the limiting factor in determining whether or not an inversion solution is viable. An increased shot count can be reasoned to relate to increased margin for the inversion methodology, but there is a limit on how complex a mask can be and still be production worthy. Source and mask co-optimization also influences the final mask patterns and the assist feature sizes and positions for a given target. In this study, we discuss assist feature insertion methods for sub-40 nm technology.
Information technologies in optimization process of monitoring of software and hardware status
NASA Astrophysics Data System (ADS)
Nikitin, P. V.; Savinov, A. N.; Bazhenov, R. I.; Ryabov, I. V.
2018-05-01
The article describes a model of a hardware and software monitoring system for a large company that provides customers with software as a service (a SaaS solution) using information technology. The main functions of the monitoring system are the provision of up-to-date data for analyzing the state of the IT infrastructure, rapid fault detection, and effective fault elimination. The main risks associated with the provision of these services are described, comparative characteristics of the software are given, and the authors' methods for monitoring the status of software and hardware are proposed.
NASA Astrophysics Data System (ADS)
Belániová, Barbora; Antošová, Naďa
2017-06-01
Improving the thermal properties of external cladding in accordance with the new EU Directive remains a hot topic, one that must necessarily be addressed by December 2020. Maintenance and repair of existing ETICS (external thermal insulation composite systems) has also become an open issue in the search for solutions for existing constructions. The aim of the research in this review is to analyze the influence of layers of alternative thermal materials in the "double thermal insulation" technology. Humidity and temperature conditions will be further examined in connection with the development and colonization of microorganisms on construction surfaces.
System engineering for image and video systems
NASA Astrophysics Data System (ADS)
Talbot, Raymond J., Jr.
1997-02-01
The National Law Enforcement and Corrections Technology Centers (NLECTC) support public law enforcement agencies with technology development, evaluation, planning, architecture, and implementation. The NLECTC Western Region has a particular emphasis on surveillance and imaging issues. Among its activities, working with government and industry, NLECTC-WR produces 'Guides to Best Practices and Acquisition Methodologies' that help government organizations make better-informed purchasing and operational decisions. This presentation includes specific examples from current activities. Through these systematic procedures, it is possible to design solutions optimally matched to the desired outcomes and to provide a process for continuous improvement and greater public awareness of success.
NASA Technical Reports Server (NTRS)
Young, Larry A.; Pisanich, Gregory; Ippolito, Corey; Alena, Rick
2005-01-01
The objective of this paper is to review the anticipated imaging and remote-sensing technology requirements for aerial vehicle survey missions to other planetary bodies in our Solar system that can support in-atmosphere flight. In the not-too-distant future, such planetary aerial vehicle (a.k.a. aerial explorer) exploration missions will become feasible. Imaging and remote-sensing observations will be a key objective for these missions. Accordingly, it is imperative that optimal solutions for image acquisition and real-time autonomous analysis of image data sets be developed for such vehicles.
Resilience Design Patterns: A Structured Approach to Resilience at Extreme Scale
Engelmann, Christian; Hukerikar, Saurabh
2017-09-01
Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space remains fragmented. There are no formal methods and metrics to integrate the various HPC resilience techniques into composite solutions, nor are there methods to holistically evaluate the adequacy and efficacy of such solutions in terms of their protection coverage and their performance & power efficiency characteristics. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this paper, we develop a structured approach to the design, evaluation and optimization of HPC resilience using the concept of design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the problems caused by various types of faults, errors and failures in HPC systems and the techniques used to deal with these events. Each well-known solution that addresses a specific HPC resilience challenge is described in the form of a pattern. We develop a complete catalog of such resilience design patterns, which may be used by system architects, system software and tools developers, application programmers, as well as users and operators as essential building blocks when designing and deploying resilience solutions. We also develop a design framework that enhances a designer's understanding of the opportunities for integrating multiple patterns across layers of the system stack and the important constraints during implementation of the individual patterns.
It is also useful for defining mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The resilience patterns and the design framework also enable exploration and evaluation of design alternatives and support optimization of the cost-benefit trade-offs among performance, protection coverage, and power consumption of resilience solutions. The overall goal of this work is to establish a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner despite frequent faults, errors, and failures of various types.
Exact algorithms for haplotype assembly from whole-genome sequence data.
Chen, Zhi-Zhong; Deng, Fei; Wang, Lusheng
2013-08-15
Haplotypes play a crucial role in genetic analysis and have many applications, such as gene disease diagnosis, association studies, and ancestry inference. The development of DNA sequencing technologies makes it possible to obtain haplotypes from a set of aligned reads originating from both copies of a chromosome of a single individual, an approach known as haplotype assembly. Exact algorithms that can give optimal solutions to the haplotype assembly problem are in high demand. Unfortunately, previous algorithms for this problem either fail to output optimal solutions or take too long, even when executed on a PC cluster. We develop an approach to finding optimal solutions for the haplotype assembly problem under the minimum-error-correction (MEC) model. Most previous approaches assume that the columns in the input matrix correspond to (putative) heterozygous sites. This all-heterozygous assumption is correct for most columns, but it may be incorrect for a small number of them. In this article, we consider the MEC model both with and without the all-heterozygous assumption. In our approach, we first use new methods to decompose the input read matrix into small independent blocks and then model the problem for each block as an integer linear program, which is then solved by an integer linear programming solver. We have tested our program on a single PC [a Linux (x64) desktop PC with an i7-3960X CPU], using the filtered HuRef and NA12878 datasets (after applying some variant calling methods). With the all-heterozygous assumption, our approach can optimally solve the whole HuRef dataset within a total time of 31 h (26 h for the most difficult block of the 15th chromosome and only 5 h for the other blocks). To our knowledge, this is the first time that MEC optimal solutions have been completely obtained for the filtered HuRef dataset.
Moreover, in the general case (without the all-heterozygous assumption), our approach can optimally solve all the chromosomes of the HuRef dataset, except the most difficult block in chromosome 15, within a total time of 12 days. For both the HuRef and NA12878 datasets, the optimal costs in the general case are sometimes much smaller than those in the all-heterozygous case. This implies that some columns in the input matrix (after applying certain variant calling methods) still correspond to false-heterozygous sites. Our program and the optimal solutions found for the HuRef dataset are available at http://rnc.r.dendai.ac.jp/hapAssembly.html.
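On a toy instance, the MEC objective can be made concrete with a brute-force solver. This is illustrative only, not the paper's block decomposition or ILP formulation; the read matrix below is invented.

```python
from itertools import product

# Brute-force minimum error correction (MEC) on a tiny read matrix: each
# row is a read over putative heterozygous sites, None marks an uncovered
# site. Under the all-heterozygous assumption the two haplotypes are
# complementary, so we enumerate one haplotype and assign each read to
# the closer member of the pair.

def mec_cost(reads, hap):
    comp = [1 - a for a in hap]
    cost = 0
    for read in reads:
        d_h = sum(1 for r, a in zip(read, hap) if r is not None and r != a)
        d_c = sum(1 for r, a in zip(read, comp) if r is not None and r != a)
        cost += min(d_h, d_c)     # corrections if read joins the closer haplotype
    return cost

def solve_mec(reads):
    m = len(reads[0])
    return min(product((0, 1), repeat=m), key=lambda h: mec_cost(reads, h))

reads = [
    [0, 0, 1, None],
    [0, 0, None, 1],
    [1, 1, 0, 0],
    [1, None, 0, 0],
    [0, 1, 1, 1],     # one read carrying a likely sequencing error
]
best = solve_mec(reads)
```

Enumeration is exponential in the number of sites, which is why the paper's block decomposition and ILP solver are needed at genome scale.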
RET selection on state-of-the-art NAND flash
NASA Astrophysics Data System (ADS)
Lafferty, Neal V.; He, Yuan; Pei, Jinhua; Shao, Feng; Liu, QingWei; Shi, Xuelong
2015-03-01
We present results generated using a new gauge-based Resolution Enhancement Technique (RET) selection flow during the technology set-up phase of a 3x-node NAND flash product. As a test case, we consider a challenging critical level of this flash product. The RET solutions include inverse lithography technology (ILT) optimized masks with sub-resolution assist features (SRAF) and companion illumination sources developed using a new pixel-based Source Mask Optimization (SMO) tool that uses measurement gauges as a primary input. The flow includes verification objectives that allow tolerancing of particular measurement gauges based on lithographic criteria. The relative importance of particular gauges may also be set, to aid in down-selection from several candidate sources. The end result is a sensitive, objective score of RET performance. Using these custom-defined importance metrics, decisions on the final RET style can be made in an objective way.
Telemedicine endurance - empowering care recipients in Asian Telemedicine setup.
Bhatia, Jagjit Singh; Sharma, Sagri
2008-01-01
Optimizing medical resources in India and Asia by empowering patients with e-health services, so that multi-specialty healthcare reaches rural and remote areas, is the need of the hour. C-DAC Mohali has been working in this direction since 1999 and has deployed the technology for the benefit of the rural Indian population. We deliver a broad spectrum of professional-quality telemedicine products and customized solutions that fit within any budget constraint. We are experts in telemedicine conferencing and can help find the right solution to empower the patient with the essential tools. This paper describes the results of our telemedicine projects, with substantiating and quantifiable data. Indian telemedicine establishments need periodic evaluation to stay true to the main objectives of the technology, i.e. patient care, patient satisfaction, and patient opinion; in one word, patient empowerment.
High-Capacity Hydrogen-Based Green-Energy Storage Solutions For The Grid Balancing
NASA Astrophysics Data System (ADS)
D'Errico, F.; Screnci, A.
One of the main current challenges in green-power storage and smart grids is the lack of effective solutions for accommodating the imbalance between renewable energy sources, which offer an intermittent electricity supply, and a variable electricity demand. Energy management systems will be needed in the near future, yet they still represent a major challenge. Integrating intermittent renewable energy sources through safe and cost-effective energy storage systems based on solid-state hydrogen is now achievable thanks to some recent technology breakthroughs. An optimized solid-storage method based on magnesium hydrides guarantees very rapid absorption and desorption kinetics. Coupled with electrolyzer technology, high-capacity storage of green hydrogen is therefore practicable. Beyond these aspects, magnesium is emerging as an environmentally friendly energy storage medium able to sustain the integration, monitoring, and control of large quantities of GWh from high-capacity renewable generation in the EU.
NASA Astrophysics Data System (ADS)
Zhilenkov, A. A.; Chernyi, S. G.; Nyrkov, A. P.; Sokolov, S. S.
2017-10-01
Nitrides of group III elements are a very suitable basis for building light-emitting devices with radiating wavelengths of 200-600 nm. The use of such semiconductors makes it possible to obtain full-color RGB light sources, increase the recording density of digital data storage devices, and produce high-capacity, efficient sources of white light. The electronic properties of these semiconductors also allow their use as a basis for high-power and high-frequency transistors and other electronic devices whose specifications are competitive with those of SiC-based devices. Only since 2000 has the technology for growing III-N crystals reached a level of wide recognition by both basic science and industry, leading to the creation of a multi-billion-dollar market - and this despite the rather low maturity of the production technology for devices based on III-N materials. The progress of the last decade requires solving the main problem constraining further development of this technology today: ensuring the growth of III-N structures of the necessary quality. For this purpose, it is necessary to analyze and optimize the processes in epitaxial growth installations and, as a result, to optimize their designs.
NASA Astrophysics Data System (ADS)
Koneva, M. S.; Rudenko, O. V.; Usatikov, S. V.; Bugayets, N. A.; Tamova, M. Yu; Fedorova, M. A.
2018-05-01
An increase in the efficiency of the "numerical" technology for solving computational problems of parametric optimization of the hydroponic germination of wheat grains is considered. In this setting the quality criteria are contradictory, and some of them are given by implicit functions of many variables. One of the main stages, soaking, which determines the time and quality of germinated wheat grain, is studied: during soaking the grain receives the amount of moisture and atmospheric oxygen required for germination and subsequently accumulates enzymes. A solution algorithm for this problem is suggested and implemented by means of the software packages Statistica v.10 and MathCAD v.15. The use of the proposed mathematical models describing the hydroponic soaking of spring soft wheat varieties made it possible to determine optimal germination conditions. The results show that the type of aquatic environment used for soaking has a great influence on the water-absorption process and especially on the chemical composition of the germinated material. The use of the anolyte of electrochemically activated water (ECHA-water) shortens the process from 5.83 to 4 hours for the wheat variety «Altayskaya 105» and from 13 to 8.8 hours for «Pobla Runo».
A recourse-based solution approach to the design of fuel cell aeropropulsion systems
NASA Astrophysics Data System (ADS)
Choi, Taeyun Paul
In order to formulate a nondeterministic solution approach that capitalizes on the practice of compensatory design, this research introduces the notion of recourse. Within the context of engineering an aerospace system, recourse is defined as a set of corrective actions that can be implemented in stages later than the current design phase to keep critical system-level figures of merit within the desired target ranges, albeit at some penalty. Recourse programs also introduce the concept of stages to optimization formulations, and allow each stage to encompass as many sequences or events as determined necessary to solve the problem at hand. A two-part strategy, which partitions the design activities into stages, is proposed to model the bi-phasal nature of recourse. The first stage is defined as the time period in which an a priori design is identified before the exact values of the uncertain parameters are known. In contrast, the second stage is a period occurring some time after the first stage, when an a posteriori correction can be made to the first-stage design, should the realization of uncertainties impart infeasibilities. Penalizing costs are attached to the second-stage corrections to reflect the reality that getting it done right the first time is almost always less costly than fixing it after the fact. Consequently, the goal of the second stage becomes identifying an optimal solution with respect to the second-stage penalty, given the first-stage design, as well as a particular realization of the random parameters. This two-stage model is intended as an analogue of the traditional practice of monitoring and managing key Technical Performance Measures (TPMs) in aerospace systems development settings. One obvious weakness of the two-stage strategy as presented above is its limited applicability as a forecasting tool. 
Not only cannot the second stage be invoked without a first-stage starting point, but also the second-stage solution differs from one specific outcome of uncertainties to another. On the contrary, what would be more valuable given the time-phased nature of engineering design is the capability to perform an anticipatory identification of an optimum that is also expected to incur the least costly recourse option in the future. It is argued that such a solution is in fact a more balanced alternative than robust, probabilistically maximized, or chance-constrained solutions, because it represents trading the design optimality in the present with the potential costs of future recourse. Therefore, it is further proposed that the original two-stage model be embedded inside a larger design loop, so that the realization of numerous recourse scenarios can be simulated for a given first-stage design. The repetitive procedure at the second stage is necessary for computing the expected cost of recourse, which is equivalent to its mathematical expectation as per the strong law of large numbers. The feedback loop then communicates this information to the aggregate-level optimizer, whose objective is to minimize the sum total of the first-stage metric and the expected cost of future corrective actions. The resulting stochastic solution is a design that is well-hedged against the uncertain consequences of later design phases, while at the same time being less conservative than a solution designed to more traditional deterministic standards. As a proof-of-concept demonstration, the recourse-based solution approach is presented as applied to a contemporary aerospace engineering problem of interest - the integration of fuel cell technology into uninhabited aerial systems. 
The creation of a simulation environment capable of designing three system alternatives based on Proton Exchange Membrane Fuel Cell (PEMFC) technology and another three systems leveraging upon Solid Oxide Fuel Cell (SOFC) technology is presented as the means to notionally emulate the development process of this revolutionary aeropropulsion method. Notable findings from the deterministic trade studies and algorithmic investigation include the incompatibility of the SOFC based architectures with the conceived maritime border patrol mission, as well as the thermodynamic scalability of the PEMFC based alternatives. It is the latter finding which justifies the usage of the more practical specific-parameter based approach in synthesizing the design results at the propulsion level into the overall aircraft sizing framework. The ensuing presentation on the stochastic portion of the implementation outlines how the selective applications of certain Design of Experiments, constrained optimization, Surrogate Modeling, and Monte Carlo sampling techniques enable the visualization of the objective function space. The particular formulations of the design stages, recourse, and uncertainties proposed in this research are shown to result in solutions that are well compromised between unfounded optimism and unwarranted conservatism. In all stochastic optimization cases, the Value of Stochastic Solution (VSS) proves to be an intuitively appealing measure of accounting for recourse-causing uncertainties in an aerospace systems design environment. (Abstract shortened by UMI.)
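The two-stage recourse formulation described above can be caricatured in a few lines. In this hypothetical toy model, a first-stage design variable x is fixed before an uncertain demand is realized, any shortfall is corrected at a second-stage penalty, and the outer search minimizes the first-stage cost plus a Monte Carlo estimate of the expected recourse cost:

```python
import random

def expected_total_cost(x, demands, c=1.0, q=3.0):
    """First-stage cost c*x plus the Monte Carlo mean of the second-stage
    recourse cost q*max(0, d - x) over the sampled scenarios."""
    recourse = sum(q * max(0.0, d - x) for d in demands) / len(demands)
    return c * x + recourse

random.seed(0)
# scenarios for the uncertain parameter (toy uniform demand)
demands = [random.uniform(50.0, 150.0) for _ in range(2000)]

# outer loop: search the first-stage design; inner loop: expected recourse
best_x = min(range(0, 201), key=lambda x: expected_total_cost(x, demands))
```

Because the penalty q exceeds the first-stage unit cost c, the hedged optimum over-provisions relative to the mean demand: with these numbers the classical critical fractile (q - c)/q = 2/3 puts the optimum near x = 117 rather than at the mean of 100, exactly the kind of balance between present optimality and future recourse cost argued for above.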
Earth-to-Orbit Laser Launch Simulation for a Lightcraft Technology Demonstrator
NASA Astrophysics Data System (ADS)
Richard, J. C.; Morales, C.; Smith, W. L.; Myrabo, L. N.
2006-05-01
Optimized laser launch trajectories have been developed for a 1.4 m diameter, 120 kg (empty mass) Lightcraft Technology Demonstrator (LTD). The lightcraft's combined-cycle airbreathing/rocket engine is designed for single-stage-to-orbit flights with a mass ratio of 2, propelled by a 100 MW class ground-based laser (GBL) built on a 3 km mountain peak. Once in orbit, the vehicle becomes an autonomous micro-satellite. Two types of trajectories were simulated with the SORT (Simulation and Optimization of Rocket Trajectories) software package: a) direct GBL boost to orbit, and b) GBL boost aided by a laser relay satellite. Several new subroutines were constructed for SORT to input engine performance (as a function of Mach number and altitude), vehicle aerodynamics, guidance algorithms, and mass history. A new guidance/steering option required the lightcraft to always point at the GBL or the laser relay satellite. SORT iterates on trajectory parameters to optimize vehicle performance, achieve a desired criterion, or constrain the solution to avoid a specific limit. The predicted laser-boost performance of the LTD is revolutionary, and SORT simulations have helped to define this new frontier.
Post-Optimality Analysis In Aerospace Vehicle Design
NASA Technical Reports Server (NTRS)
Braun, Robert D.; Kroo, Ilan M.; Gage, Peter J.
1993-01-01
This analysis pertains to the applicability of optimal sensitivity information to aerospace vehicle design. An optimal sensitivity (or post-optimality) analysis refers to computations performed once the initial optimization problem is solved. These computations may be used to characterize the design space about the present solution and to infer changes in this solution as a result of constraint or parameter variations, without reoptimizing the entire system. The present analysis demonstrates that post-optimality information generated through first-order computations can be used to accurately predict the effect of constraint and parameter perturbations on the optimal solution. This assessment is based on the solution of an aircraft design problem in which the post-optimality estimates are shown to be within a few percent of the true solution over the practical range of constraint and parameter variations. Through the solution of a reusable, single-stage-to-orbit launch vehicle design problem, this optimal sensitivity information is also shown to improve the efficiency of the design process. For a hierarchically decomposed problem, this computational efficiency is realized by estimating the main-problem objective gradient through optimal sensitivity calculations. By reducing the need for finite differencing of a re-optimized subproblem, a significant decrease in the number of objective function evaluations required to reach the optimal solution is obtained.
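The first-order prediction underlying post-optimality analysis can be illustrated on a problem solvable in closed form (a generic textbook example, not the aircraft problem from the paper): minimizing x^2 + y^2 subject to x + y = b gives f*(b) = b^2/2 with Lagrange multiplier lambda = b, and lambda equals df*/db, so the effect of a constraint perturbation can be estimated without re-optimizing:

```python
def optimum(b):
    """Closed-form optimum of: minimize x^2 + y^2 subject to x + y = b.
    Solution x = y = b/2 gives f* = b^2/2; the Lagrange multiplier is b."""
    f_star = b * b / 2.0
    lam = b            # multiplier = df*/db (optimal sensitivity)
    return f_star, lam

b = 4.0
f0, lam = optimum(b)

delta = 0.1                          # perturbation of the constraint right-hand side
estimate = f0 + lam * delta          # first-order post-optimality estimate
exact, _ = optimum(b + delta)        # truth from re-optimizing
error = abs(estimate - exact)        # second-order remainder: delta^2 / 2
```

The estimate 8.4 sits within 0.005 of the re-optimized value 8.405; the discrepancy is exactly the quadratic remainder delta^2/2, which is why first-order post-optimality estimates stay within a few percent over modest perturbation ranges.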
JPRS Report Science & Technology, Europe
1991-10-31
… the solar system, the earth, and the conditions for life on earth,
• To contribute to the solution of environmental problems through satellite …
… requiring considerable additional R&D is to be stepped up.
• Wind plants require about 10 years' more R&D work.
• Photovoltaics (PV) and solar … funding for active and passive solar energy exploitation.
5. Transport Sector
• Optimizing means of transport (in manufacture and operation) …
Algorithm for designing smart factory Industry 4.0
NASA Astrophysics Data System (ADS)
Gurjanov, A. V.; Zakoldaev, D. A.; Shukalov, A. V.; Zharinov, I. O.
2018-03-01
The task of designing the production division of an Industry 4.0 item-designing company is studied. The authors propose an algorithm based on a modified V. L. Volkovich method. The algorithm generates options for arranging production with robotized technological equipment operating in automatic mode. The basis of the algorithm is the solution of a multi-criteria optimization task for several additive criteria.
Research in Structures and Dynamics, 1984
NASA Technical Reports Server (NTRS)
Hayduk, R. J. (Compiler); Noor, A. K. (Compiler)
1984-01-01
A symposium on advances and trends in structures and dynamics was held to communicate new insights into physical behavior and to identify trends in solution procedures for structures and dynamics problems. Pertinent areas of concern were (1) multiprocessors, parallel computation, and database management systems, (2) advances in finite element technology, (3) interactive computing and optimization, (4) mechanics of materials, (5) structural stability, (6) dynamic response of structures, and (7) advanced computer applications.
ANN based Real-Time Estimation of Power Generation of Different PV Module Types
NASA Astrophysics Data System (ADS)
Syafaruddin; Karatepe, Engin; Hiyama, Takashi
Distributed generation is expected to become more important in future generation systems, and utilities need solutions that help them manage resources more efficiently. Effective smart-grid solutions use real-time data to refine operations, pinpoint inefficiencies, and maintain secure and reliable operating conditions. This paper proposes the application of an Artificial Neural Network (ANN) for the real-time estimation of the maximum power generation of PV modules of different technologies. An intelligent technique is required here because the relationship between the maximum power of a PV module and its open-circuit voltage and temperature is nonlinear and cannot easily be expressed analytically for each technology. The proposed ANN method uses open-circuit voltage and cell temperature as input signals, instead of irradiance and ambient temperature, to estimate the maximum power generation of PV modules. The ability to perform this estimation is important to the utility both for finding optimal operating points and for diagnostics, where deviations may be an early indicator of a need for maintenance, and for optimal energy management. The proposed method is verified with a real-time simulator driven by daily profiles of irradiance and cell temperature.
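A minimal sketch of the idea, assuming a synthetic stand-in for the measured module data (the model function and all constants below are hypothetical): a small single-hidden-layer network is trained by stochastic gradient descent to map (open-circuit voltage, cell temperature) to maximum power:

```python
import math
import random

def synthetic_pmax(voc, temp):
    """Hypothetical nonlinear module model standing in for measured data (W)."""
    return 4.0 * voc * (1.0 - 0.004 * (temp - 25.0)) * math.tanh(voc / 30.0)

random.seed(1)
# training pairs: (open-circuit voltage [V], cell temperature [degC]) -> Pmax [W]
samples = [(random.uniform(30, 40), random.uniform(15, 60)) for _ in range(200)]
targets = [synthetic_pmax(v, t) for v, t in samples]
scale = max(targets)  # normalize outputs to keep the toy net well-conditioned

H = 6  # hidden units
w1 = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def predict(voc, temp):
    x = (voc / 40.0, temp / 60.0)
    h = [math.tanh(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2

def mse():
    return sum((predict(v, t) - p / scale) ** 2
               for (v, t), p in zip(samples, targets)) / len(samples)

mse_before = mse()
lr = 0.05
for _ in range(300):  # epochs of plain per-sample gradient descent
    for (v, t), p in zip(samples, targets):
        x = (v / 40.0, t / 60.0)
        h = [math.tanh(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
        y = sum(w2[j] * h[j] for j in range(H)) + b2
        err = y - p / scale
        for j in range(H):
            dh = err * w2[j] * (1.0 - h[j] ** 2)  # backprop through tanh
            w2[j] -= lr * err * h[j]
            w1[j][0] -= lr * dh * x[0]
            w1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * err
mse_after = mse()
```

The point of the sketch is the input choice, not the architecture: the net consumes exactly the two signals the paper proposes (open-circuit voltage and cell temperature), and learning replaces an analytical expression that would otherwise have to be derived per module technology.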
Use of osmotic dehydration to improve fruits and vegetables quality during processing.
Maftoonazad, Neda
2010-11-01
Osmotic treatment is a preparation step prior to further processing of foods that involves simultaneous moisture loss and solids gain during immersion in osmotic solutions, resulting in partial drying and improving the overall quality of food products. This review discusses the different aspects of osmotic dehydration (OD) technology: the solutes employed, the characteristics of the solutions used, the influence of process variables, and the quality characteristics of osmodehydrated products. Because the process is carried out at mild temperatures and moisture is removed by liquid diffusion, the phase change present in other drying processes is avoided, yielding high-quality products and potentially substantial energy savings. Modeling of the mass-transfer phenomenon helps optimize the process for high product quality. Several techniques, such as microwave heating, vacuum, high pressure, and pulsed electric fields, may be employed during or after osmotic treatment to enhance its performance. New technologies used in osmotic dehydration are also discussed, as are patents on the osmotic dehydration of fruits and vegetables.
Solvent extraction of Cu, Mo, V, and U from leach solutions of copper ore and flotation tailings.
Smolinski, Tomasz; Wawszczak, Danuta; Deptula, Andrzej; Lada, Wieslawa; Olczak, Tadeusz; Rogowski, Marcin; Pyszynska, Marta; Chmielewski, Andrzej Grzegorz
2017-01-01
Flotation tailings from copper production are deposits of copper and other valuable metals, such as Mo, V, and U. New hydrometallurgical technologies are more economical and open up new possibilities for metal recovery. This work presents the results of a study on the extraction of copper with a mixed extractant consisting of p-toluidine dissolved in toluene. The possibility of simultaneous liquid-liquid extraction of molybdenum and vanadium was also examined; D2EHPA solutions were used as the extractant, and the recovery of the individual elements was compared for representative samples of ore and copper flotation tailings. Radiometric methods were applied for process optimization.
A review: aluminum nitride MEMS contour-mode resonator
NASA Astrophysics Data System (ADS)
Yunhong, Hou; Meng, Zhang; Guowei, Han; Chaowei, Si; Yongmei, Zhao; Jin, Ning
2016-10-01
Over the past several decades, micro-electromechanical system (MEMS) technology has advanced considerably, and the clear need for miniaturization and integration of electronic components has produced new solutions for the next generation of wireless communications. The aluminum nitride (AlN) MEMS contour-mode resonator (CMR) has emerged as promising and competitive due to its small size, high quality factor and frequency, low resistance, compatibility with integrated circuit (IC) technology, and ability to integrate multi-frequency devices on a single chip. This article presents a comprehensive review of AlN MEMS CMR technology, including its basic working principle, main structures, fabrication processes, and methods of performance optimization. The deposition and etching processes of the AlN film are specially emphasized, and recent advances in performance optimization of the CMR are presented through specific examples, mainly focused on temperature compensation and the reduction of anchor losses. The review concludes with an assessment of the challenges and future trends of the CMR. Project supported by National Natural Science Foundation (Nos. 61274001, 61234007, 61504130), the Nurturing and Development Special Projects of Beijing Science and Technology Innovation Base's Financial Support (No. Z131103002813070), and the National Defense Science and Technology Innovation Fund of CAS (No. CXJJ-14-M32).
Formation of curcumin nanoparticles via solution-enhanced dispersion by supercritical CO2
Zhao, Zheng; Xie, Maobin; Li, Yi; Chen, Aizheng; Li, Gang; Zhang, Jing; Hu, Huawen; Wang, Xinyu; Li, Shipu
2015-01-01
In order to enhance the bioavailability of poorly water-soluble curcumin, solution-enhanced dispersion by supercritical carbon dioxide (CO2) (SEDS) was employed to prepare curcumin nanoparticles for the first time. A 2^4 full factorial experiment was designed to determine the optimal processing parameters and their influence on the size of the curcumin nanoparticles. Particle size was demonstrated to increase with increased temperature or flow rate of the solution, or with decreased precipitation pressure, over the range of processing conditions considered. The single effect of solution concentration on particle size was not significant. Curcumin nanoparticles with a spherical shape and a smallest mean particle size of 325 nm were obtained under the following optimal processing conditions: P = 20 MPa, T = 35°C, flow rate of solution = 0.5 mL·min−1, concentration of solution = 0.5%. Fourier transform infrared (FTIR) spectroscopy measurements revealed that the chemical composition of curcumin remained essentially unchanged. Nevertheless, X-ray powder diffraction (XRPD) and thermal analysis indicated that the crystallinity of the original curcumin decreased after the SEDS process. The solubility and dissolution rate of the curcumin nanoparticles were found to be higher than those of the original curcumin powder (approximately 1.4 μg/mL vs 0.2 μg/mL in 180 minutes). This study revealed that supercritical CO2 technologies have great potential for fabricating nanoparticles and improving the bioavailability of poorly water-soluble drugs. PMID:25995627
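How main effects fall out of a 2^4 full factorial design can be sketched as follows. The response function below is hypothetical, with signs merely chosen to mirror the reported trends (size increasing with temperature and flow rate, decreasing with pressure, concentration negligible):

```python
from itertools import product

def particle_size(p, t, f, c):
    """Hypothetical response (nm) for coded factor levels (-1/+1):
    pressure, temperature, flow rate, concentration."""
    return 400 - 30 * p + 25 * t + 15 * f + 0 * c + 5 * t * f

# all 16 runs of the 2^4 design, coded -1 (low) / +1 (high)
runs = list(product([-1, 1], repeat=4))
y = [particle_size(*r) for r in runs]

def main_effect(i):
    """Mean response at the high level minus mean at the low level."""
    hi = sum(yy for r, yy in zip(runs, y) if r[i] == 1) / 8
    lo = sum(yy for r, yy in zip(runs, y) if r[i] == -1) / 8
    return hi - lo

effects = [main_effect(i) for i in range(4)]  # [pressure, temp, flow, conc]
```

Because the design is balanced, each main effect isolates one factor even in the presence of the t-f interaction term, and the concentration effect comes out as exactly zero, consistent with the abstract's finding that its single effect was not significant.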
Kucuker, Mehmet Ali; Wieczorek, Nils; Kuchta, Kerstin; Copty, Nadim K.
2017-01-01
In recent years, biosorption has been considered an environmentally friendly technology for the recovery of rare earth elements (REE). This study investigates the optimal conditions for the biosorption of neodymium (Nd) from an aqueous solution derived from hard-disk-drive magnets using the green microalga Chlorella vulgaris. The parameters considered include solution pH, temperature, and biosorbent dosage. Best-fit equilibrium and kinetic biosorption models were also developed. At the optimal pH of 5, the maximum experimental Nd uptakes at 21, 35, and 50°C and an initial Nd concentration of 250 mg/L were 126.13, 157.40, and 77.10 mg/g, respectively. Analysis of the optimal equilibrium sorption data showed that the data fitted the Langmuir isotherm model well (R2 = 0.98), with a maximum monolayer coverage capacity (qmax) of 188.68 mg/g and a Langmuir isotherm constant (KL) of 0.029 L/mg. The corresponding separation factor (RL) is 0.12, indicating that the equilibrium sorption was favorable. The sorption kinetics of the Nd ion follow a pseudo-second-order model well (R2 > 0.99), even at low initial concentrations. These results show that Chlorella vulgaris has a greater biosorption affinity for Nd than activated carbon and other algae such as A. gracilis, Sargassum sp., and A. densus. PMID:28388641
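The reported Langmuir parameters can be checked directly. The sketch below uses the fitted qmax and KL from the abstract; the separation factor RL = 1/(1 + KL*C0) at the initial concentration of 250 mg/L reproduces the reported 0.12:

```python
def langmuir_q(ce, qmax=188.68, kl=0.029):
    """Langmuir equilibrium uptake q_e = qmax*KL*Ce / (1 + KL*Ce), in mg/g,
    for an equilibrium concentration Ce in mg/L."""
    return qmax * kl * ce / (1.0 + kl * ce)

def separation_factor(c0, kl=0.029):
    """R_L = 1/(1 + KL*C0); values between 0 and 1 indicate favorable sorption."""
    return 1.0 / (1.0 + kl * c0)

rl = separation_factor(250.0)  # ~0.12, matching the reported value
```

The isotherm also behaves as the fit implies: uptake rises monotonically with concentration and saturates at the monolayer capacity qmax for large Ce.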
Library-based illumination synthesis for critical CMOS patterning.
Yu, Jue-Chin; Yu, Peichen; Chao, Hsueh-Yung
2013-07-01
In optical microlithography, the illumination source for critical complementary metal-oxide-semiconductor layers needs to be determined in the early stage of a technology node with very limited design information, leading to simple binary shapes. Recently, the availability of freeform sources permits us to increase pattern fidelity and relax mask complexities with minimal insertion risks to the current manufacturing flow. However, source optimization across many patterns is often treated as a design-of-experiments problem, which may not fully exploit the benefits of a freeform source. In this paper, a rigorous source-optimization algorithm is presented via linear superposition of optimal sources for pre-selected patterns. We show that analytical solutions are made possible by using Hopkins formulation and quadratic programming. The algorithm allows synthesized illumination to be linked with assorted pattern libraries, which has a direct impact on design rule studies for early planning and design automation for full wafer optimization.
Pricing Resources in LTE Networks through Multiobjective Optimization
Lai, Yung-Liang; Jiang, Jehn-Ruey
2014-01-01
The LTE technology offers versatile mobile services that use different numbers of resources. This enables operators to provide subscribers or users with differential quality of service (QoS) to boost their satisfaction. On one hand, LTE operators need to price the resources high for maximizing their profits. On the other hand, pricing also needs to consider user satisfaction with allocated resources and prices to avoid “user churn,” which means subscribers will unsubscribe services due to dissatisfaction with allocated resources or prices. In this paper, we study the pricing resources with profits and satisfaction optimization (PRPSO) problem in the LTE networks, considering the operator profit and subscribers' satisfaction at the same time. The problem is modelled as nonlinear multiobjective optimization with two optimal objectives: (1) maximizing operator profit and (2) maximizing user satisfaction. We propose to solve the problem based on the framework of the NSGA-II. Simulations are conducted for evaluating the proposed solution. PMID:24526889
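At the heart of the NSGA-II framework used here is non-dominated sorting. Below is a minimal sketch of extracting the first Pareto front for the two objectives (operator profit, user satisfaction), with toy candidate values standing in for evaluated price vectors:

```python
def dominates(a, b):
    """a dominates b if a is no worse in both (maximized) objectives
    and strictly better in at least one."""
    return a[0] >= b[0] and a[1] >= b[1] and (a[0] > b[0] or a[1] > b[1])

def pareto_front(points):
    """First non-dominated front: points no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# (operator profit, user satisfaction) for candidate pricings - toy numbers
candidates = [(10, 0.9), (14, 0.7), (12, 0.8), (9, 0.95), (14, 0.6), (11, 0.7)]
front = pareto_front(candidates)
```

The front contains the trade-off curve the operator must choose from: raising profit from 10 to 14 costs satisfaction (0.9 down to 0.7), while the dominated points, such as (14, 0.6), are strictly worse than some alternative and are discarded.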
NASA Astrophysics Data System (ADS)
Bolodurina, I. P.; Parfenov, D. I.
2017-10-01
The goal of our investigation is to optimize network operation in a virtual data center. The advantage of modern infrastructure virtualization lies in the possibility of using software-defined networks. However, existing algorithmic optimization solutions do not take into account the specific features of working with multiple classes of virtual network functions. This paper describes models characterizing the basic structures of a virtual data center, including a level-distribution model of the software-defined infrastructure of a virtual data center, a generalized model of a virtual network function, and a neural-network model for the identification of virtual network functions. We also developed an efficient algorithm for optimizing the containerization of virtual network functions in a virtual data center, and we propose an efficient algorithm for placing virtual network functions. Our investigation also generalizes the well-known heuristic and deterministic Karmarkar-Karp algorithms.
Algorithm of composing the schedule of construction and installation works
NASA Astrophysics Data System (ADS)
Nehaj, Rustam; Molotkov, Georgij; Rudchenko, Ivan; Grinev, Anatolij; Sekisov, Aleksandr
2017-10-01
An algorithm for scheduling works is developed in which the priority of a work corresponds to the total weight of its subordinate works (the vertices of the graph), and it is proved that for tree-type graphs the algorithm is optimal. An algorithm is synthesized to reduce the search for solutions when drawing up schedules of construction and installation works by allocating a minimum-power subset containing the optimal solution of the problem, determined by the structure of its initial data and numerical values. An algorithm for scheduling construction and installation works is developed that takes into account the schedule for the movement of brigades; it can efficiently minimize the work-completion time with respect to the parameters of organizational and technological reliability through the use of the branch-and-bound method. The computational algorithm was implemented in MATLAB 2008. The initial data matrices were filled with random numbers uniformly distributed between 1 and 100; solving the problem takes 0.5, 2.5, 7.5, and 27 minutes for the respective instances. Thus, the proposed method for estimating the lower bound of the solution is sufficiently accurate and allows efficient solution of the minimax task of scheduling construction and installation works.
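The priority rule above (priority of a work derived from the total weight of its subordinate works) can be sketched for a small hypothetical work tree; including each work's own weight in its priority is a modeling choice made here for illustration:

```python
# hypothetical work tree: parent -> subordinate works, with a weight per work
children = {"A": ["B", "C"], "B": ["D", "E"], "C": [], "D": [], "E": []}
weight = {"A": 2, "B": 1, "C": 4, "D": 3, "E": 2}

def priority(node):
    """Priority of a work: its own weight plus the total weight of its
    subordinate subtree (recursive sum over the tree)."""
    return weight[node] + sum(priority(c) for c in children[node])

# schedule works with heavier subtrees first
order = sorted(children, key=priority, reverse=True)
```

On a tree, this ranking is what makes the greedy schedule well-behaved: a work whose subtree carries more total weight is released earlier, so its many subordinates are never starved behind a light branch.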
Mechanization for Optimal Landscape Reclamation
NASA Astrophysics Data System (ADS)
Vondráčková, Terezie; Voštová, Věra; Kraus, Michal
2017-12-01
Reclamation is a method for the ultimate utilization of land adversely affected by mining or other industrial activity. The paper explains the types of reclamation and the term "optimal reclamation", and reviews the technological options for the long-lasting process of mine-dump reclamation, from the removal of overlying rocks through transport and backfilling up to the follow-up remodelling of the mine-dump terrain. It also covers the technological units and equipment for dividing the stripping flow, and the design of the stripping flow with respect to optimal reclamation. We recommend applying logistic chains and mining simulation with follow-up reclamation to open-pit mines for the implementation of optimal reclamation. In addition to a database of local heterogeneities of the stripped soil and reclaimed land, the flow of earths should be organized so that the most suitable soil substrate is created for the restoration of agricultural and forest land on mine dumps. The methodology under development addresses a number of problems, including the geological survey of overlying rocks, the extraction of strippings, their transport, and their backfilling in specified locations, with the follow-up deployment of goal-directed reclamation. It will make it possible to reduce the financial resources needed for the complex process chain by utilizing GIS, GPS, and DGPS technologies, logistic tools, and synergistic effects. When selecting machines for transporting, moving, and spreading earths, various aspects must be taken into account, e.g. the kind of earth to be handled by the respective construction machine, the kind of work activities to be performed, the machine's capacity, the options for controlling the machine's implement, economic aspects, and clients' requirements.
All these points of view must be considered in the decision-making process so that the selected machine is capable of executing the required activity and that the use of an unsuitable machine is eliminated as it would result in a delay and increase in the project costs. Therefore, reclamation always includes extensive earth-moving work activities restoring the required relief of the land being reclaimed. Using the earth-moving machine capacity, the kind of soil in mine dumps, the kind of the work activity performed and the machine design, a SW application has been developed that allows the most suitable machine for the respective work technology to be selected with a view to preparing the land intended for reclamation.
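The core selection logic of such an application might look like the following sketch: a feasibility filter over soil kind and work activity, followed by a cost-per-capacity ranking. All machine records, field names, and figures below are hypothetical, not data from the paper's software.

```python
machines = [
    {"name": "dozer-1", "soils": {"clay", "loam"}, "activities": {"spreading"},
     "capacity_m3_h": 120, "cost_per_h": 90.0},
    {"name": "scraper-2", "soils": {"loam"}, "activities": {"moving", "spreading"},
     "capacity_m3_h": 200, "cost_per_h": 180.0},
    {"name": "grader-3", "soils": {"clay", "loam", "sand"}, "activities": {"spreading"},
     "capacity_m3_h": 80, "cost_per_h": 50.0},
]

def select_machine(machines, soil, activity):
    # Unsuitable machines are excluded outright rather than force-fitted.
    feasible = [m for m in machines
                if soil in m["soils"] and activity in m["activities"]]
    if not feasible:
        return None
    # Rank the feasible machines by hourly cost per cubic metre handled.
    return min(feasible, key=lambda m: m["cost_per_h"] / m["capacity_m3_h"])

best = select_machine(machines, "clay", "spreading")
print(best["name"])  # → grader-3
```

Additional criteria from the text (implement control, client requirements) would enter either as further filter predicates or as weighted terms in the ranking key.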
Liu, Jianguo; Yang, Bo; Chen, Changzhen
2013-02-01
The optimization of operating parameters for the isolation of peroxidase from horseradish (Armoracia rusticana) roots with ultrafiltration (UF) technology was systemically studied. The effects of UF operating conditions on the transmission of proteins were quantified using the parameter scanning UF. These conditions included solution pH, ionic strength, stirring speed and permeate flux. Under optimized conditions, the purity of horseradish peroxidase (HRP) obtained was greater than 84 % after a two-stage UF process and the recovery of HRP from the feedstock was close to 90 %. The resulting peroxidase product was then analysed by isoelectric focusing, SDS-PAGE and circular dichroism, to confirm its isoelectric point, molecular weight and molecular secondary structure. The effects of calcium ion on HRP specific activities were also experimentally determined.
Optimal Information Extraction of Laser Scanning Dataset by Scale-Adaptive Reduction
NASA Astrophysics Data System (ADS)
Zang, Y.; Yang, B.
2018-04-01
3D laser scanning is widely used to collect the surface information of objects. For many applications, we need to extract a point cloud of good perceptual quality from the scanned points. Most existing methods extract important points at a fixed scale, yet the geometric features of a 3D object arise at various geometric scales. We propose a multi-scale construction method based on radial basis functions. At each scale, important points are extracted from the point cloud according to their importance, and a perceptual metric, the Just-Noticeable-Difference, measures the degradation of each geometric scale. Scale-adaptive optimal information extraction is thus realized. Experiments evaluating the effectiveness of the proposed method suggest a reliable solution for optimal information extraction from scanned objects.
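A crude stand-in for the importance-driven extraction step (not the paper's RBF multi-scale construction or JND metric) is to score each point by its distance from the centroid of its k nearest neighbours, a rough saliency proxy, and keep only the highest-scoring fraction. The point data below are invented.

```python
import math

def knn(points, i, k):
    """Indices of the k nearest neighbours of point i (excluding i itself)."""
    d = sorted(range(len(points)),
               key=lambda j: math.dist(points[i], points[j]))
    return d[1:k + 1]

def importance(points, i, k=3):
    """Distance from point i to the centroid of its k nearest neighbours."""
    nbrs = knn(points, i, k)
    cx = sum(points[j][0] for j in nbrs) / k
    cy = sum(points[j][1] for j in nbrs) / k
    cz = sum(points[j][2] for j in nbrs) / k
    return math.dist(points[i], (cx, cy, cz))

def reduce_cloud(points, keep_fraction=0.5, k=3):
    """Keep the most important fraction of the cloud, by saliency score."""
    ranked = sorted(range(len(points)),
                    key=lambda i: -importance(points, i, k))
    n_keep = max(1, int(len(points) * keep_fraction))
    return sorted(ranked[:n_keep])

# Flat 3x3 grid plus one spike: the spike should survive the reduction.
cloud = [(x, y, 0.0) for x in range(3) for y in range(3)]
cloud[4] = (1.0, 1.0, 2.0)  # centre point lifted off the plane
print(4 in reduce_cloud(cloud, keep_fraction=0.3))  # → True
```

In the paper's scheme this scoring would run once per geometric scale, with the JND metric deciding how aggressively each scale may be reduced.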
Optimization of biodiesel supply chains based on small farmers: a case study in Brazil.
Leão, Raphael Riemke de Campos Cesar; Hamacher, Silvio; Oliveira, Fabrício
2011-10-01
This article presents a methodology for conceiving and planning the development of an optimized supply chain for a biodiesel plant sourced from family farms, taking into consideration agricultural, logistic, industrial, and social aspects. The model was successfully applied to the production chain of biodiesel fuel from castor oil in the semi-arid region of Brazil. Results offer important insights into the optimal configuration of the crushing units, regarding their location, technology, and when they should be available, as well as the configuration of the production zones along the planning horizon considered. Moreover, a sensitivity analysis measures how possible variations in the assumed conditions affect the robustness of the solutions. Copyright © 2011 Elsevier Ltd. All rights reserved.
Advancement of X-Ray Microscopy Technology and its Application to Metal Solidification Studies
NASA Technical Reports Server (NTRS)
Kaukler, William F.; Curreri, Peter A.
1996-01-01
The technique of x-ray projection microscopy is being used to view, in real time, the structures and dynamics of the solid-liquid interface during solidification. By employing a hard x-ray source with sub-micron dimensions, resolutions of 2 micrometers can be obtained with magnifications of over 800 X. Specimen growth conditions need to be optimized and the best imaging technologies applied to maintain x-ray image resolution, contrast and sensitivity. It turns out that no single imaging technology offers the best solution and traditional methods like radiographic film cannot be used due to specimen motion (solidification). In addition, a special furnace design is required to permit controlled growth conditions and still offer maximum resolution and image contrast.
Fuel consumption optimization for smart hybrid electric vehicle during a car-following process
NASA Astrophysics Data System (ADS)
Li, Liang; Wang, Xiangyu; Song, Jian
2017-03-01
Hybrid electric vehicles (HEVs) offer large potential to save energy and reduce emissions, and smart vehicles bring great convenience and safety for drivers. By combining these two technologies, vehicles may achieve excellent performance in terms of dynamics, economy, environmental friendliness, safety, and comfort. Hence, a smart hybrid electric vehicle (s-HEV) is selected as the platform in this paper to study a car-following process that optimizes fuel consumption. The whole process is a multi-objective optimal control problem whose solution is not simply an energy management strategy (EMS) added to an adaptive cruise control (ACC), but a deep fusion of the two methods. The problem has more constraints, optimization objectives, and system states, which may result in a larger computing burden. Therefore, a novel fuel consumption optimization algorithm based on model predictive control (MPC) is proposed, and search heuristics are adopted in the receding-horizon optimization to reduce the computing burden. Simulation results indicate that the fuel consumption of the proposed method is lower than that of the ACC+EMS method while car-following performance is maintained.
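The receding-horizon idea can be shown with a deliberately tiny sketch: at each step the follower exhaustively scores short acceleration sequences against a gap-tracking error plus a fuel proxy (|a|), applies only the first move, and re-plans. The point-mass model, weights, and exhaustive search are illustrative stand-ins for the paper's HEV model and pruned MPC search.

```python
from itertools import product

def plan(gap, v_rel, horizon=3, accels=(-1.0, 0.0, 1.0),
         target_gap=20.0, w_fuel=0.1, dt=1.0):
    """Return the best first acceleration among all horizon-length sequences."""
    best_cost, best_a0 = float("inf"), accels[0]
    for seq in product(accels, repeat=horizon):
        g, vr, cost = gap, v_rel, 0.0
        for a in seq:
            vr += a * dt        # follower speed relative to the leader
            g -= vr * dt        # positive relative speed closes the gap
            cost += (g - target_gap) ** 2 + w_fuel * abs(a)
        if cost < best_cost:
            best_cost, best_a0 = cost, seq[0]
    return best_a0

# Follower is 10 m beyond the 20 m target gap and matching speed:
# the planner chooses to accelerate.
print(plan(gap=30.0, v_rel=0.0))  # → 1.0
```

In a closed loop, `plan` would be called once per control step with the measured gap and relative speed; the search skills mentioned in the abstract correspond to pruning this enumeration rather than scoring every sequence.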
NASA Technical Reports Server (NTRS)
Rash, James L.
2010-01-01
NASA's space data-communications infrastructure, the Space Network and the Ground Network, provide scheduled (as well as some limited types of unscheduled) data-communications services to user spacecraft via orbiting relay satellites and ground stations. An implementation of the methods and algorithms disclosed herein will be a system that produces globally optimized schedules with not only optimized service delivery by the space data-communications infrastructure but also optimized satisfaction of all user requirements and prescribed constraints, including radio frequency interference (RFI) constraints. Evolutionary search, a class of probabilistic strategies for searching large solution spaces, constitutes the essential technology in this disclosure. Also disclosed are methods and algorithms for optimizing the execution efficiency of the schedule-generation algorithm itself. The scheduling methods and algorithms as presented are adaptable to accommodate the complexity of scheduling the civilian and/or military data-communications infrastructure. Finally, the problem itself, and the methods and algorithms, are generalized and specified formally, with applicability to a very broad class of combinatorial optimization problems.
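Evolutionary search over schedules can be sketched with a toy (mu + lambda) loop: each candidate assigns communication requests to one of two relay resources, and point mutations on the best candidates reduce the latest finish time. The request durations and population parameters are invented, and the real scheduler's RFI and user-requirement constraints are omitted.

```python
import random

def makespan(assignment, durations, n_resources):
    """Latest finish time if each resource serves its requests back to back."""
    load = [0.0] * n_resources
    for req, res in enumerate(assignment):
        load[res] += durations[req]
    return max(load)

def evolve(durations, n_resources=2, pop_size=20, generations=200, seed=1):
    rng = random.Random(seed)
    n = len(durations)
    pop = [[rng.randrange(n_resources) for _ in range(n)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: makespan(a, durations, n_resources))
        parents = pop[:pop_size // 2]          # keep the best half (elitism)
        children = []
        for p in parents:
            child = p[:]
            child[rng.randrange(n)] = rng.randrange(n_resources)  # mutation
            children.append(child)
        pop = parents + children
    best = min(pop, key=lambda a: makespan(a, durations, n_resources))
    return best, makespan(best, durations, n_resources)

durations = [4, 3, 3, 2, 2, 2]  # total 16, so a perfect 2-resource split is 8
best, span = evolve(durations)
print(span)
```

The elitist loop typically recovers the balanced split of 8; it is always bounded below by half the total work. A production scheduler would add crossover, constraint-violation penalties, and the execution-efficiency optimizations the disclosure describes.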
Engineering solutions for polymer composites solar water heaters production
NASA Astrophysics Data System (ADS)
Frid, S. E.; Arsatov, A. V.; Oshchepkov, M. Yu.
2016-06-01
Engineering solutions aimed at a considerable reduction in the cost of solar water heaters, via the use of polymer composites in heater construction and the integration of the solar collector and heat storage into a single device, are analyzed. It is shown that a solar water heater can be built from only three components, and that welding, soldering, mechanical treatment, and assembly of a complicated construction can be replaced by molding large polymer-composite components and gluing them together. Materials for the unit components and engineering solutions for their manufacture are analyzed against the construction requirements of solar water heaters. The optimal materials are glass-fiber and carbon-filled plastics based on hot-cure thermosets, and the optimal molding technology is hot molding. The absorbing panel should be manufactured corrugated, with a special paint as its selective coating. Parameters of the unit have been optimized by calculation, and the developed two-dimensional numerical model of the unit demonstrates good agreement with experiment. The optimal ratio of daily load to receiving surface area for a solar water heater operating on a clear summer day in the midland of Russia is 130‒150 L/m2. Storage tank volume and load schedule have only a slight effect on solar water heater output, and a thermal insulation layer of 35‒40 mm is sufficient to provide efficient thermal insulation of the back and side walls. An experimental model layout, representing a solar water heater prototype with a prime cost of 70‒90/(m2 receiving surface), has been developed for a manufacturing volume of no less than 5000 pieces per year.
NASA Astrophysics Data System (ADS)
Kunsági-Máté, Sándor; Ortmann, Erika; Kollár, László; Nikfardjam, Martin Pour
2008-09-01
The complex formation of malvidin-3-O-glucoside with several polyphenols, the so-called "copigmentation" phenomenon, was studied in aqueous solutions. To simulate the copigmentation process during fermentation, the stability of the formed complexes was examined as a function of the ethanol content of the aqueous solution. Results indicate that stronger and larger complexes are formed when the ethanol content exceeds a critical margin of 8 vol.%. However, the size of the malvidin/procyanidin and malvidin/epicatechin complexes is drastically reduced above this critical concentration. Fluorescence lifetime and solvent relaxation measurements give insight into the underlying processes at the molecular level and will help clarify the first important steps during winemaking, in order to recommend an optimized winemaking technology that ensures extraordinary colour stability in red wines.
Opera: reconstructing optimal genomic scaffolds with high-throughput paired-end sequences.
Gao, Song; Sung, Wing-Kin; Nagarajan, Niranjan
2011-11-01
Scaffolding, the problem of ordering and orienting contigs, typically using paired-end reads, is a crucial step in the assembly of high-quality draft genomes. Even as sequencing technologies and mate-pair protocols have improved significantly, scaffolding programs still rely on heuristics, with no guarantees on the quality of the solution. In this work, we explored the feasibility of an exact solution for scaffolding and present a first tractable solution for this problem (Opera). We also describe a graph contraction procedure that allows the solution to scale to large scaffolding problems and demonstrate this by scaffolding several large real and synthetic datasets. In comparisons with existing scaffolders, Opera simultaneously produced longer and more accurate scaffolds demonstrating the utility of an exact approach. Opera also incorporates an exact quadratic programming formulation to precisely compute gap sizes (Availability: http://sourceforge.net/projects/operasf/ ).
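The gap-size computation can be illustrated with a least-squares toy: each paired-end link spanning one or more inter-contig gaps yields a noisy observation of the summed gap lengths. Opera formulates this as an exact quadratic program; the sketch below just solves the 2x2 normal equations for a hypothetical three-contig scaffold with two gaps, using invented link data.

```python
def estimate_gaps(links):
    """links: list of (spanned_gap_indices, observed_total_gap_length)."""
    # Build the normal equations A^T A g = A^T d for two unknown gaps g0, g1,
    # where each link's row of A marks which gaps it spans.
    ata = [[0.0, 0.0], [0.0, 0.0]]
    atd = [0.0, 0.0]
    for span, d in links:
        row = [1.0 if i in span else 0.0 for i in (0, 1)]
        for i in range(2):
            atd[i] += row[i] * d
            for j in range(2):
                ata[i][j] += row[i] * row[j]
    # Solve the 2x2 system by Cramer's rule.
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    g0 = (atd[0] * ata[1][1] - ata[0][1] * atd[1]) / det
    g1 = (ata[0][0] * atd[1] - atd[0] * ata[1][0]) / det
    return g0, g1

links = [
    ({0}, 100.0), ({0}, 110.0),   # mate pairs across gap 0
    ({1}, 200.0), ({1}, 190.0),   # mate pairs across gap 1
    ({0, 1}, 310.0),              # a long-insert link spanning both gaps
]
print(estimate_gaps(links))  # → (107.5, 197.5)
```

Weighting each squared residual by the insert-size variance of its library, as a real scaffolder would, turns this into the same kind of quadratic objective the abstract refers to.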
Flexibility Support for Homecare Applications Based on Models and Multi-Agent Technology
Armentia, Aintzane; Gangoiti, Unai; Priego, Rafael; Estévez, Elisabet; Marcos, Marga
2015-01-01
In developed countries, public health systems are under pressure due to the increasing percentage of population over 65. In this context, homecare based on ambient intelligence technology seems to be a suitable solution to allow elderly people to continue to enjoy the comforts of home and help optimize medical resources. Thus, current technological developments make it possible to build complex homecare applications that demand, among others, flexibility mechanisms for being able to evolve as context does (adaptability), as well as avoiding service disruptions in the case of node failure (availability). The solution proposed in this paper copes with these flexibility requirements through the whole life-cycle of the target applications: from design phase to runtime. The proposed domain modeling approach allows medical staff to design customized applications, taking into account the adaptability needs. It also guides software developers during system implementation. The application execution is managed by a multi-agent based middleware, making it possible to meet adaptation requirements, assuring at the same time the availability of the system even for stateful applications. PMID:26694416
NASA Astrophysics Data System (ADS)
Maduro, Miguelangel
The shift to strong hybrid and electrified vehicle architectures engenders controversy and raises many unanswered questions. It is unclear whether developed markets will have the infrastructure in place to support and successfully implement them. To date, limited effort has been made to determine whether the energy and transportation solutions that work well for one city or geographic region extend broadly. A region's capacity to supply a fleet of EVs or plug-in hybrid vehicles with the required charging infrastructure does not necessarily make such vehicle architectures an optimal solution. In this study, a mix of technologies ranging from HEV to PHEV and EREV through to battery electric vehicles was analyzed for three Canadian provinces and three U.S. regions for the year 2020. Environmental software tools developed by government agencies were used to estimate greenhouse gas emissions and energy use, and projected vehicle technology shares were employed to estimate regional environmental implications. Alternative vehicle technologies and fuels are recommended for each region based on local power generation schemes.
Dynamic beam steering at submm- and mm-wave frequencies using an optically controlled lens antenna
NASA Astrophysics Data System (ADS)
Gallacher, T. F.; Søndenâ, R.; Robertson, D. A.; Smith, G. M.
2013-05-01
We present details of our work which has been focused on improving the efficiency and scan rate of the photo-injected Fresnel zone plate antenna (piFZPA) technique which utilizes commercially available visible display technologies. This approach presents a viable low-cost solution for non-mechanical beam steering, suitable for many applications at (sub) mm-wave frequencies that require rapid beam steering capabilities in order to meet their technological goals, such as imaging, surveillance, and remote sensing. This method has the advantage of being comparatively low-cost, is based on a simple and flexible architecture, enabling rapid and precise arbitrary beam forming, and which is scalable to higher frame-rates and higher submm-wave frequencies. We discuss the various optimization stages of a range of piFZPA designs that implement fast visible projection displays, enabling up to 30,000 beams per second. We also outline the suitability of this technology across mm-wave and submm-wave frequencies as a low-cost and simple solution for dynamic optoelectronic beam steering.
NASA Technical Reports Server (NTRS)
Hopkins, Dale A.
1992-01-01
The presentation gives a partial overview of research and development underway in the Structures Division of LeRC, collectively referred to as the Computational Structures Technology Program. The activities in the program are diverse and encompass four major categories: (1) composite materials and structures; (2) probabilistic analysis and reliability; (3) design optimization and expert systems; and (4) computational methods and simulation. The program's approach is comprehensive and entails: exploration of fundamental theories of structural mechanics to accurately represent the complex physics governing engine structural performance; formulation and implementation of computational techniques and integrated simulation strategies that provide accurate and efficient solutions of the governing theoretical models by exploiting emerging advances in computer technology; and validation and verification through numerical and experimental tests to establish confidence and define the qualities and limitations of the resulting theoretical models and computational solutions. The program comprises both in-house and sponsored research activities. The remainder of the presentation provides a sample of activities to illustrate the breadth and depth of the program and to demonstrate the accomplishments and benefits that have resulted.
Sustainable Architecture in the Context of Regional Activities
NASA Astrophysics Data System (ADS)
Sołkeiewicz-Kos, Nina
2017-10-01
The relationship between man and the surrounding cultural environment directs attention in urban and architectural design to interdisciplinary research. As a result, designers should create architectural and urban solutions that provide aesthetic satisfaction, generate social bonds and a sense of identity, and maintain the specificity of the local building environment, where tradition and the context of the surroundings are the starting point for creating a sustainable living environment. The problems presented focus on the analysis of formal, functional and spatial solutions in which materials and technology were selected in an optimal way. The discussion then turns to the relationship between the use of local urban, architectural, material and technological solutions and the quality of cultural space that meets the principles of sustainable development. The adaptation and transformation of old techniques and traditional materials to create contemporary designs is one of the forms of experimentation encountered in contemporary architecture. Its economic, social and ecological aspects are realised in the form of satisfying the needs of the local community, renewing and maintaining modern standards in the surrounding buildings, and using local materials and available space. This means striving to design and transform space already in use while reducing the impact on the environment. The buildings and urban spaces analysed are an attempt to answer whether the strategies applied in architectural, technological and material solutions provide identification of the place and meet users' expectations.
Eckermann, Simon; Willan, Andrew R
2013-05-01
Risk sharing arrangements relate to adjusting payments for new health technologies given evidence of their performance over time. Such arrangements rely on prospective information regarding the incremental net benefit of the new technology and its use in practice. However, once the new technology has been adopted in a particular jurisdiction, randomized clinical trials within that jurisdiction are likely to be infeasible and unethical in the cases where they would be most helpful, i.e. with current evidence of positive but uncertain incremental health and net monetary benefit. Informed patients in these cases would likely be reluctant to participate in a trial, preferring instead to receive the new technology with certainty. Consequently, informing risk sharing arrangements within a jurisdiction is problematic given the infeasibility of collecting prospective trial data. To overcome such problems, we demonstrate that global trials facilitate trialling post adoption, leading to more complete and robust risk sharing arrangements that mitigate the impact of costs of reversal on expected value of information in jurisdictions that adopt while a global trial is undertaken. More generally, optimally designed global trials offer distinct advantages over locally optimal solutions for decision makers and manufacturers alike: avoiding opportunity costs of delay in jurisdictions that adopt; overcoming barriers to evidence collection; and improving levels of expected implementation. Further, the greater strength and translatability of evidence across jurisdictions inherent in optimal global trial design reduces the barriers to translation across jurisdictions characteristic of local trials. Consequently, efficiently designed global trials better align the interests of decision makers and manufacturers, increasing the feasibility of risk sharing and the expected strength of evidence over local trials, up until the point that current evidence is globally sufficient.
Structures, Material and Processes Technology in the Future Launchers Preparatory Program
NASA Astrophysics Data System (ADS)
Baiocco, P.; Ramusat, G.; Breteau, J.; Bouilly, Th.; Lavelle, Fl.; Cardone, T.; Fischer, H.; Appel, S.; Block, U.
2014-06-01
In the frame of the technology and demonstration activity for European launcher developments and evolutions, a combined top-down / bottom-up approach has been employed to identify promising technologies and alternative conceptions. The top-down approach looks for system-driven design solutions, while the bottom-up approach features design solutions leading to substantial advantages for the system. The main investigations have been devoted to structures, materials and process technology. Preliminary specifications have been used to permit sub-system design with the goal of finding the major benefit for the overall launch system. In this respect, competitiveness factors have been defined to down-select the technologies and the corresponding optimized designs. Development cost, non-recurring cost, industrialization and operational aspects have been considered in identifying the most interesting solutions. The TRL/IRL has been assessed depending on the manufacturing company, and a preliminary development plan has been issued for some technologies. The reference launch systems for the technology and demonstration programs are mainly Ariane 6 with its evolutions, VEGA C/E, and other possible longer-term systems. Requirements and reference structural architectures have been considered in order to state requirements for representative subscale or full-scale demonstrators. The major sub-systems and structures analyzed include the upper stage structures, the engine thrust frame (ETF), the inter-stage structures (ISS), the cryogenic propellant tanks, the feeding lines and their attachments, the pressurization systems, and the payload adapters and fairings. A specific analysis has been devoted to the efficiency of the production processes associated with these technologies and design features. The paper provides an overview of the main results of the technology and demonstration activities with the associated system benefits.
The materials used for the main structures are metallic and composite, according to the sub-systems or sub-assemblies proposed for the European launch systems in development and their evolutions.
Optimization of A(2)O BNR processes using ASM and EAWAG Bio-P models: model performance.
El Shorbagy, Walid E; Radif, Nawras N; Droste, Ronald L
2013-12-01
This paper presents the performance of an optimization model for a biological nutrient removal (BNR) system using the anaerobic-anoxic-oxic (A(2)O) process. The formulated model simulates removal of organics, nitrogen, and phosphorus using a reduced International Water Association (IWA) Activated Sludge Model #3 (ASM3) and a Swiss Federal Institute for Environmental Science and Technology (EAWAG) Bio-P module. Optimal sizing is attained considering capital and operational costs. Process performance is evaluated against the effect of influent conditions, effluent limits, and selected parameters of various optimal solutions, with the following results: an increase of influent temperature from 10 degrees C to 25 degrees C decreases the annual cost by about 8.5%; an increase of influent flow from 500 to 2500 m(3)/h triples the annual cost; the A(2)O BNR system is more sensitive to variations in influent ammonia than phosphorus concentration; and the maximum growth rate of autotrophic biomass was the most sensitive kinetic parameter in the optimization model.
Fuel optimal maneuvers for spacecraft with fixed thrusters
NASA Technical Reports Server (NTRS)
Carter, T. C.
1982-01-01
Several mathematical models, including a minimum integral square criterion problem, were used for the qualitative investigation of fuel optimal maneuvers for spacecraft with fixed thrusters. The solutions consist of intervals of "full thrust" and "coast" indicating that thrusters do not need to be designed as "throttleable" for fuel optimal performance. For the primary model considered, singular solutions occur only if the optimal solution is "pure translation". "Time optimal" singular solutions can be found which consist of intervals of "coast" and "full thrust". The shape of the optimal fuel consumption curve as a function of flight time was found to depend on whether or not the initial state is in the region admitting singular solutions. Comparisons of fuel optimal maneuvers in deep space with those relative to a point in circular orbit indicate that qualitative differences in the solutions can occur. Computation of fuel consumption for certain "pure translation" cases indicates that considerable savings in fuel can result from the fuel optimal maneuvers.
Benzotriazole removal on post-Cu CMP cleaning
NASA Astrophysics Data System (ADS)
Jiying, Tang; Yuling, Liu; Ming, Sun; Shiyan, Fan; Yan, Li
2015-06-01
This work systematically investigates the effect of FA/O II chelating agent and FA/O I surfactant in alkaline cleaning solutions on benzotriazole (BTA) removal during post-Cu CMP cleaning in GLSI under static etching conditions. The best detergent formulation for BTA removal was determined through single-factor and compound cleaning-solution experiments and further confirmed by contact angle (CA) measurements. The resulting solution was tested on an actual production line, and the results demonstrate that it can effectively and efficiently remove BTA, CuO and abrasive SiO2 essentially without causing interfacial corrosion. This work demonstrates the possibility of developing a simple, low-cost and environmentally friendly cleaning solution that effectively solves the issue of BTA removal in post-Cu CMP cleaning of multi-layered copper wafers. Project supported by the Major National Science and Technology Special Projects (No. 2009ZX02308).
Long-Term Planning for Nuclear Energy Systems Under Deep Uncertainty
NASA Astrophysics Data System (ADS)
Kim, Lance Kyungwoo
Long-term planning for nuclear energy systems has been an area of interest for policy planners and systems designers to assess and manage the complexity of the system and the long-term, wide-ranging societal impacts of decisions. However, traditional planning tools are often poorly equipped to cope with the deep parametric, structural, and value uncertainties in long-term planning. A more robust, multiobjective decision-making method is applied to a model of the nuclear fuel cycle to address the many sources of complexity, uncertainty, and ambiguity inherent to long-term planning. Unlike prior studies that rely on assessing the outcomes of a limited set of deployment strategies, solutions in this study arise from optimizing behavior against multiple incommensurable objectives, utilizing goal-seeking multiobjective evolutionary algorithms to identify minimax regret solutions across various demand scenarios. By excluding inferior and infeasible solutions, the choice between the Pareto optimal solutions depends on a decision-maker's preferences for the defined outcomes---limiting analyst bias and increasing transparency. Though simplified by the necessity of reducing computational burdens, the nuclear fuel cycle model captures important phenomena governing the behavior of the nuclear energy system relevant to the decision to close the fuel cycle---incorporating reactor population dynamics, material stocks and flows, constraints on material flows, and outcomes of interest to decision-makers. Technology neutral performance criteria are defined consistent with the Generation IV International Forum goals of improved security and proliferation resistance based on structural features of the nuclear fuel cycle, natural resource sustainability, and waste production. 
A review of safety risks and the economic history of the development of nuclear technology suggests that safety and economic criteria may not be decisive, as the safety risks posed by alternative fuel cycles may be comparable in aggregate and economic performance is uncertain and path dependent. Technology strategies impacting reactor lifetimes and advanced reactor introduction dates are evaluated against high, medium, and phase-out scenarios of nuclear energy demand. Non-dominated, minimax regret solutions are found with the NSGA-II multiobjective evolutionary algorithm. Results suggest that more aggressive technology strategies featuring the early introduction of breeder and burner reactors, possibly combined with lifetime extension of once-through systems, tend to dominate less aggressive strategies under more demanding growth scenarios over the next century. Less aggressive technology strategies that delay burning and breeding tend to be clustered in the minimax regret space, suggesting greater sensitivity to shifts in preferences. Lifetime extension strategies can unexpectedly result in fewer deployments of once-through systems, permitting the growth of advanced systems to meet demand. Both breeders and burners are important for controlling plutonium inventories: breeders achieve lower inventories in storage by locking material in reactor cores, while burners can reduce the total inventory in the system. Other observations include the indirect impacts of some performance measures, the relatively small impact of technology strategies on the waste properties of all material in the system, and the difficulty of phasing out nuclear energy while meeting all objectives with the specified technology options.
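The minimax-regret selection used in the study can be illustrated in a few lines: for a cost table of technology strategies across demand scenarios, compute each strategy's regret against the scenario-best outcome and choose the strategy minimizing worst-case regret. The strategy names and cost figures below are invented, not results from the dissertation.

```python
def minimax_regret(costs):
    """costs[strategy] -> list of costs per scenario; lower is better."""
    n_scen = len(next(iter(costs.values())))
    # Best achievable cost in each scenario, over all strategies.
    best = [min(c[s] for c in costs.values()) for s in range(n_scen)]
    # Each strategy's worst-case regret relative to the scenario-best.
    worst_regret = {
        name: max(c[s] - best[s] for s in range(n_scen))
        for name, c in costs.items()
    }
    return min(worst_regret, key=worst_regret.get), worst_regret

costs = {                     # scenarios: high, medium, phase-out demand
    "aggressive": [10.0, 12.0, 18.0],
    "moderate":   [14.0, 11.0, 13.0],
    "phaseout":   [25.0, 20.0,  9.0],
}
choice, regrets = minimax_regret(costs)
print(choice)  # → moderate
```

The hedging behaviour is visible in the numbers: "moderate" is best in no single scenario, yet its worst-case regret (4.0) beats both specialists (9.0 and 15.0). In the study this scalar comparison is applied per objective within the NSGA-II population rather than to a single cost.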
Design Optimization of a Variable-Speed Power Turbine
NASA Technical Reports Server (NTRS)
Hendricks, Eric S.; Jones, Scott M.; Gray, Justin S.
2014-01-01
NASA's Rotary Wing Project is investigating technologies that will enable the development of revolutionary civil tilt rotor aircraft. Previous studies have shown that for large tilt rotor aircraft to be viable, the rotor speeds need to be slowed significantly during the cruise portion of the flight. This requirement to slow the rotors during cruise presents an interesting challenge to the propulsion system designer as efficient engine performance must be achieved at two drastically different operating conditions. One potential solution to this challenge is to use a transmission with multiple gear ratios and shift to the appropriate ratio during flight. This solution will require a large transmission that is likely to be maintenance intensive and will require a complex shifting procedure to maintain power to the rotors at all times. An alternative solution is to use a fixed gear ratio transmission and require the power turbine to operate efficiently over the entire speed range. This concept is referred to as a variable-speed power-turbine (VSPT) and is the focus of the current study. This paper explores the design of a variable speed power turbine for civil tilt rotor applications using design optimization techniques applied to NASA's new meanline tool, the Object-Oriented Turbomachinery Analysis Code (OTAC).
Nonreciprocal Signal Routing in an Active Quantum Network
NASA Astrophysics Data System (ADS)
Tureci, Hakan E.; Metelmann, Anja
As superconductor quantum technologies are moving towards large-scale integrated circuits, a robust and flexible approach to routing photons at the quantum level becomes a critical problem. Active circuits, which contain driven linear or non-linear elements judiciously embedded in the circuit offer a viable solution. We present a general strategy for routing non-reciprocally quantum signals between two sites of a given lattice of resonators, implementable with existing superconducting circuit components. Our approach makes use of a dual lattice of superconducting non-linear elements on the links connecting the nodes of the main lattice. Solutions for spatially selective driving of the link-elements can be found, which optimally balance coherent and dissipative hopping of microwave photons to non-reciprocally route signals between two given nodes. In certain lattices these optimal solutions are obtained at the exceptional point of the scattering matrix of the network. The presented strategy provides a design space that is governed by a dynamically tunable non-Hermitian generator that can be used to minimize the added quantum noise as well. This work was supported by the U.S. Army Research Office (ARO) under Grant No. W911NF-15-1-0299.
NASA Technical Reports Server (NTRS)
Pavarini, C.
1974-01-01
Work in two somewhat distinct areas is presented. First, the optimal system design problem for a Mars-roving vehicle is attacked by creating static system models and a system evaluation function and optimizing via nonlinear programming techniques. The second area concerns the problem of perturbed-optimal solutions. Given an initial perturbation in an element of the solution to a nonlinear programming problem, a linear method is determined to approximate the optimal readjustments of the other elements of the solution. Then, the sensitivity of the Mars rover designs is described by application of this method.
A building-block approach to 3D printing a multichannel, organ-regenerative scaffold.
Wang, Xiaohong; Rijff, Boaz Lloyd; Khang, Gilson
2017-05-01
Multichannel scaffolds, formed by rapid prototyping technologies, hold high potential for regenerative medicine and the manufacture of complex organs. This study aims to optimize several parameters for producing poly(lactic-co-glycolic acid) (PLGA) scaffolds by a low-temperature deposition manufacturing three-dimensional printing (3DP, or rapid prototyping) system. The concentration of the synthetic polymer solution, nozzle speed and extrusion rate were analysed and discussed. A polymer solution with a concentration of 12% w/v was determined as optimal for formation; large deviations from this figure failed to maintain the desired structure. The extrusion rate was also modified for better construct quality. Finally, several solid organ scaffolds, such as the liver, with proper wall thickness and intact contour were printed. This study gives basic instruction for designing and fabricating scaffolds with de novo material systems, particularly by showing the approximation of variables for manufacturing multichannel PLGA scaffolds. Copyright © 2015 John Wiley & Sons, Ltd.
Crystallization in lactose refining-a review.
Wong, Shin Yee; Hartel, Richard W
2014-03-01
In the dairy industry, crystallization is an important separation process used in the refining of lactose from whey solutions. In the refining operation, lactose crystals are separated from the whey solution through nucleation, growth, and/or aggregation. The rate of crystallization is determined by the combined effect of crystallizer design, processing parameters, and impurities on the kinetics of the process. This review summarizes studies on lactose crystallization, including the mechanism, theory of crystallization, and the impact of various factors affecting the crystallization kinetics. In addition, an overview of the industrial crystallization operation highlights the problems faced by the lactose manufacturer. The approaches that are beneficial to the lactose manufacturer for process optimization or improvement are summarized in this review. Over the years, much knowledge has been acquired through extensive research. However, the industrial crystallization process is still far from optimized. Therefore, future effort should focus on transferring the new knowledge and technology to the dairy industry. © 2014 Institute of Food Technologists®
A Full-Featured, User-Friendly CO2-EOR and Sequestration Planning Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savage, Bill
A Full-Featured, User-Friendly CO2-EOR and Sequestration Planning Software. This project addressed the development of an integrated software solution that includes a graphical user interface, numerical simulation, visualization tools and optimization processes for reservoir simulation modeling of CO2-EOR. The objective was to assist the industry in the development of domestic energy resources by expanding the application of CO2-EOR technologies, and ultimately to maximize the CO2 sequestration capacity of the U.S. The software resulted in a field-ready application for the industry to address current CO2-EOR technologies. The software has been made available to the public without restrictions and with user-friendly operating documentation and tutorials. The software (executable only) can be downloaded from NITEC's website at www.nitecllc.com. This integrated solution enables the design, optimization and operation of CO2-EOR processes for small and mid-sized operators, who currently cannot afford the expensive, time-intensive solutions that the major oil companies enjoy. Based on one estimate, small oil fields comprise 30% of the total economic resource potential for the application of CO2-EOR processes in the U.S. This corresponds to 21.7 billion barrels of incremental, technically recoverable oil using the current "best practices", and 31.9 billion barrels using "next-generation" CO2-EOR techniques. The project included a Case Study of a prospective CO2-EOR candidate field in Wyoming by a small independent, Linc Energy Petroleum Wyoming, Inc. NITEC LLC has an established track record of developing innovative and user-friendly software. The Principal Investigator is an experienced manager and engineer with expertise in software development, numerical techniques, and GUI applications. Unique, presently proprietary NITEC technologies have been integrated into this application to further its ease of use and technical functionality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dando, Neal; Gershenzon, Mike; Ghosh, Rajat
2012-07-31
The overall goal of this DOE Phase 2 project was to further develop and conduct pilot-scale and field testing of a biomimetic in-duct scrubbing system for the capture of gaseous CO2 coupled with sequestration of captured carbon by carbonation of alkaline industrial wastes. The Phase 2 project, reported on here, combined efforts in enzyme development, scrubber optimization, and sequestrant evaluations to perform an economic feasibility study of technology deployment. The optimization of carbonic anhydrase (CA) enzyme reactivity and stability are critical steps in deployment of this technology. A variety of CA enzyme variants were evaluated for reactivity and stability in both bench-scale and laboratory pilot-scale testing to determine current limits in enzyme performance. Optimization of scrubber design allowed for improved process economics while maintaining desired capture efficiencies. A range of configurations, materials, and operating conditions were examined at the Alcoa Technical Center on a pilot-scale scrubber. This work indicated that a cross-current flow utilizing a specialized gas-liquid contactor offered the lowest system operating energy. Various industrial waste materials were evaluated as sources of alkalinity for the scrubber feed solution and as sources of calcium for precipitation of carbonate. Solids were mixed with a simulated sodium bicarbonate scrubber blowdown to comparatively examine reactivity. Supernatant solutions and post-test solids were analyzed to quantify and model the sequestration reactions. The best performing solids were found to sequester between 2.3 and 2.9 moles of CO2 per kg of dry solid in 1-4 hours of reaction time. These best performing solids were cement kiln dust, circulating dry scrubber ash, and spray dryer absorber ash.
A techno-economic analysis was performed to evaluate the commercial viability of the proposed carbon capture and sequestration process at full scale at an aluminum smelter and a refinery location. For both cases the in-duct scrubber technology was compared to traditional amine-based capture. Incorporation of the laboratory results showed that for the application at the aluminum smelter, the in-duct scrubber system is more economical than traditional methods. However, the reverse is true for the refinery case, where the bauxite residue is not effective enough as a sequestrant, combined with challenges related to contaminants in the bauxite residue accumulating in and fouling the scrubber absorbent. Sensitivity analyses showed that the critical variables by which process economics could be improved are enzyme concentration, efficiency, and half-life. At the end of the first part of the Phase 2 project, a gate review (DOE Decision Zero Gate Point) was conducted to decide on the next stages of the project. The original plan was to follow the pre-testing phase with a detailed design for the field testing. Unfavorable process economics, however, resulted in a decision to conclude the project before moving to field testing. It is noted that CO2 Solutions proposed an initial solution to reduce process costs through more advanced enzyme management; however, DOE program requirements, which restricted any technology development extending beyond a 2014 commercial deployment timeline, did not allow this solution to be undertaken.
Singular-Arc Time-Optimal Trajectory of Aircraft in Two-Dimensional Wind Field
NASA Technical Reports Server (NTRS)
Nguyen, Nhan
2006-01-01
This paper presents a study of a minimum time-to-climb trajectory analysis for aircraft flying in a two-dimensional, altitude-dependent wind field. The time-optimal control problem possesses a singular control structure when the lift coefficient is taken as a control variable. A singular-arc analysis is performed to obtain an optimal control solution on the singular arc. Using a time-scale separation with the flight path angle treated as a fast state, the dimensionality of the optimal control solution is reduced by eliminating the lift coefficient control. A further singular-arc analysis is used to decompose the original optimal control solution into the flight path angle solution and a trajectory solution as a function of the airspeed and altitude. The optimal control solutions for the initial and final climb segments are computed using a shooting method with known starting values on the singular arc. The numerical results of the shooting method show that the optimal flight path angles on the initial and final climb segments are constant. The analytical approach provides a rapid means for analyzing a time-optimal trajectory for aircraft performance.
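The shooting method used for the climb segments can be illustrated on a generic two-point boundary-value problem. This is a minimal sketch of the technique only, not the paper's aircraft model: guess the unknown initial slope, integrate forward, and adjust the guess until the terminal boundary condition is met.

```python
def rk4_final(f, y0, t0, t1, n=200):
    """Integrate y' = f(t, y) from t0 to t1 with classical RK4; return y(t1)."""
    h = (t1 - t0) / n
    t, y = t0, list(y0)
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

# Example BVP: y'' = -y with y(0) = 0, y(1) = 1; shoot on the unknown slope s = y'(0)
ode = lambda t, y: [y[1], -y[0]]

def miss(s):
    """Terminal boundary-condition error as a function of the initial slope."""
    return rk4_final(ode, [0.0, s], 0.0, 1.0)[0] - 1.0

# Bisection on the shooting parameter; the bracket satisfies miss(0) < 0 < miss(5)
lo, hi = 0.0, 5.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if miss(mid) < 0 else (lo, mid)
slope = (lo + hi) / 2  # analytic answer for this BVP is 1/sin(1)
```

The paper's version shoots from known starting values on the singular arc rather than from an arbitrary guess, but the structure (integrate, measure terminal error, adjust) is the same.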
Modeling chemical vapor deposition of silicon dioxide in microreactors at atmospheric pressure
NASA Astrophysics Data System (ADS)
Konakov, S. A.; Krzhizhanovskaya, V. V.
2015-01-01
We developed a multiphysics mathematical model for simulation of silicon dioxide Chemical Vapor Deposition (CVD) from tetraethyl orthosilicate (TEOS) and oxygen mixture in a microreactor at atmospheric pressure. Microfluidics is a promising technology with numerous applications in chemical synthesis due to its high heat and mass transfer efficiency and well-controlled flow parameters. Experimental studies of CVD microreactor technology are slow and expensive. Analytical solution of the governing equations is impossible due to the complexity of intertwined non-linear physical and chemical processes. Computer simulation is the most effective tool for design and optimization of microreactors. Our computational fluid dynamics model employs mass, momentum and energy balance equations for a laminar transient flow of a chemically reacting gas mixture at low Reynolds number. Simulation results show the influence of microreactor configuration and process parameters on SiO2 deposition rate and uniformity. We simulated three microreactors with the central channel diameter of 5, 10, 20 micrometers, varying gas flow rate in the range of 5-100 microliters per hour and temperature in the range of 300-800 °C. For each microchannel diameter we found an optimal set of process parameters providing the best quality of deposited material. The model will be used for optimization of the microreactor configuration and technological parameters to facilitate the experimental stage of this research.
Logistics Solution for Choosing Location of Production of Road Construction Enterprise
NASA Astrophysics Data System (ADS)
Gavrilina, I.; Bondar, A.
2017-11-01
The current state of highway construction indicates that not all resources of construction organizations are supported by modern approaches to solving logistics problems. This article addresses these problems, considering the organizational features of basic linear road works: their large extent and the differing locations of enterprises. Based on these data, it is proposed to model the logistics processes and to substantiate methods of organizing transport operations by linking the technology and organization of road construction material delivery, which allows one to optimize construction processes, to choose the most economically advantageous options, and to monitor the quality of work.
Optimal solution and optimality condition of the Hunter-Saxton equation
NASA Astrophysics Data System (ADS)
Shen, Chunyu
2018-02-01
This paper is devoted to the optimal distributed control problem governed by the Hunter-Saxton equation with constraints on the control. We first investigate the existence and uniqueness of a weak solution for the controlled system with appropriate initial value and boundary conditions. In contrast with our previous research, we prove that the solution mapping is locally Lipschitz continuous, which is one significant improvement. Second, based on the well-posedness result, we find a unique optimal control and optimal solution for the controlled system with the quadratic cost functional. Moreover, we establish the sufficient and necessary optimality condition of an optimal control by means of optimal control theory, not limited to the necessary condition, which is another major novelty of this paper. We also discuss the optimality conditions corresponding to two physically meaningful distributed observation cases.
The Potential of Cold Plasma for Safe and Sustainable Food Production.
Bourke, Paula; Ziuzina, Dana; Boehm, Daniela; Cullen, Patrick J; Keener, Kevin
2018-06-01
Cold plasma science and technology is increasingly investigated for translation to a plethora of issues in the agriculture and food sectors. The diversity of the mechanisms of action of cold plasma, and the flexibility as a standalone technology or one that can integrate with other technologies, provide a rich resource for driving innovative solutions. The emerging understanding of the longer-term role of cold plasma reactive species and follow-on effects across a range of systems will suggest how cold plasma may be optimally applied to biological systems in the agricultural and food sectors. Here we present the current status, emerging issues, regulatory context, and opportunities of cold plasma with respect to the broad stages of primary and secondary food production. Copyright © 2017 Elsevier Ltd. All rights reserved.
Todaro, Aldo; Peluso, Orazio; Catalano, Anna Eghle; Mauromicale, Giovanni; Spagna, Giovanni
2010-02-10
Several papers have contributed methods to control browning or to study thermal polyphenol oxidase (PPO) inactivation, but did not provide solutions to technological process problems or food process improvement. Artichokes [Cynara cardunculus L. var. scolymus L. (Fiori)] are susceptible to browning; this alteration could affect and reduce their suitability for use, fresh or processed. Within this study, the catecholase and cresolase activities of PPO from three different Sicilian artichoke cultivars were characterized with regard to substrate specificity and enzyme kinetics, optimum pH and temperature, temperature and pH stability, and inhibitor tests; all of the results were used for technological purposes, particularly to optimize minimally processed productions (ready-to-eat and cook-chilled artichokes).
Zhao, Xiang; Zhang, Mingkun; Wei, Dongshan; Wang, Yunxia; Yan, Shihan; Liu, Mengwan; Yang, Xiang; Yang, Ke; Cui, Hong-Liang; Fu, Weiling
2017-10-01
The aptamer and target molecule binding reaction has been widely applied for construction of aptasensors, most of which are labeled methods. In contrast, terahertz technology proves to be a label-free sensing tool for biomedical applications. We utilize terahertz absorption spectroscopy and molecular dynamics simulation to investigate the variation of binding-induced collective vibration of hydrogen bond network in a mixed solution of MUC1 peptide and anti-MUC1 aptamer. The results show that binding-induced alterations of hydrogen bond numbers could be sensitively reflected by the variation of terahertz absorption coefficients of the mixed solution in a customized fluidic chip. The minimal detectable concentration is determined as 1 pmol/μL, which is approximately equal to the optimal immobilized concentration of aptasensors.
Rossetti, V Alunno; Di Palma, L; Medici, F
2002-01-01
Results are presented of experiments performed to optimize the solidification/stabilization system for metallic elements in aqueous solution. This system involves mixing cement and a solution of metallic elements in a conventional mixer: the paste thus obtained is transferred drop by drop into a recipient filled with an aqueous solution of NaOH at 20% by weight, in which it solidifies immediately. The separate use of chloride solutions of Li+, Cr3+, Pb2+ and Zn2+ makes it possible to obtain granules displaying various levels of compressive strength. Three different inertization matrices were used in the experiments, the first consisting solely of Portland cement, the second of Portland cement and a superplasticizer additive, and the third of Portland cement partially replaced with silica-fume and superplasticizer. The results of the tests performed showed a very low level of leaching into the alkaline solidification solution for Cr3+, the quantity leached being under 2% as against higher levels for the other metallic elements. For all the considered elements, the best results were obtained by using silica-fume in the inertization matrix.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haman, R.L.; Kerry, T.G.; Jarc, C.A.
1996-12-31
A technology provided by Ultramax Corporation and EPRI, based on sequential process optimization (SPO), is being used as a cost-effective tool to gain improvements prior to decisions for capital-intensive solutions. This empirical method of optimization, called the ULTRAMAX® Method, can determine the best boiler capabilities and help delay, or even avoid, expensive retrofits or repowering. SPO can serve as a least-cost way to attain the right degree of compliance with current and future phases of CAAA. Tuning ensures a staged strategy to stay ahead of emissions regulations, but not so far ahead as to cause regret for taking actions that ultimately are not mandated or warranted. One large utility investigating SPO as a tool to lower NOx emissions and to optimize boiler performance is Detroit Edison. The company has applied SPO to tune two coal-fired units at its River Rouge Power Plant to evaluate the technology for possible system-wide usage. Following the successful demonstration in reducing NOx from these units, SPO is being considered for use in other Detroit Edison fossil-fired plants. Tuning first will be used as a least-cost option to drive NOx to its lowest level with operating adjustment. In addition, optimization shows the true capability of the units and the margins available when the Phase 2 rules become effective in 2000. This paper includes a case study of the second tuning process and discusses the opportunities the technology affords.
A Simple Method for Amplifying RNA Targets (SMART)
McCalla, Stephanie E.; Ong, Carmichael; Sarma, Aartik; Opal, Steven M.; Artenstein, Andrew W.; Tripathi, Anubhav
2012-01-01
We present a novel and simple method for amplifying RNA targets (named by its acronym, SMART), and for detection, using engineered amplification probes that overcome existing limitations of current RNA-based technologies. This system amplifies and detects optimal engineered ssDNA probes that hybridize to target RNA. The amplifiable probe-target RNA complex is captured on magnetic beads using a sequence-specific capture probe and is separated from unbound probe using a novel microfluidic technique. Hybridization sequences are not constrained as they are in conventional target-amplification reactions such as nucleic acid sequence amplification (NASBA). Our engineered ssDNA probe was amplified both off-chip and in a microchip reservoir at the end of the separation microchannel using isothermal NASBA. Optimal solution conditions for ssDNA amplification were investigated. Although KCl and MgCl2 are typically found in NASBA reactions, replacing 70 mmol/L of the 82 mmol/L total chloride ions with acetate resulted in optimal reaction conditions, particularly for low but clinically relevant probe concentrations (≤100 fmol/L). With the optimal probe design and solution conditions, we also successfully removed the initial heating step of NASBA, thus achieving a true isothermal reaction. The SMART assay using a synthetic model influenza DNA target sequence served as a fundamental demonstration of the efficacy of the capture and microfluidic separation system, thus bridging our system to a clinically relevant detection problem. PMID:22691910
Efficient receiver tuning using differential evolution strategies
NASA Astrophysics Data System (ADS)
Wheeler, Caleb H.; Toland, Trevor G.
2016-08-01
Differential evolution (DE) is a powerful and computationally inexpensive optimization strategy that can be used to search an entire parameter space or to converge quickly on a solution. The Kilopixel Array Pathfinder Project (KAPPa) is a heterodyne receiver system delivering 5 GHz of instantaneous bandwidth in the tuning range of 645-695 GHz. The fully automated KAPPa receiver test system finds optimal receiver tuning using performance feedback and DE. We present an adaptation of DE for use in rapid receiver characterization. The KAPPa DE algorithm is written in Python 2.7 and is fully integrated with the KAPPa instrument control, data processing, and visualization code. KAPPa develops the technologies needed to realize heterodyne focal plane arrays containing 1000 pixels. Finding optimal receiver tuning by investigating large parameter spaces is one of many challenges facing the characterization phase of KAPPa, and it is a difficult task to perform by hand. Characterizing or tuning in an automated fashion without need for human intervention is desirable for future large-scale arrays. While many optimization strategies exist, DE is ideal under time and performance constraints because it can be set to converge to a solution rapidly with minimal computational overhead. We discuss how DE is utilized in the KAPPa system, evaluate its performance, and look toward the future of 1000-pixel array receivers, considering how the KAPPa DE system might be applied.
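A minimal sketch of the classic DE/rand/1/bin scheme underlying such a tuner follows. The objective here is a toy stand-in for real receiver-noise feedback (the "optimal bias" values are invented), not the KAPPa code itself:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=0):
    """Minimize f over box bounds with the DE/rand/1/bin strategy."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: pick three distinct members different from i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # force at least one mutated component
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to the search box
                else:
                    v = pop[i][j]  # binomial crossover keeps the old component
                trial.append(v)
            # Greedy selection: keep the trial only if it is no worse
            cost = f(trial)
            if cost <= costs[i]:
                pop[i], costs[i] = trial, cost
    best = min(range(pop_size), key=lambda i: costs[i])
    return pop[best], costs[best]

# Toy "receiver noise" surface with a minimum at bias = 1.5, LO power = 2.0
noise = lambda x: (x[0] - 1.5) ** 2 + (x[1] - 2.0) ** 2
x_best, c_best = differential_evolution(noise, [(0.0, 5.0), (0.0, 5.0)])
```

In a real tuner, `noise` would be replaced by a measurement callback, which is exactly why DE's modest evaluation budget matters.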
Technology optimization techniques for multicomponent optical band-pass filter manufacturing
NASA Astrophysics Data System (ADS)
Baranov, Yuri P.; Gryaznov, Georgiy M.; Rodionov, Andrey Y.; Obrezkov, Andrey V.; Medvedev, Roman V.; Chivanov, Alexey N.
2016-04-01
Narrowband optical devices (such as IR-sensing devices, celestial navigation systems, solar-blind UV systems and many others) are one of the fastest-growing areas in optical manufacturing. However, signal strength in this type of application is quite low, and the performance of such devices depends on the attenuation level of wavelengths outside the operating range. Modern detectors (photodiodes, matrix detectors, photomultiplier tubes and others) usually do not have the required selectivity, or at worst have higher sensitivity to the background spectrum. Manufacturing a single-component band-pass filter with a high attenuation level is a resource-intensive task, and sometimes no solution to this problem exists with current technologies. Different types of filters show technology-driven variations of transmittance profile shape due to various production factors. At the same time there are many tasks with strict requirements for background-spectrum attenuation in narrowband optical devices; for example, in a solar-blind UV system, wavelengths above 290-300 nm must be attenuated by 180 dB. In this paper, techniques are proposed for assembling multi-component optical band-pass filters from multiple single elements, exploiting technology variations of transmittance profile shape for an optimal signal-to-noise ratio (SNR). Relationships between signal-to-noise ratio and different characteristics of transmittance profile shape are shown. The obtained practical results were in good agreement with our calculations.
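The arithmetic that makes stacking single elements attractive can be sketched as follows. Transmittances of cascaded elements multiply, so their attenuations in decibels add; the transmittance figures below are hypothetical, not measured filter data:

```python
import math

# Hypothetical single-element filters: (in-band transmittance, out-of-band transmittance)
elements = [(0.92, 1e-4), (0.90, 3e-4), (0.95, 1e-3)]

def to_db(t):
    """Power attenuation in dB for a transmittance t."""
    return -10 * math.log10(t)

def stack(chosen):
    """Cascading multiplies transmittances, so dB attenuations simply add."""
    in_band = sum(to_db(e[0]) for e in chosen)   # unwanted in-band insertion loss
    out_band = sum(to_db(e[1]) for e in chosen)  # desired out-of-band rejection
    return in_band, out_band

# How many copies of the first element reach the 180 dB rejection target?
n = math.ceil(180 / to_db(elements[0][1]))
loss_db, rejection_db = stack([elements[0]] * n)
```

This also shows the trade-off the paper optimizes: each added element buys rejection at the cost of in-band loss, so element selection matters for SNR.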
New Forming Technologies for Autobody at POSCO
NASA Astrophysics Data System (ADS)
Lee, Hong-Woo; Cha, Myung-Hwan; Choi, Byung-Keun; Kim, Gyo-Sung; Park, Sung-Ho
2011-08-01
As development of car bodies with light weight and enhanced safety became one of the hottest issues in the auto industry, advanced high-strength steels (AHSS) have been broadly applied to various automotive parts over the last few years. Corresponding to this trend, POSCO has developed various types of cold- and hot-rolled AHSS such as DP, TRIP, CP and FB, and continues to develop new types of steel in order to satisfy the requirements of OEMs more extensively. To provide optimal technical support to customers, POSCO has developed a more advanced EVI concept, which includes the concepts of CE/SE, VA/VE, VI and PP as well as the conventional EVI strategy. To realize this concept, POSCO not only supports customers with material data, process guidelines, and evaluation of formability, weldability, paintability and performance, but also provides parts or sub-assemblies which demand highly advanced technologies. In particular, to accelerate adoption of AHSS in the autobody, POSCO has tried to come up with optimal solutions for AHSS forming. Application of conventional forming technologies has been restricted more and more by the relatively low formability of AHSS with high tensile strength. To overcome this limitation, POSCO has recently developed new forming technologies such as hydroforming, hot press forming, roll forming and form forming. In this paper, tailored-strength HPF, a hydroformed torsion beam axle and multi-directional roll forming are introduced as examples of new forming technologies.
LDRD Final Report: Global Optimization for Engineering Science Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
HART,WILLIAM E.
1999-12-01
For a wide variety of scientific and engineering problems the desired solution corresponds to an optimal set of objective function parameters, where the objective function measures a solution's quality. The main goal of the LDRD ''Global Optimization for Engineering Science Problems'' was the development of new robust and efficient optimization algorithms that can be used to find globally optimal solutions to complex optimization problems. This SAND report summarizes the technical accomplishments of this LDRD, discusses lessons learned and describes open research issues.
NASA Astrophysics Data System (ADS)
Reger, Darren; Madanat, Samer; Horvath, Arpad
2015-11-01
Transportation agencies are being urged to reduce their greenhouse gas (GHG) emissions. One possible solution within their scope is to alter their pavement management system to include environmental impacts. Managing pavement assets is important because poor road conditions lead to increased fuel consumption of vehicles. Rehabilitation activities improve pavement condition, but require materials and construction equipment, which produce GHG emissions as well. The agency’s role is to decide when to rehabilitate the road segments in the network. In previous work, we sought to minimize total societal costs (user and agency costs combined) subject to an emissions constraint for a road network, and demonstrated that there exists a range of potentially optimal solutions (a Pareto frontier) with tradeoffs between costs and GHG emissions. However, we did not account for the case where the available financial budget to the agency is binding. This letter considers an agency whose main goal is to reduce its carbon footprint while operating under a constrained financial budget. A Lagrangian dual solution methodology is applied, which selects the optimal timing and optimal action from a set of alternatives for each segment. This formulation quantifies GHG emission savings per additional dollar of agency budget spent, which can be used in a cap-and-trade system or to make budget decisions. We discuss the importance of communication between agencies and their legislature that sets the financial budgets to implement sustainable policies. We show that for a case study of Californian roads, it is optimal to apply frequent, thin overlays as opposed to the less frequent, thick overlays recommended in the literature if the objective is to minimize GHG emissions. A promising new technology, warm-mix asphalt, will have a negligible effect on reducing GHG emissions for road resurfacing under constrained budgets.
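The Lagrangian dual selection the letter describes can be sketched for a toy network. The segment alternatives, costs, and GHG savings below are invented for illustration; the converged multiplier plays the role of the "GHG emission savings per additional dollar" shadow price the letter quantifies:

```python
# Hypothetical per-segment alternatives: (agency cost, GHG savings), including "do nothing"
segments = [
    [(0, 0.0), (2, 5.0), (5, 9.0)],
    [(0, 0.0), (1, 2.5), (4, 6.0)],
    [(0, 0.0), (3, 4.0), (6, 9.0)],
]
budget = 9

def choose(lam):
    """For multiplier lam, each segment independently maximizes savings - lam * cost."""
    picks = [max(acts, key=lambda a: a[1] - lam * a[0]) for acts in segments]
    return picks, sum(c for c, _ in picks)

# Bisect the dual multiplier until total spending fits the budget:
# a higher multiplier penalizes cost more, so spending decreases with lam
lo, hi = 0.0, 10.0
for _ in range(50):
    mid = (lo + hi) / 2
    _, spend = choose(mid)
    if spend > budget:
        lo = mid
    else:
        hi = mid
picks, spend = choose(hi)
savings = sum(s for _, s in picks)  # dual-feasible plan within the budget
```

The decomposition is what makes the approach scale: each segment's choice is solved separately for a given multiplier, and only the scalar multiplier is searched.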
Producing ammonium sulfate from flue gas desulfurization by-products
Chou, I.-Ming; Bruinius, J.A.; Benig, V.; Chou, S.-F.J.; Carty, R.H.
2005-01-01
Emission control technologies using flue gas desulfurization (FGD) have been widely adopted by utilities burning high-sulfur fuels. However, these technologies require additional equipment, greater operating expenses, and increased costs for landfill disposal of the solid by-products produced. The financial burdens would be reduced if successful high-volume commercial applications of the FGD solid by-products were developed. In this study, the technical feasibility of producing ammonium sulfate from FGD residues by allowing them to react with ammonium carbonate in an aqueous solution was preliminarily assessed. Reaction temperatures of 60, 70, and 80 °C and residence times of 4 and 6 hours were tested to determine the optimal conversion condition and for final product evaluations. High yields (up to 83%) of ammonium sulfate with up to 99% purity were achieved under relatively mild conditions. The optimal conversion condition was observed at 60 °C and a 4-hour residence time. The results of this study indicate the technical feasibility of producing ammonium sulfate fertilizer from an FGD by-product. Copyright © Taylor & Francis Inc.
Mobile technologies in the management of disasters: the results of a telemedicine solution.
Cabrera, M. F.; Arredondo, M. T.; Rodriguez, A.; Quiroga, J.
2001-01-01
Nowadays, a great number of applications are used to compile and transmit casualty and disaster information, but there are many problems associated with the technology, such as communications reliability and the size and weight of the devices medical staff must carry. Telecommunication infrastructures support information movement among geographically dispersed locations. Recently, a large family of small devices, called Personal Digital Assistants, has appeared on the consumer market; because of their physical and technical features, they are very useful in the emergency field. As for communications reliability, many technologies have been developed in recent years, but it is necessary to find a solution that can be used in any situation, independently of the emergency circumstances. Facing this reality, the Spanish government funded REMAF, an ATYCA (Initiative of Support for the Technology, Security and Quality in the Industry) project. REMAF joined research groups (UPM), phone operators (Fundación Airtel Móvil) and end users (SAMUR) to build a disaster data management system conceived to use modern telemedicine systems to optimize management in these situations, taking advantage of the above-mentioned mobile communication tools and networks. PMID:11825159
The Long and Winding Road to Innovation.
Beyar, Rafael
2015-07-30
Medicine is developing through biomedical technology and innovations. The goal of any innovation in medicine is to improve patient care. Exponential growth in technology has led to the unprecedented growth of medical technology over the last 50 years. Clinician-scientists need to understand the complexity of the innovation process, from concept to product release, when working to bring new clinical solutions to the bedside. Hence, an overview of the innovation process is provided herein. The process involves an invention designed to solve an unmet need, followed by prototype design and optimization, animal studies, pilot and pivotal studies, and regulatory approval. The post-marketing strategy relative to funding, along with analysis of cost benefit, is a critical component for the adoption of new technologies. Examples of the road to innovation are provided, based on the experience with development of the transcatheter aortic valve. Finally, ideas are presented to contribute to the further development of this worldwide trend in innovation.
Monitoring of Overhead Transmission Lines: A Review from the Perspective of Contactless Technologies
NASA Astrophysics Data System (ADS)
Khawaja, Arsalan Habib; Huang, Qi; Khan, Zeashan Hameed
2017-12-01
This paper presents a comprehensive review of non-contact technologies for overhead power transmission lines. With ever-increasing emphasis on reducing accidents and speeding up diagnosis in automatically controlled grids, real-time remote sensing and actuation is the new horizon for smart grid implementation. An overview of the technology, with emphasis on the practical implementation of advanced non-contact technologies, is given while considering optimization of high-voltage transmission line parameters. In the case of a fault, the voltage and current exceed operating limits, so real-time reporting for control and diagnosis is a critical requirement. This paper aims to form a strong foundation for the control and diagnosis of future power distribution systems, so that a practitioner or researcher can choose a workable solution for smart grid implementation based on non-contact sensing.
NASA Astrophysics Data System (ADS)
McGarity, A. E.
2009-12-01
Recent progress has been made in developing decision-support models for optimal deployment of best management practices (BMPs) in an urban watershed to achieve water quality goals. One example is the high-level screening model StormWISE, developed by the author (McGarity, 2006), which uses linear and nonlinear programming to narrow the search for optimal solutions to certain land use categories and drainage zones. Another example is the model SUSTAIN, developed by USEPA and Tetra Tech (Lai et al., 2006) and building on the work of Yu et al. (2002), which uses a detailed, computationally intensive simulation model driven by a genetic solver to select optimal BMP sites. However, a model that deals only with BMP site selection may fail to consider solutions that avoid future nonpoint pollutant loadings by preserving undeveloped land. This paper presents results of a recently completed research project in which water resource engineers partnered with experienced professionals at a land conservation trust to develop a multiobjective model for watershed management. The result is a revised version of StormWISE that can identify optimal, cost-effective combinations of easements and similar land preservation tools for undeveloped sites, along with low impact development (LID) and BMP technologies for developed sites. The goal is to achieve the watershed-wide limits on runoff volume and pollutant loads necessary to meet water quality goals, as well as the ecological benefits associated with habitat preservation and enhancement. A nonlinear programming formulation is presented for the extended StormWISE model that achieves desired levels of environmental benefits at minimum cost. Tradeoffs between different environmental benefits are generated by multiple runs of the model while varying the level of each environmental benefit obtained.
The model is solved using piecewise linearization of the environmental benefit functions, where each linear segment represents a different option for reducing stormwater runoff volumes and pollutant loadings. The solution space comprises optimal levels of expenditure on categories of BMPs by land use category and optimal land preservation expenditures by drainage zone. To demonstrate the usefulness of the model, results from its application to the Little Crum Creek watershed in suburban Philadelphia are presented. The model has been used to assist a watershed association and four municipalities in developing an action plan for restoration of water quality on this impaired stream. References: Lai, F., J. Zhen, J. Riverson, and L. Shoemaker (2006). "SUSTAIN - An Evaluation and Cost-Optimization Tool for Placement of BMPs," ASCE World Environmental and Water Resources Congress 2006. McGarity, A.E. (2006). "A Cost Minimization Model to Prioritize Urban Catchments for Stormwater BMP Implementation Projects," American Water Resources Association National Meeting, Baltimore, MD, November 2006. Yu, S., J.X. Zhen, and S.Y. Zhai (2002). "Development of Stormwater Best Management Practice Placement Strategy for the Virginia Department of Transportation," Final Contract Report, VTRC 04-CR9, Virginia Transportation Research Council.
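The piecewise-linearization scheme described above can be sketched as a small linear program. The segment efficiencies, caps, and removal target below are invented for illustration (they are not StormWISE data), but the structure matches the description: each linear segment of a benefit curve becomes a bounded expenditure variable, and minimizing total cost subject to a load-reduction target fills the most cost-effective segments first.

```python
from scipy.optimize import linprog

# Illustrative data (not from StormWISE): each BMP category or land
# preservation option is split into linear segments with diminishing
# pollutant removal per dollar spent.
# (removal in kg per $1000, maximum spend in $1000) per segment:
segments = [
    (2.0, 50), (1.2, 50),   # BMP category A: two segments
    (1.5, 80), (0.6, 80),   # BMP category B: two segments
    (1.0, 100),             # land preservation in one drainage zone
]
target_removal = 200.0      # kg of pollutant load to remove

c = [1.0] * len(segments)                  # minimize total expenditure
A_ub = [[-eff for eff, _ in segments]]     # -sum(eff_i * x_i) <= -target
b_ub = [-target_removal]
bounds = [(0.0, cap) for _, cap in segments]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
spend = res.x                              # optimal spend per segment
```

Because all cost coefficients are equal, the solver exhausts the highest removal-per-dollar segment ($50k at 2.0 kg/$1000) before buying removal from the next-best one, which is exactly the behavior the piecewise formulation is designed to produce.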
NASA Astrophysics Data System (ADS)
Ruchkinova, O.; Shchuckin, I.
2017-06-01
It is shown that phytofilters are an environmentally friendly solution to the problem of treating surface runoff from urbanized territories and meet present-day requirements for drainage treatment systems. Their main limitation is use under cold-climate conditions. A technology and installation were developed that provide year-round treatment and storage of surface runoff. The optimal composition of the filtering load (peat, zeolite, and sand, in percent by volume) that provides the required hydraulic characteristics was established experimentally. The sorption and ion-exchange capacity of the layered composite filtering load was determined under dynamic conditions, and the dependence of effluent concentrations of oil products and heavy metals on temperature during filtration through the load was estimated. The treatment effectiveness of the phytofiltration installation was determined, and the influence of higher plants on the regeneration and capacity of the filtering load was established; swamp iris, mace reed, and reed grass are recommended. A design methodology for the phytofilter was developed, and the economic effect of the phytofiltration technology compared with traditional block-modular installations was calculated.
Design and development of a microfluidic platform for use with colorimetric gold nanoprobe assays
NASA Astrophysics Data System (ADS)
Bernacka-Wojcik, Iwona
Given the importance and wide application of DNA analysis, there is a need to make genetic analysis more available and more affordable. The aim of this PhD thesis is therefore to optimize a colorimetric DNA biosensor based on gold nanoprobes, developed in CEMOP, by reducing its price and the required solution volume without compromising the device's sensitivity and reliability, moving towards point-of-care use. Firstly, the price of the biosensor was decreased by replacing the silicon photodetector with a low-cost, solution-processed TiO2 photodetector. To further reduce the photodetector price, a novel fabrication method was developed: a cost-effective inkjet printing technology that increased the TiO2 surface area. Secondly, the DNA biosensor was optimized by means of microfluidics, which offers the advantages of miniaturization, much lower sample/reagent consumption, and enhanced system performance and functionality through the integration of different components. In the developed microfluidic platform, the optical path length was extended by detecting along the channel, and the light was transmitted by optical fibres, enabling it to be guided very close to the analysed solution. A microfluidic chip with high aspect ratio (~13) and smooth, nearly vertical sidewalls was fabricated in PDMS using an SU-8 mould for patterning. The platform, coupled to the gold nanoprobe assay, enabled detection of Mycobacterium tuberculosis using 3 μl of DNA solution, i.e. 20 times less than in the previous state of the art. Subsequently, the bio-microfluidic platform was optimized in terms of cost, electrical signal processing, and sensitivity to colour variation, yielding a 160% improvement in colorimetric AuNP analysis. Planar microlenses were incorporated to converge light into the sample and then into the output fibre core, increasing the signal-to-losses ratio 6-fold.
The optimized platform enabled detection of a single nucleotide polymorphism related to obesity risk (FTO) using a target DNA concentration below the limit of detection of the conventionally used microplate reader (i.e. 15 ng/μl) with a 10 times lower solution volume (3 μl). The combination of the unique optical properties of gold nanoprobes with the microfluidic platform resulted in a sensitive and accurate sensor for single nucleotide polymorphism detection that operates with small solution volumes and without the need for substrate functionalization or sophisticated instrumentation. Simultaneously, to enable on-chip reagent mixing, a PDMS micromixer was developed and optimized for the highest efficiency, low pressure drop, and short mixing length. The optimized device shows 80% mixing efficiency at Re = 0.1 in a 2.5 mm long mixer with a pressure drop of 6 Pa, satisfying the requirements for application in the microfluidic platform for DNA analysis.
How Near is a Near-Optimal Solution: Confidence Limits for the Global Optimum.
1980-05-01
Approximate or near-optimal solutions are often the only practical solutions available. This paper identifies and compares some relatively new statistical procedures that use independent near-optimal solutions to obtain an upper confidence limit on the global optimum G.
Human-Machine Collaborative Optimization via Apprenticeship Scheduling
2016-09-09
This paper presents Collaborative Optimization via Apprenticeship Scheduling (COVAS), which performs machine learning using human expert demonstration, in conjunction with optimization, to automatically and efficiently produce optimal solutions to challenging real-world scheduling problems. COVAS first learns a policy from human scheduling demonstration via apprenticeship learning, then uses this initial solution to provide a tight bound on the value of the optimal solution, thereby substantially...
NASA Astrophysics Data System (ADS)
Abdeh-Kolahchi, A.; Satish, M.; Datta, B.
2004-05-01
A state-of-the-art groundwater monitoring network design method is introduced. The method combines groundwater flow and transport results with Genetic Algorithm (GA) optimization to identify optimal monitoring well locations. Optimization theory uses different techniques to find a set of parameter values that minimize or maximize objective functions. The proposed optimal monitoring network design is based on the objective of maximizing the probability of tracking a transient contamination plume by determining sequential monitoring locations. The MODFLOW and MT3DMS models, included as separate modules within the Groundwater Modeling System (GMS), are used to develop three-dimensional groundwater flow and contaminant transport simulations. The simulation results are input to the optimization model, which uses the GA to identify the optimal monitoring network design from several candidate monitoring locations, with binary variables representing the potential monitoring locations. As the number of decision variables and constraints increases, the nonlinearity of the objective function also increases, making it difficult to obtain optimal solutions. The genetic algorithm is an evolutionary global optimization technique capable of finding the optimal solution for many complex problems. In this study, a GA approach capable of finding the global optimal solution to a groundwater monitoring network design problem involving 18.4 × 10^18 feasible solutions is discussed. However, to ensure the efficiency of the solution process and the global optimality of the solution obtained using the GA, appropriate GA parameter values must be specified.
A sensitivity analysis of genetic algorithm parameters such as the random number seed, crossover probability, mutation probability, and elitism is presented for the solution of the monitoring network design problem.
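A minimal sketch of the binary-encoded GA described above is given below. The per-well detection probabilities are invented stand-ins for the MODFLOW/MT3DMS-derived plume-tracking results, and the independence assumption in the fitness function is a simplification; the GA machinery itself (binary chromosomes, tournament selection, one-point crossover, bit mutation, elitism) is standard.

```python
import random

random.seed(0)

# Illustrative detection probabilities for 12 candidate monitoring wells
# (in the actual study these would come from flow/transport simulations).
p_detect = [0.05, 0.30, 0.12, 0.45, 0.08, 0.22,
            0.40, 0.15, 0.33, 0.10, 0.28, 0.18]
MAX_WELLS = 4            # budget: at most 4 wells may be installed

def fitness(bits):
    """Probability the plume is detected by at least one selected well."""
    if sum(bits) > MAX_WELLS:          # infeasible: too many wells
        return -1.0
    miss = 1.0
    for b, p in zip(bits, p_detect):
        if b:
            miss *= (1.0 - p)          # plume evades every selected well
    return 1.0 - miss

def evolve(pop_size=60, gens=100, pc=0.9, pm=0.05):
    n = len(p_detect)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(gens):
        new = [best[:]]                                  # elitism
        while len(new) < pop_size:
            # tournament selection of two parents
            a, b = (max(random.sample(pop, 3), key=fitness) for _ in range(2))
            child = a[:]
            if random.random() < pc:                     # one-point crossover
                cut = random.randrange(1, n)
                child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < pm else g for g in child]
            new.append(child)
        pop = new
        best = max(pop + [best], key=fitness)
    return best

best = evolve()
```

With the values above, the GA should converge towards the subset of the four most informative wells, since the fitness rewards high combined detection probability under the well budget.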
Optimized dispatch in a first-principles concentrating solar power production model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, Michael J.; Newman, Alexandra M.; Hamilton, William T.
Concentrating solar power towers, which include a steam-Rankine cycle with molten salt thermal energy storage, are an emerging technology whose maximum effectiveness relies on an optimal operational and dispatch policy. Given parameters such as start-up and shut-down penalties, expected electricity price profiles, solar availability, and system interoperability requirements, this paper seeks a profit-maximizing solution that determines start-up and shut-down times for the power cycle and solar receiver, and the times at which to dispatch stored and instantaneous quantities of energy over a 48-h horizon at hourly fidelity. The mixed-integer linear program (MIP) is subject to constraints including: (i) minimum and maximum rates of start-up and shut-down, (ii) energy balance, including the energetic state of the system as a whole and of its components, (iii) logical rules governing the operational modes of the power cycle and solar receiver, and (iv) operational consistency between time periods. The novelty of this work lies in the successful integration of a dispatch optimization model into a detailed techno-economic analysis tool, specifically the National Renewable Energy Laboratory's System Advisor Model (SAM). The MIP produces an optimized operating strategy, historically determined via a heuristic. Using several market electricity pricing profiles, we present comparative results for a system with and without dispatch optimization, indicating that dispatch optimization can improve plant profitability by 5-20% and thereby alter the economics of concentrating solar power technology. While we examine a molten salt power tower system, this analysis is equally applicable to the more mature concentrating solar parabolic trough system with thermal energy storage.
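A toy version of such a dispatch MIP can be written with SciPy's `milp`. The six-hour horizon, prices, and cycle limits below are invented, and the formulation is heavily simplified relative to SAM's (no receiver model, no ramp limits), but it shows the core ingredients the abstract lists: binary on/off and start-up variables, minimum/maximum generation when the cycle is on, a stored-energy budget, and a start-up penalty traded against price-weighted revenue.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Illustrative 6-hour problem (not SAM's formulation or data):
prices = np.array([20.0, 25.0, 90.0, 110.0, 95.0, 30.0])  # $/MWh
T = len(prices)
P_MAX, P_MIN = 50.0, 15.0      # cycle power limits when on (MW)
E_STORED = 150.0               # energy available in storage (MWh)
C_START = 500.0                # cycle start-up penalty ($)

# Variable layout: [x_0..x_5 (dispatch, MWh), u_0..u_5 (on/off), s_0..s_5 (start)]
n = 3 * T
c = np.concatenate([-prices, np.zeros(T), C_START * np.ones(T)])  # minimize -profit

rows, lo, hi = [], [], []
for t in range(T):
    r = np.zeros(n); r[t] = 1.0; r[T + t] = -P_MAX      # x_t <= P_MAX * u_t
    rows.append(r); lo.append(-np.inf); hi.append(0.0)
    r = np.zeros(n); r[t] = -1.0; r[T + t] = P_MIN      # x_t >= P_MIN * u_t
    rows.append(r); lo.append(-np.inf); hi.append(0.0)
    r = np.zeros(n); r[T + t] = 1.0; r[2 * T + t] = -1.0
    if t > 0:
        r[T + t - 1] = -1.0                             # s_t >= u_t - u_{t-1}
    rows.append(r); lo.append(-np.inf); hi.append(0.0)
r = np.zeros(n); r[:T] = 1.0                            # storage energy budget
rows.append(r); lo.append(-np.inf); hi.append(E_STORED)

integrality = np.concatenate([np.zeros(T), np.ones(2 * T)])  # u, s binary
bounds = Bounds(np.zeros(n),
                np.concatenate([P_MAX * np.ones(T), np.ones(2 * T)]))
res = milp(c, constraints=LinearConstraint(np.array(rows), lo, hi),
           integrality=integrality, bounds=bounds)
dispatch = res.x[:T].round(2)
```

With these numbers the optimizer concentrates the 150 MWh of storage into the three high-price hours and pays a single start-up penalty, rather than generating whenever energy is available.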
Abdollahi, Yadollah; Sairi, Nor Asrina; Said, Suhana Binti Mohd; Abouzari-lotf, Ebrahim; Zakaria, Azmi; Sabri, Mohd Faizul Bin Mohd; Islam, Aminul; Alias, Yatimah
2015-11-05
It is believed that 80% of industrial carbon dioxide can be controlled by separation and storage technologies that use blended ionic liquid absorbers. Among the blended absorbers, the mixture of water, N-methyldiethanolamine (MDEA), and guanidinium trifluoromethane sulfonate (gua) has shown superior stripping qualities. However, the blended solution exhibits high viscosity, which affects the cost of the separation process. In this work, the fabrication of the blend was scheduled, i.e., the process was arranged, controlled, and optimized. The blend's components and the operating temperature were modeled and optimized as input variables to minimize viscosity as the final output, using a back-propagation artificial neural network (ANN). The modeling was carried out with four mathematical algorithms, each with its own experimental design, to obtain the optimum topology according to root mean squared error (RMSE), R-squared (R2), and absolute average deviation (AAD). The final model (QP-4-8-1), with the minimum RMSE and AAD and the highest R2, was selected to guide the fabrication of the blended solution. The model was applied to obtain the optimum initial levels of the input variables: temperature 303-323 K, x[gua] 0-0.033, x[MDEA] 0.3-0.4, and x[H2O] 0.7-1.0. Moreover, the model gave the relative importance ordering of the variables as x[gua] > temperature > x[MDEA] > x[H2O]; thus none of the variables was negligible in the fabrication. Furthermore, the model predicted the optimum variable settings for minimizing viscosity, which were validated by further experiments; the validated results confirmed the model's suitability for scheduling. Accordingly, the ANN succeeded in modeling the initial components of blended solutions used as CO2 capture absorbers in separation technologies, in a way that can be scaled up to industry.
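The paper's QP-4-8-1 network (4 inputs, 8 hidden neurons, 1 output) can be mimicked with an off-the-shelf MLP. The training data below are synthetic stand-ins generated from an invented viscosity trend over the stated variable ranges, since the experimental measurements are not reproduced here; the point is the workflow of scaling inputs, fitting a 4-8-1 network, and scoring it.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)

# Synthetic stand-in data over the paper's stated variable ranges:
# viscosity falls with temperature and rises with the ionic-liquid fraction
# (the trend coefficients here are invented, not experimental).
n = 200
T = rng.uniform(303, 323, n)          # temperature (K)
x_gua = rng.uniform(0.0, 0.033, n)    # guanidinium salt mole fraction
x_mdea = rng.uniform(0.3, 0.4, n)     # MDEA mole fraction
x_h2o = 1.0 - x_gua - x_mdea          # water makes up the remainder
X = np.column_stack([T, x_gua, x_mdea, x_h2o])
visc = 50 - 0.1 * (T - 303) + 400 * x_gua + 20 * x_mdea - 10 * x_h2o
visc += rng.normal(0, 0.2, n)         # measurement noise

Xs = MinMaxScaler().fit_transform(X)  # scale inputs to [0, 1]

# 4-8-1 topology, mirroring the paper's QP-4-8-1 model
net = MLPRegressor(hidden_layer_sizes=(8,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(Xs, visc)
r2 = net.score(Xs, visc)              # R-squared on the training set
```

Once fitted, such a surrogate can be queried cheaply over the input ranges to locate the viscosity-minimizing blend, which is the role the ANN plays in the paper.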
Application of the gravity search algorithm to multi-reservoir operation optimization
NASA Astrophysics Data System (ADS)
Bozorg-Haddad, Omid; Janbaz, Mahdieh; Loáiciga, Hugo A.
2016-12-01
Complexities in river discharge, variable rainfall regimes, and drought severity merit the use of advanced optimization tools in multi-reservoir operation. The gravity search algorithm (GSA) is an evolutionary optimization algorithm based on the law of gravity and mass interactions. This paper explores the GSA's efficacy in solving benchmark functions and single-reservoir and four-reservoir operation optimization problems. The GSA's solutions are compared with those of the well-known genetic algorithm (GA) in three optimization problems. The results show that the GSA's results are closer to the optimal solutions than the GA's in minimizing the benchmark functions. The average values of the objective function equal 1.218 and 1.746 with the GSA and GA, respectively, in solving the single-reservoir hydropower operation problem, whose global solution equals 1.213. The GSA converged to 99.97% of the global solution in its average-performing history, while the GA converged to 97% of the global solution of the four-reservoir problem. Requiring fewer parameters for algorithmic implementation and reaching the optimal solution in a smaller number of function evaluations are additional advantages of the GSA over the GA. The results of the three optimization problems demonstrate the GSA's superior performance in optimizing general mathematical problems and the operation of reservoir systems.
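For reference, a minimal GSA update loop on a simple benchmark function is sketched below, following the standard Rashedi et al. (2009) formulation (decaying gravitational constant, fitness-derived masses, pairwise attractive accelerations) rather than the paper's reservoir-specific implementation; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def gsa(f, dim=2, n_agents=20, iters=200, g0=100.0, alpha=20.0,
        lo=-5.0, hi=5.0):
    """Minimal gravitational search algorithm sketch (minimization)."""
    X = rng.uniform(lo, hi, (n_agents, dim))   # agent positions
    V = np.zeros((n_agents, dim))              # agent velocities
    best_x, best_f = None, np.inf
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        k = fit.argmin()
        if fit[k] < best_f:
            best_f, best_x = fit[k], X[k].copy()
        G = g0 * np.exp(-alpha * t / iters)    # decaying gravity constant
        worst, best = fit.max(), fit.min()
        m = (worst - fit) / (worst - best + 1e-12)   # heavier = fitter
        M = m / (m.sum() + 1e-12)
        A = np.zeros_like(X)
        for i in range(n_agents):
            for j in range(n_agents):
                if i == j:
                    continue
                d = np.linalg.norm(X[i] - X[j]) + 1e-12
                A[i] += rng.random() * G * M[j] * (X[j] - X[i]) / d
        V = rng.random((n_agents, dim)) * V + A      # velocity update
        X = np.clip(X + V, lo, hi)                   # position update
    return best_x, best_f

# Minimize the 2-D sphere function as a toy benchmark
x_star, f_star = gsa(lambda x: float((x ** 2).sum()))
```

The masses are normalized fitness values, so better agents exert stronger pulls, and the exponentially decaying G shifts the search from exploration to exploitation, which is the mechanism the paper credits for the GSA's fast convergence.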
Integration and Optimization of Alternative Sources of Energy in a Remote Region
NASA Astrophysics Data System (ADS)
Berberi, Pellumb; Inodnorjani, Spiro; Aleti, Riza
2010-01-01
In a remote coastal region, the energy supply from the national grid is insufficient for sustainable development. Integration and optimization of local alternative renewable energy sources is one solution to the problem. In this paper we study the energy potential of local renewable sources (water, solar, wind, and biomass). A bottom-up energy system optimization model is proposed to support planning policies for promoting the use of renewable energy sources. Software based on multi-factor, multi-constraint analysis for optimizing energy flow is proposed, which provides detailed information on the exploitation of each energy source, power and heat generation, GHG emissions, and end-use sectors. Economic analysis shows that, with existing technologies, both stand-alone and regional facilities may be feasible. Improving specific legislation will foster investment from central or local governments as well as from individuals, private companies, and small families. The study was carried out in the framework of the FP6 project "Integrated Renewable Energy System."
NASA Astrophysics Data System (ADS)
Umbarkar, A. J.; Balande, U. T.; Seth, P. D.
2017-06-01
The fields of nature-inspired computing and optimization have evolved to solve difficult optimization problems in diverse areas of engineering, science, and technology. The Firefly Algorithm (FA) mimics the firefly attraction process to solve optimization problems. In the FA, fireflies are ranked using a sorting algorithm; the original FA uses bubble sort. In this paper, quicksort replaces bubble sort to decrease the time complexity of the FA. The dataset used consists of unconstrained benchmark functions from CEC 2005 [22]. FA with bubble sort and FA with quicksort are compared with respect to best, worst, and mean values, standard deviation, number of comparisons, and execution time. The experimental results show that FA with quicksort requires fewer comparisons but more execution time. Increasing the number of fireflies helps convergence to the optimal solution, and the algorithm performed better at lower dimensions than at higher ones.
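The comparison-count difference between the two ranking strategies is easy to reproduce on a toy brightness array. The counters below instrument textbook bubble sort and quicksort, not the authors' exact FA implementation; ranking the swarm by objective value each iteration is the step where the sort choice matters.

```python
import random

random.seed(3)

# Brightness (objective values) of 200 fireflies to be ranked each iteration
brightness = [random.random() for _ in range(200)]

def bubble_sort(a):
    a = a[:]
    comparisons = 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

def quick_sort(a):
    comparisons = 0
    def qs(a):
        nonlocal comparisons
        if len(a) <= 1:
            return a
        pivot, rest = a[0], a[1:]
        comparisons += len(rest)        # one comparison per partition element
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return qs(left) + [pivot] + qs(right)
    return qs(a), comparisons

b_sorted, b_cmp = bubble_sort(brightness)
q_sorted, q_cmp = quick_sort(brightness)
```

Bubble sort always performs n(n-1)/2 comparisons here (19,900 for n = 200), while quicksort averages O(n log n), consistent with the paper's finding that the quicksort variant needs far fewer comparisons.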
Sousa, Vitor; Dias-Ferreira, Celia; Vaz, João M; Meireles, Inês
2018-05-01
Extensive research has been carried out on waste collection costs, mainly to differentiate the costs of distinct waste streams and to spatially optimize waste collection services (e.g., routes and the number and location of waste facilities). However, waste collection managers also face the challenge of optimizing assets over time, for instance deciding when to replace equipment, how to maintain it, or which technological solution to adopt. These issues require more detailed knowledge of the waste collection service's cost breakdown structure. The present research adapts the methodology for buildings' life-cycle cost (LCC) analysis, detailed in ISO 15686-5:2008, to waste collection assets. The proposed methodology is then applied to the waste collection assets owned and operated by a real municipality in Portugal (Cascais Ambiente - EMAC). The goal is to highlight the potential of the LCC tool in providing a baseline for time optimization of the waste collection service and assets, namely assisting decisions regarding equipment operation and replacement.
Software Analytical Instrument for Assessment of the Process of Casting Slabs
NASA Astrophysics Data System (ADS)
Franěk, Zdeněk; Kavička, František; Štětina, Josef; Masarik, Miloš
2010-06-01
The paper describes the original design and function of software for assessing the slab casting process. The program system LITIOS was developed and implemented at EVRAZ Vitkovice Steel Ostrava on the equipment for continuous casting of steel (hereafter ECC). The system works on a data warehouse of technological casting parameters and slab quality parameters. It enables an ECC technologist to analyze the course of a casting melt and, using statistical methods, to determine the influence of individual technological parameters on the quality of the final slabs. The system also enables long-term monitoring and optimization of production.
A multi-objective programming model for assessment the GHG emissions in MSW management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mavrotas, George, E-mail: mavrotas@chemeng.ntua.gr; Skoulaxinou, Sotiria; Gakis, Nikos
2013-09-15
Highlights: • A multi-objective, multi-period optimization model. • A solution approach for generating the Pareto front with mathematical programming. • A very detailed description of the model (decision variables, parameters, equations). • Use of the IPCC 2006 guidelines for landfill emissions (first-order decay model) in the mathematical programming formulation. - Abstract: In this study a multi-objective mathematical programming model is developed that takes GHG emissions into account in Municipal Solid Waste (MSW) management. Mathematical programming models are often used for the structural, design, and operational optimization of various systems (energy, supply chains, processes, etc.). Over the last twenty years they have been used increasingly often in MSW management to provide optimal solutions, with cost usually being the driver of the optimization. In our work we consider GHG emissions as an additional criterion, aiming at a multi-objective approach. The Pareto front (cost vs. GHG emissions) of the system is generated using an appropriate multi-objective method. This information is essential to the decision maker, who can explore the trade-offs in the Pareto curve and select the most preferred among the Pareto optimal solutions. A detailed multi-objective, multi-period mathematical programming model is developed to describe the waste management problem. Apart from the bi-objective approach, the major innovations of the model are (1) detailed modeling considering 34 materials and 42 technologies, (2) detailed calculation of the energy content of the various streams based on detailed material balances, and (3) incorporation of the IPCC guidelines for the CH4 generated in landfills (first-order decay model). The equations of the model are described in full detail.
Finally, the whole approach is illustrated with a case study referring to the application of the model in a Greek region.
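Pareto-front generation of the kind described above is commonly done with the epsilon-constraint method: minimize cost while sweeping an upper bound on GHG emissions. The three-option LP below uses invented cost, emission, and capacity figures (the paper's model has 34 materials and 42 technologies), but it shows the mechanics of tracing the cost-vs-emissions trade-off.

```python
from scipy.optimize import linprog

# Illustrative per-kt cost and GHG factors for three treatment options
# (landfill, waste-to-energy, recycling); the numbers are invented for
# this sketch, not taken from the paper's model.
cost = [30.0, 80.0, 60.0]      # cost per kt treated
ghg = [600.0, 250.0, 100.0]    # GHG per kt treated
cap = [100.0, 40.0, 50.0]      # capacity of each option (kt)
W = 100.0                      # total waste to be treated (kt)

def min_cost(eps):
    """One epsilon-constraint step: minimize cost s.t. total GHG <= eps."""
    return linprog(cost,
                   A_ub=[ghg], b_ub=[eps],
                   A_eq=[[1.0, 1.0, 1.0]], b_eq=[W],
                   bounds=[(0.0, u) for u in cap], method="highs")

# Sweeping the GHG bound traces the cost-vs-emissions Pareto front
pareto = []
for eps in (60000.0, 45000.0, 35000.0, 25000.0):
    r = min_cost(eps)
    if r.status == 0:
        pareto.append((r.fun, sum(g * x for g, x in zip(ghg, r.x))))
```

Each tightening of the emissions bound forces waste away from cheap, high-emission landfilling, so cost rises monotonically along the front; the decision maker then picks a preferred point on that curve.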
Analyst-centered models for systems design, analysis, and development
NASA Technical Reports Server (NTRS)
Bukley, A. P.; Pritchard, Richard H.; Burke, Steven M.; Kiss, P. A.
1988-01-01
Much has been written about the possible use of Expert System (ES) technology for strategic defense system applications, particularly for battle management algorithms and mission planning. It is proposed that ES (or, more accurately, Knowledge-Based System (KBS)) technology can be used in situations for which no human expert exists, namely to create design and analysis environments that allow an analyst to rapidly pose many different possible problem resolutions in game-like fashion and then work through the solution space in search of the optimal solution. Portions of such an environment exist for expensive AI hardware/software combinations such as the Xerox LOOPS and IntelliCorp KEE systems. Efforts to build an analyst-centered model (ACM) for a simple missile system tradeoff study using an ES programming environment, ExperOPS5, are discussed. By analyst-centered, it is meant that the focus of learning is for the benefit of the analyst, not the model. The model's environment allows the analyst to pose a variety of "what if" questions without resorting to programming changes. Although not an ES per se, the ACM allows for a design and analysis environment much superior to that of current technologies.
GLOBAL SOLUTIONS TO FOLDED CONCAVE PENALIZED NONCONVEX LEARNING
Liu, Hongcheng; Yao, Tao; Li, Runze
2015-01-01
This paper is concerned with solving nonconvex learning problems with a folded concave penalty. Although their global solutions entail desirable statistical properties, optimization techniques that guarantee global optimality in a general setting have been lacking. In this paper, we show that a class of nonconvex learning problems is equivalent to general quadratic programs. This equivalence facilitates the development of mixed integer linear programming reformulations, which admit finite algorithms that find a provably global optimal solution. We refer to this reformulation-based technique as mixed integer programming-based global optimization (MIPGO). To our knowledge, this is the first global optimization scheme with a theoretical guarantee for folded concave penalized nonconvex learning with the SCAD penalty (Fan and Li, 2001) and the MCP penalty (Zhang, 2010). Numerical results indicate that MIPGO significantly outperforms the state-of-the-art solution scheme, local linear approximation, and other alternative solution techniques in the literature in terms of solution quality. PMID:27141126
Realization and optimization of AES algorithm on the TMS320DM6446 based on DaVinci technology
NASA Astrophysics Data System (ADS)
Jia, Wen-bin; Xiao, Fu-hai
2013-03-01
The application of the AES algorithm in digital cinema systems prevents video data from being illegally stolen or maliciously tampered with, addressing its security problems. At the same time, to meet the requirements of real-time, on-scene, transparent encryption of high-speed audio and video data streams in the information security field, this paper, through in-depth analysis of the AES algorithm's principles, proposes specific methods for realizing the AES algorithm in a digital video system, together with optimization solutions, on the TMS320DM6446 hardware platform within the DaVinci software framework. Test results show that digital movies encrypted with AES-128 cannot be played normally, which ensures their security. Comparing the performance of the AES-128 algorithm before and after optimization verifies the correctness and validity of the improved algorithm.
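As a platform-independent illustration of the encryption step (the paper's implementation is optimized code on the DM6446 DSP, not Python), a stream-oriented AES-128 sketch using the `cryptography` package might look like the following. The choice of CTR mode, which suits transparent, seekable encryption of media streams, is our assumption, not the paper's stated mode.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# AES-128-CTR sketch for encrypting a video stream buffer
key = os.urandom(16)            # 128-bit key
nonce = os.urandom(16)          # per-stream initial counter block

def encrypt_chunk(chunk: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return enc.update(chunk) + enc.finalize()

def decrypt_chunk(chunk: bytes) -> bytes:
    dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
    return dec.update(chunk) + dec.finalize()

frame = b"\x00\x01\x02" * 1000          # stand-in for a video frame
ciphertext = encrypt_chunk(frame)
recovered = decrypt_chunk(ciphertext)
```

A player without the key sees only the ciphertext, which is why the paper's encrypted movies fail to play; with the key, decryption restores the original frame bit-for-bit.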
NASA Astrophysics Data System (ADS)
Bogoljubova, M. N.; Afonasov, A. I.; Kozlov, B. N.; Shavdurov, D. E.
2018-05-01
A predictive simulation technique for optimal cutting modes in the turning of workpieces made of nickel-based heat-resistant alloys, different from well-known approaches, is proposed. The impact of various factors on the cutting process is analyzed with the purpose of determining the optimal machining parameters according to specified effectiveness criteria. A mathematical optimization model, algorithms and computer programs, and visual graphical forms reflecting the dependence of the effectiveness criteria (productivity, net cost, and tool life) on the parameters of the technological process have been worked out. A nonlinear model for multidimensional functions, solution of equations with multiple unknowns, a coordinate descent method, and heuristic algorithms are used to solve the problem of optimizing the cutting mode parameters. The research shows that in machining workpieces made from the heat-resistant alloy AISI N07263, the highest productivity is achieved with the following parameters: cutting speed v = 22.1 m/min, feed rate s = 0.26 mm/rev, tool life T = 18 min, net cost 2.45 per hour.
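The coordinate descent step mentioned above can be sketched against an illustrative turning-cost model. The Taylor tool-life constants and cost rates below are invented, not the paper's fitted values for AISI N07263; the sketch only shows how alternating one-dimensional minimizations over cutting speed and feed converge on a cost-optimal operating point.

```python
import numpy as np

# Illustrative turning-cost model (not the paper's fitted model): Taylor
# tool life T = C / (v**a * s**b); unit cost combines machining time and
# prorated tool-change cost, both of which depend on speed v and feed s.
C_TAYLOR, a, b = 2.0e4, 2.0, 0.8
COST_MIN_RATE = 1.5          # machining cost rate ($/min)
COST_TOOL = 400.0            # cost per tool change ($)
VOL = 100.0                  # material to remove; time ~ VOL / (v * s)

def unit_cost(v, s):
    t_mach = VOL / (v * s)                     # machining time (min)
    tool_life = C_TAYLOR / (v ** a * s ** b)   # Taylor tool life (min)
    return COST_MIN_RATE * t_mach + COST_TOOL * t_mach / tool_life

def coordinate_descent(v=10.0, s=0.1, sweeps=20):
    v_grid = np.linspace(5.0, 60.0, 300)       # cutting speed range (m/min)
    s_grid = np.linspace(0.05, 0.5, 300)       # feed range (mm/rev)
    for _ in range(sweeps):
        # minimize over v with s fixed, then over s with v fixed
        v = v_grid[np.argmin([unit_cost(vv, s) for vv in v_grid])]
        s = s_grid[np.argmin([unit_cost(v, ss) for ss in s_grid])]
    return v, s, unit_cost(v, s)

v_opt, s_opt, c_opt = coordinate_descent()
```

Each sweep holds one variable fixed and minimizes over the other, which is exactly the coordinate descent idea; with this cost surface, feed runs to its upper bound while cutting speed settles at an interior optimum balancing machining time against tool wear.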
Fast Optimization for Aircraft Descent and Approach Trajectory
NASA Technical Reports Server (NTRS)
Luchinsky, Dmitry G.; Schuet, Stefan; Brenton, J.; Timucin, Dogan; Smith, David; Kaneshige, John
2017-01-01
We address the problem of on-line scheduling of aircraft descent and approach trajectories. We formulate a general multiphase optimal control problem for optimization of the descent trajectory and review available solution methods. We develop a fast algorithm for solving this problem using two key components: (i) fast inference of the dynamical and control variables of the descending trajectory from low-dimensional flight profile data, and (ii) efficient local search for the resulting reduced-dimensionality nonlinear optimization problem. We compare the performance of the proposed algorithm with a numerical solution obtained using the General Pseudospectral Optimal Control Software toolbox. We present results of the solution of the scheduling problem for aircraft descent using the novel fast algorithm and discuss its future applications.
COMDECOM: predicting the lifetime of screening compounds in DMSO solution.
Zitha-Bovens, Emrin; Maas, Peter; Wife, Dick; Tijhuis, Johan; Hu, Qian-Nan; Kleinöder, Thomas; Gasteiger, Johann
2009-06-01
The technological evolution of the 1990s in both combinatorial chemistry and high-throughput screening created the demand for rapid access to the compound deck to support the screening process. The common strategy within the pharmaceutical industry is to store the screening library in DMSO solution. Several studies have shown that a percentage of these compounds decompose in solution, varying from a few percent of the total to a substantial part of the library. In the COMDECOM (COMpound DECOMposition) project, the compound stability of screening compounds in DMSO solution is monitored in an accelerated thermal, hydrolytic, and oxidative decomposition program. A large database with stability data is collected, and from this database, a predictive model is being developed. The aim of this program is to build an algorithm that can flag compounds that are likely to decompose, information that is considered to be of utmost importance (e.g., in the compound acquisition process and when evaluating screening results of library compounds, as well as in the determination of optimal storage conditions).
2018-03-30
information : Francisco L. Loaiza-Lemos, Project Leader floaiza@ida.org, 703-845-687 Margaret E. Myers, Director, Information Technology and Systems...analysis is aligned with the goals and objectives of the Department of Defense (DoD) as expressed in its Global Force Management Data Initiative...the previous phases of the analysis and how can they help inform the decision process for determining the optimal mix needed to implement the planned
Liu, Fubo; Li, Guangjun; Shen, Jiuling; Li, Ligin; Bai, Sen
2017-02-01
While radiation treatment is delivered to patients with tumors in the thorax and abdomen, further improvement of radiation accuracy is restricted by intra-fractional tumor motion due to respiration. Real-time tumor-tracking radiotherapy is an optimal solution to intra-fractional tumor motion. This review summarizes the progress of real-time dynamic multi-leaf collimator (DMLC) tracking, including the DMLC tracking method, the time lag of the DMLC tracking system, and dosimetric verification.
Pediatric palliative care and eHealth opportunities for patient-centered care.
Madhavan, Subha; Sanders, Amy E; Chou, Wen-Ying Sylvia; Shuster, Alex; Boone, Keith W; Dente, Mark A; Shad, Aziza T; Hesse, Bradford W
2011-05-01
Pediatric palliative care currently faces many challenges, including unnecessary pain from insufficiently personalized treatment, doctor-patient communication breakdowns, and a paucity of usable patient-centric information. Recent advances in informatics for consumer health through eHealth initiatives have the potential to bridge known communication gaps, but overall these technologies remain under-utilized in practice. This paper seeks to identify effective uses of existing and developing health information technology (HIT) to improve communications and care within the clinical setting. A needs analysis was conducted by surveying seven pediatric oncology patients and their extended support networks at the Lombardi Pediatric Clinic at Georgetown University Medical Center in May and June of 2010. Needs were mapped onto an existing inventory of emerging HIT technologies to assess what existing informatics solutions could effectively bridge these gaps. Through the patient interviews, a number of communication challenges and needs in pediatric palliative cancer care were identified from the interconnected group perspective surrounding each patient. These gaps mapped well, in most cases, to existing or emerging cyberinfrastructure. However, the adoption and adaptation of appropriate technologies could be improved in areas including patient-provider communication, behavioral support, pain assessment, and education, all through integration within existing workflows. This study provides a blueprint for more optimal use of HIT technologies, effectively utilizing HIT standards-based technology solutions to improve communication. This research aims to further stimulate the development and adoption of interoperable, standardized technologies and the delivery of context-sensitive information to substantially improve the quality of care patients receive within pediatric palliative care clinics and other settings. Copyright © 2011 American Journal of Preventive Medicine. All rights reserved.
NASA Astrophysics Data System (ADS)
Berezhetskyy, A.
2008-09-01
The research focuses on the development of an enzymatic microconductometric device for the detection of heavy metal ions in water solutions. The manuscript includes a general introduction; the first chapter contains a bibliographic review; the second chapter describes the fundamentals of conductometric transducers; the third chapter examines the possibility of creating and optimizing a conductometric biosensor based on bovine alkaline phosphatase for heavy metal ion detection; the fourth chapter is devoted to the creation and optimization of a conductometric biosensor based on alkaline-phosphatase-active microalgae and sol-gel technology; and the last chapter describes the application of the proposed algal biosensor for measuring the heavy metal ion toxicity of wastewater. The general conclusions state the progress achieved in the field of environmental monitoring.
Automated distribution system management for multichannel space power systems
NASA Technical Reports Server (NTRS)
Fleck, G. W.; Decker, D. K.; Graves, J.
1983-01-01
A NASA sponsored study of space power distribution system technology is in progress to develop an autonomously managed power system (AMPS) for large space power platforms. The multichannel, multikilowatt, utility-type power subsystem proposed presents new survivability requirements and increased subsystem complexity. The computer controls under development for the power management system must optimize the power subsystem performance and minimize the life cycle cost of the platform. A distribution system management philosophy has been formulated which incorporates these constraints. Its implementation using a TI9900 microprocessor and FORTH as the programming language is presented. The approach offers a novel solution to the perplexing problem of determining the optimal combination of loads which should be connected to each power channel for a versatile electrical distribution concept.
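The closing sentence describes a load-to-channel assignment problem. One classical heuristic for balancing loads across channels is greedy longest-first assignment to the least-loaded channel, sketched below. This is a generic illustration, not the AMPS algorithm: the real controller would also weigh survivability and channel constraints, and the load names and wattages here are invented.

```python
import heapq

def assign_loads(loads, n_channels):
    """Greedy longest-processing-time heuristic: assign each load (watts)
    to the currently least-loaded channel, largest loads first.
    Illustrative sketch of load-to-channel balancing, not the AMPS code."""
    heap = [(0.0, ch) for ch in range(n_channels)]  # (total watts, channel id)
    heapq.heapify(heap)
    assignment = {}
    for name, watts in sorted(loads.items(), key=lambda kv: -kv[1]):
        total, ch = heapq.heappop(heap)   # least-loaded channel so far
        assignment[name] = ch
        heapq.heappush(heap, (total + watts, ch))
    return assignment
```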
Optimal design of near-Earth asteroid sample-return trajectories in the Sun-Earth-Moon system
NASA Astrophysics Data System (ADS)
He, Shengmao; Zhu, Zhengfan; Peng, Chao; Ma, Jian; Zhu, Xiaolong; Gao, Yang
2016-08-01
In the 6th edition of the Chinese Space Trajectory Design Competition held in 2014, a near-Earth asteroid sample-return trajectory design problem was released, in which the motion of the spacecraft is modeled in multi-body dynamics, considering the gravitational forces of the Sun, Earth, and Moon. It is proposed that an electric-propulsion spacecraft initially parking in a circular 200-km-altitude low Earth orbit is expected to rendezvous with an asteroid and carry as much sample as possible back to the Earth in a 10-year time frame. The team from the Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences has reported a solution with an asteroid sample mass of 328 tons, which is ranked first in the competition. In this article, we will present our design and optimization methods, primarily including overall analysis, target selection, escape from and capture by the Earth-Moon system, and optimization of impulsive and low-thrust trajectories that are modeled in multi-body dynamics. The orbital resonance concept and lunar gravity assists are considered key techniques employed for trajectory design. The reported solution, preliminarily revealing the feasibility of returning a hundreds-of-tons asteroid or asteroid sample, envisions future space missions relating to near-Earth asteroid exploration.
Optimal impulsive manoeuvres and aerodynamic braking
NASA Technical Reports Server (NTRS)
Jezewski, D. J.
1985-01-01
A method developed for obtaining solutions to the aerodynamic braking problem, using impulses in the exoatmospheric phases, is discussed. The solution combines primer vector theory and the results of a suboptimal atmospheric guidance program. For a specified initial and final orbit, the solution determines: (1) the minimum impulsive cost using a maximum of four impulses, (2) the optimal atmospheric entry and exit state vectors subject to equality and inequality constraints, and (3) the optimal coast times. Numerical solutions which illustrate the characteristics of the solution are presented.
A Collaborative Neurodynamic Approach to Multiple-Objective Distributed Optimization.
Yang, Shaofu; Liu, Qingshan; Wang, Jun
2018-04-01
This paper is concerned with multiple-objective distributed optimization. Based on objective weighting and decision space decomposition, a collaborative neurodynamic approach to multiobjective distributed optimization is presented. In the approach, a system of collaborative neural networks is developed to search for Pareto optimal solutions, where each neural network is associated with one objective function and given constraints. Sufficient conditions are derived for ascertaining the convergence to a Pareto optimal solution of the collaborative neurodynamic system. In addition, it is proved that each connected subsystem can generate a Pareto optimal solution when the communication topology is disconnected. Then, a switching-topology-based method is proposed to compute multiple Pareto optimal solutions for discretized approximation of Pareto front. Finally, simulation results are discussed to substantiate the performance of the collaborative neurodynamic approach. A portfolio selection application is also given.
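The objective-weighting step of the approach can be illustrated without neural networks: sweep the weight of a two-objective scalarization, solve each scalarized problem, and keep the nondominated results. In the sketch below, exhaustive search over a candidate grid stands in for the paper's neurodynamic solvers; the objectives and candidates in the test are invented.

```python
def weighted_sum_pareto(f1, f2, candidates, weights):
    """Objective weighting: for each weight w, pick the candidate that
    minimizes w*f1 + (1-w)*f2, then filter to nondominated points.
    Grid search stands in for the collaborative neural networks."""
    picks = []
    for w in weights:
        best = min(candidates, key=lambda x: w * f1(x) + (1 - w) * f2(x))
        picks.append(best)
    # keep only Pareto-optimal (nondominated) picks
    pareto = []
    for p in picks:
        dominated = any(
            f1(q) <= f1(p) and f2(q) <= f2(p) and
            (f1(q) < f1(p) or f2(q) < f2(p))
            for q in picks
        )
        if not dominated:
            pareto.append(p)
    # deduplicate while preserving order
    seen, out = set(), []
    for p in pareto:
        if p not in seen:
            seen.add(p)
            out.append(p)
    return out
```

Sweeping the weights, as the switching-topology method does with its communication graph, produces a discretized approximation of the Pareto front.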
Extremal Optimization: Methods Derived from Co-Evolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boettcher, S.; Percus, A.G.
1999-07-13
We describe a general-purpose method for finding high-quality solutions to hard optimization problems, inspired by self-organized critical models of co-evolution such as the Bak-Sneppen model. The method, called Extremal Optimization, successively eliminates extremely undesirable components of sub-optimal solutions, rather than "breeding" better components. In contrast to Genetic Algorithms, which operate on an entire "gene pool" of possible solutions, Extremal Optimization improves on a single candidate solution by treating each of its components as species co-evolving according to Darwinian principles. Unlike Simulated Annealing, its non-equilibrium approach effects an algorithm requiring few parameters to tune. With only one adjustable parameter, its performance proves competitive with, and often superior to, more elaborate stochastic optimization procedures. We demonstrate it here on two classic hard optimization problems: graph partitioning and the traveling salesman problem.
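A minimal τ-EO loop in the spirit described can be written in a few lines. The toy problem below (minimizing same-color conflicts in a 2-coloring) and the value of τ are illustrative choices, not the paper's graph-partitioning benchmark: each node's fitness is minus its conflict count, and at every step a node chosen by power-law rank over the worst fitnesses has its state changed unconditionally.

```python
import random

def extremal_optimization(n, edges, steps=2000, tau=1.4, seed=0):
    """tau-EO sketch: rank components worst-first by local fitness, pick one
    with probability ~ rank**(-tau), and mutate it unconditionally (no
    acceptance test, a single tunable parameter tau). Toy illustration."""
    rng = random.Random(seed)
    nbrs = {i: [] for i in range(n)}
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    color = [rng.randint(0, 1) for _ in range(n)]

    def conflicts(i):                       # local (un)fitness of node i
        return sum(color[i] == color[j] for j in nbrs[i])

    def total():                            # global cost
        return sum(color[a] == color[b] for a, b in edges)

    best, best_color = total(), color[:]
    weights = [(k + 1) ** (-tau) for k in range(n)]   # power-law over ranks
    for _ in range(steps):
        ranked = sorted(range(n), key=conflicts, reverse=True)  # worst first
        i = rng.choices(ranked, weights=weights)[0]
        color[i] ^= 1                       # unconditional mutation
        t = total()
        if t < best:
            best, best_color = t, color[:]
    return best, best_color
```

Because mutations are never rejected, EO keeps exploring even from good configurations; the best configuration seen is recorded separately.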
A Novel Joint Problem of Routing, Scheduling, and Variable-Width Channel Allocation in WMNs
Liu, Wan-Yu; Chou, Chun-Hung
2014-01-01
This paper investigates a novel joint problem of routing, scheduling, and channel allocation for single-radio multichannel wireless mesh networks in which multiple channel widths can be adjusted dynamically through a new software technology, so that more concurrent transmissions and suppressed overlapping-channel interference can be achieved. Although previous works have studied this joint problem, their linear programming models did not incorporate some delicate constraints. As a result, this paper first constructs a linear programming model with more practical concerns and then proposes a simulated annealing approach with a novel encoding mechanism, in which the configurations of multiple time slots are devised to characterize the dynamic transmission process. Experimental results show that our approach can find the same or similar solutions as the optimal solutions for smaller-scale problems and can efficiently find good-quality solutions for a variety of larger-scale problems. PMID:24982990
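The simulated-annealing component can be sketched on a stripped-down version of the channel-allocation subproblem. The single-channel-per-link encoding below is far simpler than the paper's time-slot configurations and is meant only to show the accept/reject mechanics; the cooling schedule and all problem data are illustrative assumptions.

```python
import math
import random

def anneal_channels(links, conflicts, n_channels, seed=0,
                    t0=2.0, cooling=0.995, steps=4000):
    """Simulated-annealing sketch for channel allocation: a state assigns
    one of n_channels to each link; the cost counts conflicting link pairs
    sharing a channel. Worse moves are accepted with probability
    exp(-delta/T), and T decays geometrically."""
    rng = random.Random(seed)
    chan = {l: rng.randrange(n_channels) for l in links}

    def cost(c):
        return sum(c[a] == c[b] for a, b in conflicts)

    cur = cost(chan)
    best, best_chan = cur, dict(chan)
    t = t0
    for _ in range(steps):
        l = rng.choice(links)                 # perturb one link's channel
        old = chan[l]
        chan[l] = rng.randrange(n_channels)
        new = cost(chan)
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new                         # accept
            if cur < best:
                best, best_chan = cur, dict(chan)
        else:
            chan[l] = old                     # reject, restore
        t *= cooling
    return best, best_chan
```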
Integration of Oracle and Hadoop: Hybrid Databases Affordable at Scale
NASA Astrophysics Data System (ADS)
Canali, L.; Baranowski, Z.; Kothuri, P.
2017-10-01
This work reports on the activities aimed at integrating Oracle and Hadoop technologies for the use cases of CERN database services, and in particular on the development of solutions for offloading data and queries from Oracle databases into Hadoop-based systems. The goal and interest of this investigation is to increase the scalability and optimize the cost/performance footprint for some of our largest Oracle databases. These concepts have been applied, among others, to build offline copies of the CERN accelerator controls and logging databases. The tested solution allows reports to be run on the controls data offloaded into Hadoop without affecting the critical production database, providing both performance benefits and cost reduction for the underlying infrastructure. Other use cases discussed include building hybrid database solutions with Oracle and Hadoop, offering the combined advantages of a mature relational database system and a scalable analytics engine.
Smart City Through a Flexible Approach to Smart Energy
NASA Astrophysics Data System (ADS)
Mutule, A.; Teremranova, J.; Antoskovs, N.
2018-02-01
The paper provides an overview of the development trends of the smart city. Over the past decades, the new urban model called the smart city has been gaining momentum; it is an aggregate of the latest technologies, intelligent administration, and conscious citizens that allows the city to develop actively and to solve the problems it faces effectively and efficiently. Profound changes are also taking place in the energy sector. Researchers and other specialists offer a wide variety of innovative solutions and approaches to the concept of the intelligent city. The paper reviews and analyses the existing methodological solutions in the field of the power industry, and provides recommendations on how to introduce a common platform, built on the disparate sources of information on energy resources existing in the city, as an optimal solution for developing the city's intelligence, flexibility, and sustainability based on its starting conditions.
Optimal control theory for non-scalar-valued performance criteria. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Gerring, H. P.
1971-01-01
The theory of optimal control for non-scalar-valued performance criteria is discussed. In the space where the performance criterion attains its value, the relations "better than," "worse than," "not better than," and "not worse than" are defined by a partial order relation. The notion of optimality splits into superiority and noninferiority because, in general, "worse than" is not the complement of "better than." A superior solution is better than every other solution. A noninferior solution is not worse than any other solution. Noninferior solutions have been investigated particularly for vector-valued performance criteria. Superior solutions for non-scalar-valued performance criteria attaining their values in abstract partially ordered spaces are emphasized. The main result is the infimum principle, which constitutes necessary conditions for a control to be a superior solution to an optimal control problem.
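For vector-valued criteria under the componentwise partial order, these relations reduce to a dominance check, and "noninferior" coincides with Pareto optimality. A small sketch, assuming a minimization convention and invented sample vectors:

```python
def dominates(a, b):
    """Componentwise partial order on performance vectors (smaller is
    better): a is "better than" b iff a <= b in every component and
    strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def noninferior(solutions):
    """A solution is noninferior iff no other solution is better than it.
    (A superior solution would have to be better than *every* other one;
    with incomparable vectors, none may exist.)"""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions)]
```

Note that two vectors can be mutually incomparable, which is exactly why "worse than" is not the complement of "better than" in a partial order.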
Reducing Weight for Transportation Applications: Technology Challenges and Opportunities
NASA Astrophysics Data System (ADS)
Taub, Alan I.
Today's land, sea and air transportation industries — as a business necessity — are focused on technology solutions that will make vehicles more sustainable in terms of energy, the environment, safety and affordability. Reducing vehicle weight is a key enabler for meeting these challenges as well as increasing payload and improving performance. The potential weight reductions from substituting lightweight metals (advanced high-strength steels, aluminum, magnesium and titanium alloys) are well established. For magnesium castings, weight savings of 60% have been reported [1]. The value of weight reduction depends on the transportation sector and ranges from about 5/kg saved for automobiles to over 500/kg saved for aircraft [2]. The challenge is to optimize the material properties and develop robust, high volume, manufacturing technologies and the associated supply chain to fabricate components and subsystems at the appropriate cost for each application.
Planning Multitechnology Access Networks with Performance Constraints
NASA Astrophysics Data System (ADS)
Chamberland, Steven
Considering the number of access network technologies and the investment needed for the “last mile” of a solution, planning tools are crucial in today’s highly competitive markets for service providers to optimize network costs and accelerate the planning process. In this paper, we tackle the problem of planning access networks composed of four technologies/architectures: digital subscriber line (xDSL) technologies deployed directly from the central office (CO), fiber-to-the-node (FTTN), fiber-to-the-micro-node (FTTn), and fiber-to-the-premises (FTTP). A mathematical programming model is proposed for this planning problem and solved using a commercial implementation of the branch-and-bound algorithm. Next, a detailed access network planning example is presented, followed by a systematic set of experiments designed to assess the performance of the proposed approach.
Enhancing Polyhedral Relaxations for Global Optimization
ERIC Educational Resources Information Center
Bao, Xiaowei
2009-01-01
During the last decade, global optimization has attracted a lot of attention due to the increased practical need for obtaining global solutions and the success in solving many global optimization problems that were previously considered intractable. In general, the central question of global optimization is to find an optimal solution to a given…
Applying Semantic Web Services and Wireless Sensor Networks for System Integration
NASA Astrophysics Data System (ADS)
Berkenbrock, Gian Ricardo; Hirata, Celso Massaki; de Oliveira Júnior, Frederico Guilherme Álvares; de Oliveira, José Maria Parente
In environments like factories, buildings, and homes, automation services tend to change often during their lifetime. Changes concern business rules, process optimization, cost reduction, and so on. It is important to provide a smooth and straightforward way to deal with these changes so that they can be handled in a faster and lower-cost manner. Some prominent solutions use the flexibility of Wireless Sensor Networks and the meaningful descriptions of Semantic Web Services to provide service integration. In this work, we give an overview of current solutions for machinery integration that combine both technologies, as well as a discussion of some perspectives and open issues in applying Wireless Sensor Networks and Semantic Web Services to automation services integration.
NASA Astrophysics Data System (ADS)
Nardello, Marco; Centro, Sandro
2017-09-01
TwinFocus® is a CPV solution that adopts quasi-parabolic, off-axis mirrors to obtain a concentration of 760× on 3J solar cells (Azur Space technology) with 44% efficiency. This optical solution allows for a cheap, lightweight, and space-efficient system. In particular, the addition of secondary optics to the mirror grants an efficient use of space, with very low thickness and a compact modular design. Materials are recyclable and allow weight to be reduced to a minimum. The product is realized through the cooperation of leading-edge industries active in automotive lighting and plastic materials molding. The produced prototypes provide up to 27.6% efficiency according to tests operated in the field under non-optimal spectral conditions.
Study on gold concentrate leaching by iodine-iodide
NASA Astrophysics Data System (ADS)
Wang, Hai-xia; Sun, Chun-bao; Li, Shao-ying; Fu, Ping-feng; Song, Yu-guo; Li, Liang; Xie, Wen-qing
2013-04-01
Gold extraction by iodine-iodide solution is an effective and environment-friendly method. In this study, the iodine-iodide method for gold leaching is proved feasible through thermodynamic calculation. At the same time, experiments on flotation gold concentrates were carried out and encouraging results were obtained. By optimizing the technological conditions, a gold leaching rate of more than 85% was attained. The optimum process conditions at 25°C are as follows: the initial iodine concentration is 1.0%, the iodine-to-iodide mole ratio is 1:8, the solution pH value is 7, the liquid-to-solid mass ratio is 4:1, the leaching time is 4 h, the stirring intensity is 200 r/min, and the hydrogen peroxide consumption is 1%.
NASA Astrophysics Data System (ADS)
Schastlivtsev, A. I.; Borzenko, V. I.
2017-11-01
The comparative feasibility study of energy storage technologies showed good applicability of energy storage systems based on hydrogen-oxygen steam generators (HOSGs) with large-scale hydrogen production. The scheme solutions developed for the use of HOSGs at thermal power plants (TPPs) and nuclear power plants (NPPs), together with the feasibility analysis carried out, have shown that their use makes it possible to increase the maneuverability of steam turbines and to provide backup power supply in the event of failure of the main steam-generating equipment. The main design solutions for the integration of hydrogen-oxygen steam generators into the main power equipment of TPPs and NPPs, as well as their optimal operation modes, are considered.
Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem
NASA Astrophysics Data System (ADS)
Rahmalia, Dinita
2017-08-01
The Linear Transportation Problem (LTP) is a case of constrained optimization in which we want to minimize cost subject to the balance between total supply and total demand. Exact methods such as the northwest corner, Vogel, Russell, and minimal-cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), for solving the linear transportation problem at any size of decision variable. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the optimal solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve the optimal solution produced by PSO.
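Of the constructive methods listed, the northwest corner rule is the simplest: fill the allocation table from the top-left cell, exhausting each row's supply and each column's demand in turn. It yields a feasible (generally not optimal) allocation that a metaheuristic like PSOGA could then improve; the supply/demand figures in the test are invented.

```python
def northwest_corner(supply, demand):
    """Northwest-corner rule for a balanced transportation problem:
    allocate min(remaining supply, remaining demand) at the current cell,
    then move down when the row is exhausted or right when the column is."""
    s, d = list(supply), list(demand)           # work on copies
    alloc = [[0] * len(d) for _ in s]
    i = j = 0
    while i < len(s) and j < len(d):
        q = min(s[i], d[j])
        alloc[i][j] = q
        s[i] -= q
        d[j] -= q
        if s[i] == 0:
            i += 1          # row exhausted: move down
        else:
            j += 1          # column exhausted: move right
    return alloc
```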
Coupled Low-thrust Trajectory and System Optimization via Multi-Objective Hybrid Optimal Control
NASA Technical Reports Server (NTRS)
Vavrina, Matthew A.; Englander, Jacob Aldo; Ghosh, Alexander R.
2015-01-01
The optimization of low-thrust trajectories is tightly coupled with the spacecraft hardware. Trading trajectory characteristics against system parameters to identify viable solutions and determine mission sensitivities across discrete hardware configurations is labor intensive. Local independent optimization runs can sample the design space, but a global exploration that resolves the relationships between the system variables across multiple objectives enables a full mapping of the optimal solution space. A multi-objective, hybrid optimal control algorithm is formulated using a multi-objective genetic algorithm as an outer-loop systems optimizer around a global trajectory optimizer. The coupled problem is solved simultaneously to generate Pareto-optimal solutions in a single execution. The automated approach is demonstrated on two boulder return missions.
Neural dynamic programming and its application to control systems
NASA Astrophysics Data System (ADS)
Seong, Chang-Yun
There are few general practical feedback control methods for nonlinear MIMO (multi-input-multi-output) systems, although such methods exist for their linear counterparts. Neural Dynamic Programming (NDP) is proposed as a practical design method of optimal feedback controllers for nonlinear MIMO systems. NDP is an offspring of both neural networks and optimal control theory. In optimal control theory, the optimal solution to any nonlinear MIMO control problem may be obtained from the Hamilton-Jacobi-Bellman equation (HJB) or the Euler-Lagrange equations (EL). The two sets of equations provide the same solution in different forms: EL leads to a sequence of optimal control vectors, called Feedforward Optimal Control (FOC); HJB yields a nonlinear optimal feedback controller, called Dynamic Programming (DP). DP produces an optimal solution that can reject disturbances and uncertainties as a result of feedback. Unfortunately, computation and storage requirements associated with DP solutions can be problematic, especially for high-order nonlinear systems. This dissertation presents an approximate technique for solving the DP problem based on neural network techniques that provides many of the performance benefits (e.g., optimality and feedback) of DP and benefits from the numerical properties of neural networks. We formulate neural networks to approximate optimal feedback solutions whose existence DP justifies. We show the conditions under which NDP closely approximates the optimal solution. Finally, we introduce the learning operator characterizing the learning process of the neural network in searching the optimal solution. The analysis of the learning operator provides not only a fundamental understanding of the learning process in neural networks but also useful guidelines for selecting the number of weights of the neural network. 
As a result, NDP finds---with a reasonable amount of computation and storage---the optimal feedback solutions to nonlinear MIMO control problems that would be very difficult to solve with DP. NDP was demonstrated on several applications such as the lateral autopilot logic for a Boeing 747, the minimum fuel control of a double-integrator plant with bounded control, the backward steering of a two-trailer truck, and the set-point control of a two-link robot arm.
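The contrast the dissertation draws between feedback (HJB/DP) and feedforward (EL) solutions can be seen in the simplest tractable case: scalar finite-horizon LQR, where backward dynamic programming (the Riccati recursion) yields a sequence of feedback gains rather than an open-loop control sequence. The plant and cost weights below are arbitrary illustrative values, and this closed-form case is exactly what NDP approximates when no analytic solution exists.

```python
def lqr_feedback_gains(a, b, q, r, horizon):
    """Backward Riccati recursion for scalar finite-horizon LQR:
    dynamics x_{k+1} = a*x_k + b*u_k, stage cost q*x^2 + r*u^2.
    Dynamic programming yields feedback gains u_k = -K_k * x_k,
    i.e. a controller that reacts to the measured state."""
    p = q                      # terminal cost weight P_N = q (assumed)
    gains = []
    for _ in range(horizon):   # iterate backward in time
        k = (b * p * a) / (r + b * p * b)
        p = q + a * p * a - a * p * b * k
        gains.append(k)
    gains.reverse()            # gains[0] is the gain for the first step
    return gains
```

For a = b = q = r = 1 the gains converge to 1/φ ≈ 0.618 (φ the golden ratio), and applying u_k = -K_k x_k drives the state to zero; a feedforward (EL) solution would instead precompute the u_k sequence for one specific initial state.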
Jiang, Congbiao; Zou, Jianhua; Liu, Yu; Song, Chen; He, Zhiwei; Zhong, Zhenji; Wang, Jian; Yip, Hin-Lap; Peng, Junbiao; Cao, Yong
2018-06-15
Solution-processed electroluminescent tandem white quantum-dot light-emitting diodes (TWQLEDs) have the advantages of low cost, high efficiency, and a wide color gamut when combined with color filters, making them a promising backlight technology for high-resolution displays. However, TWQLEDs are rarely reported due to the challenge of designing device structures and the deterioration of film morphology when component layers are deposited from solutions. Here, we report an interconnecting layer with the optical, electrical, and mechanical properties required for fully solution-processed TWQLEDs. The optimized TWQLEDs exhibit a state-of-the-art current efficiency as high as 60.4 cd/A and an extremely high external quantum efficiency of 27.3% at a luminance of 100,000 cd/m². A high color gamut of 124% of the NTSC 1931 standard can be achieved when combined with commercial color filters. These results represent the highest performance for solution-processed WQLEDs, unlocking the great application potential of TWQLEDs as backlights for new-generation displays.
NASA Technical Reports Server (NTRS)
Nguyen, Nhan; Ting, Eric; Chaparro, Daniel; Drew, Michael; Swei, Sean
2017-01-01
As aircraft wings become much more flexible due to the use of lightweight composite materials, adverse aerodynamics at off-design conditions can result from changes in wing shape due to aeroelastic deflections. Increased drag, and hence increased fuel burn, is a potential consequence. Without means for aeroelastic compensation, the benefit of weight reduction from the use of lightweight materials could be offset by less optimal aerodynamic performance at off-design flight conditions. Performance Adaptive Aeroelastic Wing (PAAW) technology can potentially address these technical challenges for future flexible-wing transports. PAAW technology leverages multi-disciplinary solutions to maximize the aerodynamic performance payoff of future adaptive wing designs, while simultaneously addressing operational constraints that can prevent the optimal aerodynamic performance from being realized. These operational constraints include reduced flutter margins, increased airframe responses to gust and maneuver loads, pilot handling qualities, and ride qualities. These constraints, together with the goal of optimal aerodynamic performance, present a multi-objective flight control problem. The paper presents a multi-objective flight control approach based on a drag-cognizant optimal control method. A concept of virtual control, which was previously introduced, is implemented to address the pair-wise flap motion constraints imposed by the elastomer material. This method is shown to be able to satisfy the constraints. Real-time drag minimization control is considered an important consideration for PAAW technology. Drag minimization control has many technical challenges, such as sensing and control. An initial outline of a real-time drag minimization control has already been developed and will be further investigated in the future.
A simulation study of a multi-objective flight control for a flight path angle command with aeroelastic mode suppression and drag minimization demonstrates the effectiveness of the proposed solution. In-flight structural loads are also an important consideration. As wing flexibility increases, maneuver load and gust load responses can be significant and therefore can pose safety and flight control concerns. In this paper, we will extend the multi-objective flight control framework to include load alleviation control. The study will focus initially on maneuver load minimization control, and then subsequently will address gust load alleviation control in future work.
NASA Astrophysics Data System (ADS)
Heredia-Munoz, Manuel Antonio
Water is perhaps the most important resource that sustains human life. According to the World Health Organization (WHO), almost two billion people do not have access to the water required to satisfy their daily needs, and one billion do not have access to clean sources of water for consumption, most of them living in isolated and poor areas around the globe. Poor-quality water increases the risk of cholera, typhoid fever, dysentery, and other water-borne illness, making this a real crisis that humankind is facing. Several water disinfection technologies have been proposed as solutions to this problem. Solar water disinfection using TiO2-coated PET bottles is the alternative studied in this work. This technology not only inactivates bacteria but also breaks down organic chemicals that can be present in water. The objectives of this work address the optimization of the TiO2-coated PET bottle technology. Among the most important findings were an improved bottle-coating process, using two coats of 10% w/v TiO2 in a solution of vinegar and sodium bicarbonate to form the TiO2 film; the use of a different indigo carmine concentration (1.25 × 10⁻¹ mg/pill) in the pill indicator of contamination; an increase in the disinfection rate achieved by shaking the bottles; degradation under intermittent UV radiation; and the effect of bottle size on photocatalytic water disinfection. A new mathematical model that better describes photocatalytic water disinfection in TiO2-coated bottles and simulates water disinfection under different working conditions was another important achievement. These results can now be used to design a strategy for disseminating this technology in areas where it is required and, in that way, generate the greatest positive impact on the people needing safe drinking water.
A graph decomposition-based approach for water distribution network optimization
NASA Astrophysics Data System (ADS)
Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.; Deuerlein, Jochen W.
2013-04-01
A novel optimization approach for water distribution network design is proposed in this paper. Using graph theory algorithms, a full water network is first decomposed into different subnetworks based on the connectivity of the network's components. The original whole network is simplified to a directed augmented tree, in which the subnetworks are substituted by augmented nodes and directed links are created to connect them. Differential evolution (DE) is then employed to optimize each subnetwork based on the sequence specified by the assigned directed links in the augmented tree. Rather than optimizing the original network as a whole, the subnetworks are sequentially optimized by the DE algorithm. A solution choice table is established for each subnetwork (except for the subnetwork that includes a supply node) and the optimal solution of the original whole network is finally obtained by use of the solution choice tables. Furthermore, a preconditioning algorithm is applied to the subnetworks to produce an approximately optimal solution for the original whole network. This solution specifies promising regions for the final optimization algorithm to further optimize the subnetworks. Five water network case studies are used to demonstrate the effectiveness of the proposed optimization method. A standard DE algorithm (SDE) and a genetic algorithm (GA) are applied to each case study without network decomposition to enable a comparison with the proposed method. The results show that the proposed method consistently outperforms the SDE and GA (both with tuned parameters) in terms of both the solution quality and efficiency.
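The connectivity-based decomposition step can be sketched in a few lines: bridge edges of the network graph mark where the system splits into subnetworks that can then be optimized sequentially. This is a minimal illustration of the idea, not the authors' implementation, and the toy network below is assumed.

```python
def find_bridges(n, edges):
    """Find bridge edges of an undirected graph via one DFS pass
    (Tarjan's low-link method). Removing a bridge disconnects the
    graph, so bridges mark where a network splits into subnetworks."""
    adj = [[] for _ in range(n)]
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    disc = [-1] * n      # discovery time of each node (-1 = unvisited)
    low = [0] * n        # lowest discovery time reachable from subtree
    bridges = []
    timer = [0]

    def dfs(u, parent_edge):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v, ei in adj[u]:
            if ei == parent_edge:
                continue
            if disc[v] == -1:
                dfs(v, ei)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:      # no back edge over (u, v)
                    bridges.append(edges[ei])
            else:
                low[u] = min(low[u], disc[v])

    for s in range(n):
        if disc[s] == -1:
            dfs(s, -1)
    return bridges

# Toy network: two looped subnetworks joined by a single connecting path.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 6), (6, 4)]
print(sorted(find_bridges(7, edges)))  # → [(2, 3), (3, 4)]
```

The two loops (nodes 0-2 and 4-6) are the augmented nodes of the directed tree; the bridges are the directed links connecting them.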
Cakar, Tarik; Koker, Rasit
2015-01-01
A particle swarm optimization (PSO) algorithm has been used to solve the single-machine total weighted tardiness problem (SMTWT) with unequal release dates. Three different solution approaches were combined to find the best solutions. To build the sub-hybrid solution system, genetic algorithms (GA) and simulated annealing (SA) were used. In the sub-hybrid system, GA obtains a solution at any stage; that solution is passed to SA and used as its initial solution. When SA finds a better solution, it stops and returns this solution to GA. After GA finishes, the resulting solution is given to PSO, which searches for a further improvement and then sends its result back to GA. In this way, the three solution methods work together. The neuro-hybrid system uses PSO as the main optimizer, with SA and GA serving as local search tools; at each stage, the local optimizers perform exploitation around the best particle. In addition to the local search tools, a neurodominance rule (NDR) has been used to improve the performance of the final solution of the hybrid PSO system. The NDR checks sequential jobs according to the total weighted tardiness criterion. The whole system is named the neuro-hybrid PSO solution system.
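The objective that all three metaheuristics share can be made concrete with a short sketch. The four-job instance below is invented for illustration, and exhaustive enumeration stands in for the hybrid search, which matters only at larger scale:

```python
import itertools

def total_weighted_tardiness(seq, release, proc, due, weight):
    """Objective of the SMTWT problem with unequal release dates:
    jobs run in the given sequence, and a job cannot start before
    its release date."""
    t = cost = 0
    for j in seq:
        t = max(t, release[j]) + proc[j]        # completion time of job j
        cost += weight[j] * max(0, t - due[j])  # weighted tardiness of job j
    return cost

# Small assumed instance (4 jobs), solved exactly by enumeration.
release = [0, 2, 4, 1]
proc    = [3, 2, 4, 2]
due     = [4, 6, 12, 5]
weight  = [2, 1, 3, 2]

best = min(itertools.permutations(range(4)),
           key=lambda s: total_weighted_tardiness(s, release, proc, due, weight))
print(best, total_weighted_tardiness(best, release, proc, due, weight))
```

For n jobs the search space has n! sequences, which is why the abstract's hybrid of GA, SA, and PSO replaces enumeration for realistic instances.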
Final Report - Stationary and Emerging Market Fuel Cell System Cost Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Contini, Vince; Heinrichs, Mike; George, Paul
The U.S. Department of Energy (DOE) is focused on providing a portfolio of technology solutions to meet energy security challenges of the future. Fuel cells are a part of this portfolio of technology offerings. To help meet these challenges and supplement the understanding of the current research, Battelle has executed a five-year program that evaluated the total system costs and total ownership costs of two technologies: (1) an ~80 °C polymer electrolyte membrane fuel cell (PEMFC) technology and (2) a solid oxide fuel cell (SOFC) technology, operating with hydrogen or reformate for different applications. Previous research conducted by Battelle, and more recently by other research institutes, suggests that fuel cells can offer customers significant fuel and emission savings along with other benefits compared to incumbent alternatives. For this project, Battelle has applied a proven cost assessment approach to assist the DOE Fuel Cell Technologies Program in making decisions regarding research and development, scale-up, and deployment of fuel cell technology. The cost studies and subsequent reports provide accurate projections of current system costs and the cost impact of state-of-the-art technologies in manufacturing, increases in production volume, and changes to system design on system cost and life cycle cost for several near-term and emerging fuel cell markets. The studies also provide information on types of manufacturing processes that must be developed to commercialize fuel cells and also provide insights into the optimization needed for use of off-the-shelf components in fuel cell systems. Battelle’s analysis is intended to help DOE prioritize investments in research and development of components to reduce the costs of fuel cell systems while considering systems optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghatikar, Girish; Mashayekh, Salman; Stadler, Michael
Distributed power systems in the U.S. and globally are evolving to provide reliable and clean energy to consumers. In California, existing regulations require significant increases in renewable generation, as well as identification of customer-side distributed energy resources (DER) controls, communication technologies, and standards for interconnection with the electric grid systems. As DER deployment expands, customer-side DER control and optimization will be critical for system flexibility and demand response (DR) participation, which improves the economic viability of DER systems. Current DER systems integration and communication challenges include leveraging the existing DER and DR technology and systems infrastructure, and enabling optimized cost, energy, and carbon choices for customers to deploy interoperable grid transactions and renewable energy systems at scale. Our paper presents a cost-effective solution to these challenges by exploring communication technologies and information models for DER system integration and interoperability. This system uses open standards and optimization models for resource planning based on dynamic-pricing notifications and autonomous operations within various domains of the smart grid energy system. It identifies architectures and customer engagement strategies in dynamic DR pricing transactions to generate feedback information models for load flexibility, load profiles, and participation schedules. The models are tested at a real site in California—Fort Hunter Liggett (FHL). Furthermore, our results for FHL show that the model fits within the existing and new DR business models and networked systems for transactive energy concepts. Integrated energy systems, communication networks, and modeling tools that coordinate supply-side networks and DER will enable electric grid system operators to use DER for grid transactions in an integrated system.
Sparseness- and continuity-constrained seismic imaging
NASA Astrophysics Data System (ADS)
Herrmann, Felix J.
2005-04-01
Non-linear solution strategies to the least-squares seismic inverse-scattering problem with sparseness and continuity constraints are proposed. Our approach is designed to (i) deal with substantial amounts of additive noise (SNR < 0 dB); (ii) use the sparseness and locality (both in position and angle) of directional basis functions (such as curvelets and contourlets) on the model: the reflectivity; and (iii) exploit the near invariance of these basis functions under the normal operator, i.e., the scattering-followed-by-imaging operator. Signal-to-noise ratio and the continuity along the imaged reflectors are significantly enhanced by formulating the solution of the seismic inverse problem in terms of an optimization problem. During the optimization, sparseness on the basis and continuity along the reflectors are imposed by jointly minimizing the l1- and anisotropic diffusion/total-variation norms on the coefficients and reflectivity, respectively. [Joint work with Peyman P. Moghaddam, carried out as part of the SINBAD project, with financial support secured through ITF (the Industry Technology Facilitator) from the following organizations: BG Group, BP, ExxonMobil, and SHELL. Additional funding came from NSERC Discovery Grant 22R81254.]
Klamt, Steffen; Müller, Stefan; Regensburger, Georg; Zanghellini, Jürgen
2018-05-01
The optimization of metabolic rates (as linear objective functions) represents the methodical core of flux-balance analysis techniques which have become a standard tool for the study of genome-scale metabolic models. Besides (growth and synthesis) rates, metabolic yields are key parameters for the characterization of biochemical transformation processes, especially in the context of biotechnological applications. However, yields are ratios of rates, and hence the optimization of yields (as nonlinear objective functions) under arbitrary linear constraints is not possible with current flux-balance analysis techniques. Despite the fundamental importance of yields in constraint-based modeling, a comprehensive mathematical framework for yield optimization is still missing. We present a mathematical theory that allows one to systematically compute and analyze yield-optimal solutions of metabolic models under arbitrary linear constraints. In particular, we formulate yield optimization as a linear-fractional program. For practical computations, we transform the linear-fractional yield optimization problem to a (higher-dimensional) linear problem. Its solutions determine the solutions of the original problem and can be used to predict yield-optimal flux distributions in genome-scale metabolic models. For the theoretical analysis, we consider the linear-fractional problem directly. Most importantly, we show that the yield-optimal solution set (like the rate-optimal solution set) is determined by (yield-optimal) elementary flux vectors of the underlying metabolic model. However, yield- and rate-optimal solutions may differ from each other, and hence optimal (biomass or product) yields are not necessarily obtained at solutions with optimal (growth or synthesis) rates. Moreover, we discuss phase planes/production envelopes and yield spaces; in particular, we prove that yield spaces are convex and provide algorithms for their computation.
We illustrate our findings by a small example and demonstrate their relevance for metabolic engineering with realistic models of E. coli. We develop a comprehensive mathematical framework for yield optimization in metabolic models. Our theory is particularly useful for the study and rational modification of cell factories designed under given yield and/or rate requirements. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
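The distinction between rate-optimal and yield-optimal flux distributions can be illustrated with a brute-force sketch on a toy two-pathway model. The stoichiometry, yields, and capacities below are invented for illustration; a real analysis would use the linear-fractional programming machinery described above rather than a grid search.

```python
import itertools

# Toy model (assumed numbers): substrate uptake v_s = u1 + u2 feeds an
# efficient pathway (product yield 0.9, capacity 2) and an overflow
# pathway (product yield 0.4, capacity 10); v_p is the product rate.
def fluxes(u1, u2):
    v_s = u1 + u2
    v_p = 0.9 * u1 + 0.4 * u2
    return v_s, v_p

grid_u1 = [i / 50 for i in range(101)]   # u1 in [0, 2]
grid_u2 = [i / 50 for i in range(501)]   # u2 in [0, 10]
best_rate = best_yield = None
for u1, u2 in itertools.product(grid_u1, grid_u2):
    v_s, v_p = fluxes(u1, u2)
    if v_s == 0:
        continue
    if best_rate is None or v_p > best_rate[0]:
        best_rate = (v_p, v_p / v_s)         # maximize the rate v_p
    if best_yield is None or v_p / v_s > best_yield[1]:
        best_yield = (v_p, v_p / v_s)        # maximize the yield v_p / v_s

# Rate-optimal flux uses both pathways; yield-optimal flux uses only
# the efficient one, at a much lower production rate.
print("rate-optimal:  v_p=%.2f  yield=%.2f" % best_rate)
print("yield-optimal: v_p=%.2f  yield=%.2f" % best_yield)
```

This reproduces the qualitative statement of the abstract: the yield-optimal solution (yield 0.9) is not a rate-optimal solution (rate 5.8 at yield ≈ 0.48).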
A Cascade Optimization Strategy for Solution of Difficult Multidisciplinary Design Problems
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.; Berke, Laszlo
1996-01-01
A research project to comparatively evaluate 10 nonlinear optimization algorithms was recently completed. A conclusion was that no single optimizer could successfully solve all 40 problems in the test bed, even though most optimizers successfully solved at least one-third of the problems. We realized that improved search directions and step lengths, available in the 10 optimizers compared, were not likely to alleviate the convergence difficulties. For the solution of those difficult problems we have devised an alternative approach called cascade optimization strategy. The cascade strategy uses several optimizers, one followed by another in a specified sequence, to solve a problem. A pseudorandom scheme perturbs design variables between the optimizers. The cascade strategy has been tested successfully in the design of supersonic and subsonic aircraft configurations and air-breathing engines for high-speed civil transport applications. These problems could not be successfully solved by an individual optimizer. The cascade optimization strategy, however, generated feasible optimum solutions for both aircraft and engine problems. This paper presents the cascade strategy and solutions to a number of these problems.
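A minimal sketch of the cascade idea follows, with a simple stochastic hill-climber standing in for the individual optimizers. The test function and perturbation scheme are assumptions for illustration, not the paper's actual optimizers:

```python
import math
import random

def rastrigin(x):
    """Multimodal test function with many local minima (global minimum 0 at the origin)."""
    return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def hill_climb(f, x, step, iters=2000, seed=0):
    """A simple stochastic local optimizer standing in for one optimizer in the cascade."""
    rng = random.Random(seed)
    best, fbest = list(x), f(x)
    for _ in range(iters):
        cand = [xi + rng.gauss(0, step) for xi in best]
        fc = f(cand)
        if fc < fbest:
            best, fbest = cand, fc
    return best, fbest

def cascade(f, x0, stages=4, seed=0):
    """Cascade strategy sketch: run optimizers one after another in a
    specified sequence, pseudorandomly perturbing the design variables
    between stages; only improvements are accepted."""
    rng = random.Random(seed + 100)
    x, fx = hill_climb(f, x0, step=0.5, seed=seed)          # first optimizer
    for s in range(stages):
        perturbed = [xi + rng.gauss(0, 0.3) for xi in x]    # perturb design vars
        x2, f2 = hill_climb(f, perturbed, step=0.5 / (s + 2), seed=seed + s + 1)
        if f2 < fx:
            x, fx = x2, f2
    return x, fx

x0 = [3.3, -2.6]
single = hill_climb(rastrigin, x0, step=0.5, seed=0)[1]
x, fx = cascade(rastrigin, x0)
print(fx <= single)  # → True: the cascade never does worse than its first stage
```

Because later stages start from perturbed copies of the current best design, the cascade can escape local minima where a single optimizer stalls.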
Multiobjective Optimization of Low-Energy Trajectories Using Optimal Control on Dynamical Channels
NASA Technical Reports Server (NTRS)
Coffee, Thomas M.; Anderson, Rodney L.; Lo, Martin W.
2011-01-01
We introduce a computational method to design efficient low-energy trajectories by extracting initial solutions from dynamical channels formed by invariant manifolds, and improving these solutions through variational optimal control. We consider trajectories connecting two unstable periodic orbits in the circular restricted 3-body problem (CR3BP). Our method leverages dynamical channels to generate a range of solutions, and approximates the Pareto front for impulse and time of flight through a multiobjective optimization of these solutions based on primer vector theory. We demonstrate the application of our method to a libration orbit transfer in the Earth-Moon system.
Fuel-optimal low-thrust formation reconfiguration via Radau pseudospectral method
NASA Astrophysics Data System (ADS)
Li, Jing
2016-07-01
This paper investigates fuel-optimal low-thrust formation reconfiguration near circular orbit. Based on the Clohessy-Wiltshire equations, first-order necessary optimality conditions are derived from Pontryagin's maximum principle. The fuel-optimal impulsive solution is utilized to divide the low-thrust trajectory into thrust and coast arcs. By introducing the switching times as optimization variables, the fuel-optimal low-thrust formation reconfiguration is posed as a nonlinear programming problem (NLP) via direct transcription using the multiple-phase Radau pseudospectral method (RPM), which is then solved by the sparse nonlinear optimization software SNOPT. To facilitate optimality verification and, if necessary, further refinement of the optimized solution of the NLP, formulas for mass costate estimation and initial costate scaling are presented. Numerical examples are given to show the application of the proposed optimization method. In practice, generic fuel-optimal low-thrust formation reconfiguration can be simplified to reconfiguration without any initial and terminal coast arcs, whose optimal solutions can be efficiently obtained from the multiple-phase RPM at the cost of a slight fuel increment. Finally, the influence of the specific impulse and maximum thrust magnitude on the fuel-optimal low-thrust formation reconfiguration is analyzed. Numerical results show the links and differences between the fuel-optimal impulsive and low-thrust solutions.
On the dynamic rounding-off in analogue and RF optimal circuit sizing
NASA Astrophysics Data System (ADS)
Kotti, Mouna; Fakhfakh, Mourad; Fino, Maria Helena
2014-04-01
Frequently used approaches to solving discrete multivariable optimisation problems consist of computing solutions using a continuous optimisation technique and then, using heuristics, rounding off the variables to their nearest available discrete values to obtain a discrete solution. Indeed, in many engineering problems, and particularly in analogue circuit design, component values, such as the geometric dimensions of the transistors, the number of fingers in an integrated capacitor or the number of turns in an integrated inductor, cannot be chosen arbitrarily, since they have to obey technology sizing constraints. However, rounding off the variable values a posteriori can lead to infeasible solutions (solutions located too close to the feasible-region frontier) or to degradation of the obtained results (expulsion from the neighbourhood of a 'sharp' optimum), depending on how the added perturbation affects the solution. Discrete optimisation techniques, such as the dynamic rounding-off (DRO) technique, are therefore needed to overcome this situation. In this paper, we deal with an improvement of the DRO technique. We propose a particle swarm optimisation (PSO)-based DRO technique, and we show, via some analogue and RF examples, the necessity of implementing such a routine within continuous optimisation algorithms.
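The difference between a posteriori rounding and dynamic rounding-off can be sketched with a toy PSO in which candidate solutions are snapped to the discrete value set during evaluation, so the search itself sees the discrete objective. The component series, the sizing objective, and the PSO constants below are all assumptions for illustration:

```python
import random

# Discrete values a design variable may take (an E12-style component
# series, assumed here for illustration).
ALLOWED = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]

def snap(x):
    """Dynamic rounding-off: project a continuous value onto the nearest
    allowed discrete value during the search itself, rather than once
    after a continuous optimum has been found."""
    return min(ALLOWED, key=lambda v: abs(v - x))

def objective(r, c):
    """Toy sizing goal: hit a target product r*c = 10 (assumed)."""
    return abs(r * c - 10.0)

def discrete_pso(n_particles=20, iters=100, seed=3):
    rng = random.Random(seed)
    pos = [[rng.uniform(1, 8), rng.uniform(1, 8)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [list(p) for p in pos]
    pbest_f = [objective(snap(p[0]), snap(p[1])) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]
    for _ in range(iters):
        for i, p in enumerate(pos):
            for d in range(2):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - p[d])
                             + 1.5 * rng.random() * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            f = objective(snap(p[0]), snap(p[1]))  # evaluate on snapped values
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = list(p), f
                if f < gbest_f:
                    gbest, gbest_f = list(p), f
    return snap(gbest[0]), snap(gbest[1]), gbest_f

r, c, err = discrete_pso()
print(r, c)  # a discrete pair whose product is close to the target 10
```

Because every fitness evaluation uses the snapped values, the swarm cannot converge to a continuous optimum that becomes infeasible or poor once rounded.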
Emergency Communications Network for Disasters Management in Venezuela
NASA Astrophysics Data System (ADS)
Burguillos, C.; Deng, H.
2018-04-01
The integration and use of different space technology applications for disaster management play an important role in preventing the causes and mitigating the effects of natural disasters. Space technology offers the appropriate technological resources to provide the accurate and timely information required to support decision making in case of disasters. Considering these aspects, this research presents the design and implementation of an Emergency Communications Network for Disaster Management in Venezuela. The network is based on a topology that integrates the satellite platforms in orbit operated by the Venezuelan state, namely the communications satellite VENESAT-1 and the remote sensing satellites VRSS-1 and VRSS-2, as well as their ground stations, with the aim of implementing an emergency communications network to be activated in case of disasters that affect the public and private communications infrastructure in Venezuela. To design the network, several technical and operational specifications were formulated, among them: emergency strategies to maneuver the VRSS-1 and VRSS-2 satellites for optimal image capture and processing, characterization of the VENESAT-1 transponders and radio frequencies for emergency communications services, formulation of technology solutions, and communications link design for disaster management. As a result, the emergency network designed allows diverse communications technology solutions and different media or schemes to be put into practice for exchanging images between the areas affected by disasters and the entities involved in disaster management tasks, providing useful data for emergency response and infrastructure recovery.
Small Spacecraft System-Level Design and Optimization for Interplanetary Trajectories
NASA Technical Reports Server (NTRS)
Spangelo, Sara; Dalle, Derek; Longmier, Ben
2014-01-01
The feasibility of an interplanetary mission for a CubeSat, a type of miniaturized spacecraft, using an emerging technology, the CubeSat Ambipolar Thruster (CAT), is investigated. CAT is a large delta-V propulsion system that uses a high-density plasma source miniaturized for small spacecraft applications. An initial assessment demonstrating the feasibility of escaping Low Earth Orbit (LEO) and achieving Earth-escape trajectories with a 3U CubeSat and this thruster technology was presented in previous work. We examine a mission architecture with a trajectory that begins in an Earth orbit such as LEO or Geostationary Earth Orbit (GEO), escapes Earth orbit, and travels to Mars, Jupiter, or Saturn. The goal was to minimize travel time to the destinations while considering trade-offs between spacecraft dry mass, fuel mass, and solar power array size. Sensitivities to spacecraft dry mass and available power are considered. CubeSats are extremely size-, mass-, and power-constrained, and their subsystems are tightly coupled, limiting their performance potential. System-level modeling, simulation, and optimization approaches are necessary to find feasible and optimal operational solutions and to ensure system-level interactions are modeled. Thus, propulsion, power/energy, attitude, and orbit transfer models are integrated to enable system-level analysis and trades. The CAT technology broadens the possible missions achievable with small satellites. In particular, it enables more sophisticated maneuvers by small spacecraft, such as polar orbit insertion from an equatorial orbit, LEO-to-GEO transfers, Earth-escape trajectories, and transfers to other interplanetary bodies. This work lays the groundwork for upcoming CubeSat launch opportunities and supports future development of interplanetary and constellation CubeSat and small satellite mission concepts.
NASA Astrophysics Data System (ADS)
Nicholson, B.; Klise, K. A.; Laird, C. D.; Ravikumar, A. P.; Brandt, A. R.
2017-12-01
In order to comply with current and future methane emissions regulations, natural gas producers must develop emissions monitoring strategies for their facilities. In addition, regulators must develop air monitoring strategies over wide areas incorporating multiple facilities. However, in both of these cases, only a limited number of sensors can be deployed. With a wide variety of sensors to choose from in terms of cost, precision, accuracy, spatial coverage, location, orientation, and sampling frequency, it is difficult to design robust monitoring strategies for different scenarios while systematically considering the tradeoffs between different sensor technologies. In addition, the geography, weather, and other site-specific conditions can have a large impact on the performance of a sensor network. In this work, we demonstrate methods for calculating optimal sensor networks. Our approach can incorporate tradeoffs between vastly different sensor technologies, optimize over typical wind conditions for a particular area, and consider different objectives such as time to detection or geographic coverage. We do this by pre-computing site-specific scenarios and using them as input to a mixed-integer, stochastic programming problem that solves for a sensor network that maximizes the effectiveness of the detection program. Our methods and approach have been incorporated within an open source Python package called Chama with the goal of providing facility operators and regulators with tools for designing more effective and efficient monitoring systems. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
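The scenario-based formulation can be illustrated with a small sketch. Here a greedy heuristic stands in for the mixed-integer stochastic program (Chama itself formulates an optimization problem over precomputed scenarios), and the detection-time data are invented:

```python
# Hypothetical precomputed impact data: detect_time[s][k] is the time at
# which candidate sensor location s would detect leak scenario k, or
# UNDETECTED if it never does. The stochastic program in the text
# minimizes expected detection time over scenarios; the greedy heuristic
# below picks, at each step, the sensor giving the largest reduction.
UNDETECTED = 1000.0  # penalty time for an undetected scenario

detect_time = {
    "s1": [5.0, UNDETECTED, 30.0, UNDETECTED],
    "s2": [UNDETECTED, 10.0, UNDETECTED, 40.0],
    "s3": [20.0, 15.0, UNDETECTED, 50.0],
    "s4": [UNDETECTED, UNDETECTED, 25.0, 35.0],
}

def expected_detection_time(chosen, n_scenarios=4):
    """Mean over scenarios of the earliest detection by any chosen sensor."""
    total = 0.0
    for k in range(n_scenarios):
        total += min(detect_time[s][k] for s in chosen)
    return total / n_scenarios

def greedy_placement(budget):
    chosen = []
    for _ in range(budget):
        best = min(set(detect_time) - set(chosen),
                   key=lambda s: expected_detection_time(chosen + [s]))
        chosen.append(best)
    return chosen, expected_detection_time(chosen)

chosen, t = greedy_placement(budget=2)
print(chosen, t)  # → ['s3', 's4'] 23.75
```

With this data, the two-sensor greedy choice covers all four scenarios; the full stochastic program would additionally weight scenarios by probability and enforce sensor-technology constraints.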
NASA Astrophysics Data System (ADS)
Zheng, Y.; Chen, J.
2017-09-01
A modified multi-objective particle swarm optimization method is proposed for obtaining Pareto-optimal solutions effectively. Different from traditional multi-objective particle swarm optimization methods, Kriging meta-models and the trapezoid index are introduced and integrated with the traditional one. Kriging meta-models are built to match expensive or black-box functions. By applying Kriging meta-models, function evaluation numbers are decreased and the boundary Pareto-optimal solutions are identified rapidly. For bi-objective optimization problems, the trapezoid index is calculated as the sum of the trapezoid areas formed by the Pareto-optimal solutions and one objective axis. It can serve as a measure of whether the Pareto-optimal solutions converge to the Pareto front. Illustrative examples indicate that, to obtain Pareto-optimal solutions, the proposed method needs fewer function evaluations than the traditional multi-objective particle swarm optimization method and the non-dominated sorting genetic algorithm II method, and both the accuracy and the computational efficiency are improved. The proposed method is also applied to the design of a deepwater composite riser example in which the structural performances are calculated by numerical analysis. The design aim is to enhance the tensile strength and minimize the cost. Under the buckling constraint, the optimal trade-off of tensile strength and material volume is obtained. The results demonstrate that the proposed method can effectively deal with multi-objective optimizations with black-box functions.
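For the bi-objective case, the trapezoid index described above reduces to a few lines. This is a sketch under the assumption that the front is sorted by the first objective; the sample points are invented:

```python
def trapezoid_index(points):
    """Sum of the trapezoid areas formed by consecutive Pareto-optimal
    points and the first-objective axis (bi-objective case). When the
    set of nondominated points stops changing, the index stabilizes,
    which can serve as a convergence check for the Pareto front."""
    pts = sorted(points)  # sort by the first objective
    area = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
        area += (x2 - x1) * (y1 + y2) / 2.0  # trapezoid against the f1 axis
    return area

# Assumed nondominated points of a bi-objective minimization problem.
front = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.25)]
print(trapezoid_index(front))  # → 0.5625
```

In the swarm loop, the index would be recomputed each generation; a stable value across generations suggests the approximation of the front has converged.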
High-productivity DRIE solutions for 3D-SiP and MEMS volume manufacturing
NASA Astrophysics Data System (ADS)
Puech, M.; Thevenoud, J. M.; Launay, N.; Arnal, N.; Godinat, P.; Andrieu, B.; Gruffat, J. M.
2006-12-01
Emerging 3D-SiP technologies and high-volume MEMS applications require high-productivity mass production DRIE systems. The Alcatel DRIE product range has recently been optimized to reach the highest process and hardware production performances. A study based on sub-micron high-aspect-ratio structures encountered in the most stringent 3D-SiP applications has been carried out. The optimization of the Bosch process parameters has shown ultra-high silicon etch rates, with unrivaled uniformity and repeatability, leading to excellent process yields. In parallel, the most recent hardware and proprietary design optimizations, including vacuum pumping lines, process chamber, wafer chucks, pressure control system, and gas delivery, are discussed. A key factor in achieving the highest performances was Alcatel's recognized expertise in vacuum and plasma science technologies. These improvements have been monitored in a mass production environment for a mobile phone application. Field data analysis shows a significant reduction in cost of ownership thanks to increased throughput and much lower running costs. These benefits are now available for all 3D-SiP and high-volume MEMS applications. Typical etched patterns include tapered trenches for CMOS imagers, through-silicon via holes for die stacking, well-controlled profile angles for 3D high-precision inertial sensors, and large exposed-area features for inkjet printer heads and silicon microphones.
Macromolecular Crystal Growth by Means of Microfluidics
NASA Technical Reports Server (NTRS)
vanderWoerd, Mark; Ferree, Darren; Spearing, Scott; Monaco, Lisa; Molho, Josh; Spaid, Michael; Brasseur, Mike; Curreri, Peter A. (Technical Monitor)
2002-01-01
We have performed a feasibility study in which we show that chip-based, microfluidic (LabChip(TM)) technology is suitable for protein crystal growth. This technology allows for accurate and reliable dispensing and mixing of very small volumes while minimizing bubble formation in the crystallization mixture. The amount of (protein) solution remaining after completion of an experiment is minimal, which makes this technique efficient and attractive for use with proteins that are difficult or expensive to obtain. The nature of LabChip(TM) technology renders it highly amenable to automation. Protein crystals obtained in our initial feasibility studies were of excellent quality as determined by X-ray diffraction. Subsequent to the feasibility study, we designed and produced the first LabChip(TM) device specifically for protein crystallization in batch mode. It can reliably dispense and mix a range of solution constituents into two independent growth wells. We are currently testing this design to prove its efficacy for protein crystallization optimization experiments. In the near future we will expand our design to incorporate up to 10 growth wells per LabChip(TM) device. Upon completion, additional crystallization techniques such as vapor diffusion and liquid-liquid diffusion will be accommodated. Macromolecular crystallization using microfluidic technology is envisioned as a fully automated system, which will use the 'tele-science' concept of remote operation and will be developed into a research facility for the International Space Station as well as on the ground.
Rapid Generation of Optimal Asteroid Powered Descent Trajectories Via Convex Optimization
NASA Technical Reports Server (NTRS)
Pinson, Robin; Lu, Ping
2015-01-01
This paper investigates a convex optimization based method that can rapidly generate the fuel optimal asteroid powered descent trajectory. The ultimate goal is to autonomously design the optimal powered descent trajectory on-board the spacecraft immediately prior to the descent burn. Compared to a planetary powered landing problem, the major difficulty is the complex gravity field near the surface of an asteroid that cannot be approximated by a constant gravity field. This paper uses relaxation techniques and a successive solution process that seeks the solution to the original nonlinear, nonconvex problem through the solutions to a sequence of convex optimal control problems.
Design Optimization Toolkit: Users' Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilo Valentin, Miguel Alejandro
The Design Optimization Toolkit (DOTk) is a stand-alone C++ software package intended to solve complex design optimization problems. The DOTk software package provides a range of solution methods suited for gradient- and nongradient-based optimization, large-scale constrained optimization, and topology optimization. DOTk was designed to have a flexible user interface to allow easy access to DOTk solution methods from external engineering software packages. This inherent flexibility makes DOTk minimally intrusive to other engineering software packages. As part of this flexibility, the DOTk software package provides an easy-to-use MATLAB interface that enables users to call DOTk solution methods directly from the MATLAB command window.
Exploratory High-Fidelity Aerostructural Optimization Using an Efficient Monolithic Solution Method
NASA Astrophysics Data System (ADS)
Zhang, Jenmy Zimi
This thesis is motivated by the desire to discover fuel efficient aircraft concepts through exploratory design. An optimization methodology based on tightly integrated high-fidelity aerostructural analysis is proposed, which has the flexibility, robustness, and efficiency to contribute to this goal. The present aerostructural optimization methodology uses an integrated geometry parameterization and mesh movement strategy, which was initially proposed for aerodynamic shape optimization. This integrated approach provides the optimizer with a large amount of geometric freedom for conducting exploratory design, while allowing for efficient and robust mesh movement in the presence of substantial shape changes. In extending this approach to aerostructural optimization, this thesis has addressed a number of important challenges. A structural mesh deformation strategy has been introduced to translate consistently the shape changes described by the geometry parameterization to the structural model. A three-field formulation of the discrete steady aerostructural residual couples the mesh movement equations with the three-dimensional Euler equations and a linear structural analysis. Gradients needed for optimization are computed with a three-field coupled adjoint approach. A number of investigations have been conducted to demonstrate the suitability and accuracy of the present methodology for use in aerostructural optimization involving substantial shape changes. Robustness and efficiency in the coupled solution algorithms is crucial to the success of an exploratory optimization. This thesis therefore also focuses on the design of an effective monolithic solution algorithm for the proposed methodology. This involves using a Newton-Krylov method for the aerostructural analysis and a preconditioned Krylov subspace method for the coupled adjoint solution. Several aspects of the monolithic solution method have been investigated. 
These include appropriate strategies for scaling and matrix-vector product evaluation, as well as block preconditioning techniques that preserve the modularity between subproblems. The monolithic solution method is applied to problems with varying degrees of fluid-structural coupling, as well as a wing span optimization study. The monolithic solution algorithm typically requires 20%-70% less computing time than its partitioned counterpart. This advantage increases with increasing wing flexibility. The performance of the monolithic solution method is also much less sensitive to the choice of the solution parameters.
Performance evaluation of the inverse dynamics method for optimal spacecraft reorientation
NASA Astrophysics Data System (ADS)
Ventura, Jacopo; Romano, Marcello; Walter, Ulrich
2015-05-01
This paper investigates the application of the inverse dynamics in the virtual domain method to Euler angles, quaternions, and modified Rodrigues parameters for rapid optimal attitude trajectory generation for spacecraft reorientation maneuvers. The impact of the virtual domain and attitude representation is numerically investigated for both minimum time and minimum energy problems. Owing to the nature of the inverse dynamics method, it yields sub-optimal solutions for minimum time problems. Furthermore, the virtual domain improves the optimality of the solution, but at the cost of more computational time. The attitude representation also affects solution quality and computational speed. For minimum energy problems, the optimal solution can be obtained without the virtual domain with any considered attitude representation.
Multiple shooting algorithms for jump-discontinuous problems in optimal control and estimation
NASA Technical Reports Server (NTRS)
Mook, D. J.; Lew, Jiann-Shiun
1991-01-01
Multiple shooting algorithms are developed for jump-discontinuous two-point boundary value problems arising in optimal control and optimal estimation. Examples illustrating the origin of such problems are given to motivate the development of the solution algorithms. The algorithms convert the necessary conditions, consisting of differential equations and transversality conditions, into algebraic equations. The solution of the algebraic equations provides exact solutions for linear problems. The existence and uniqueness of the solution are proved.
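The core idea, converting differential necessary conditions plus boundary conditions into algebraic equations, can be sketched with single shooting on a toy two-point boundary value problem. The jump conditions and multiple segments of the actual algorithms are omitted, and the BVP below is illustrative only:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# BVP: y'' = -y, y(0) = 0, y(pi/2) = 1. Shooting reduces the boundary
# conditions to one algebraic equation in the unknown initial slope s.
def boundary_residual(s):
    ivp = solve_ivp(lambda t, y: [y[1], -y[0]], (0.0, np.pi / 2),
                    [0.0, s[0]], rtol=1e-10, atol=1e-12)
    return [ivp.y[0, -1] - 1.0]   # enforce y(pi/2) = 1

s_opt = fsolve(boundary_residual, [0.5])
print(s_opt[0])  # ~1.0, since the exact solution is y = sin(t)
```

Multiple shooting would split the interval into segments, add continuity (and, for jump-discontinuous problems, jump) conditions between them, and solve the resulting larger algebraic system the same way.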
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Building science research supports installing exterior (soil side) foundation insulation as the optimal method to enhance the hygrothermal performance of new homes. With exterior foundation insulation, water management strategies are maximized while insulating the basement space and ensuring a more even temperature at the foundation wall. This project describes an innovative, minimally invasive foundation insulation upgrade technique on an existing home that uses hydrovac excavation technology combined with a liquid insulating foam. Cost savings over the traditional excavation process ranged from 23% to 50%. The excavationless process could result in even greater savings since replacement of building structures, exterior features, utility meters, and landscaping would be minimal or non-existent.
Groves, Maria AT; Amanuel, Lily; Campbell, Jamie I; Rees, D Gareth; Sridharan, Sudharsan; Finch, Donna K; Lowe, David C; Vaughan, Tristan J
2014-01-01
In vitro selection technologies are an important means of affinity maturing antibodies to generate the optimal therapeutic profile for a particular disease target. Here, we describe the isolation of a parent antibody, KENB061, using phage display and solution-phase selections with soluble biotinylated human IL-1R1. KENB061 was affinity matured using phage display and targeted mutagenesis of VH and VL CDR3 using NNS randomization. Affinity-matured VHCDR3 and VLCDR3 library blocks were recombined and selected using phage and ribosome display protocols. A direct comparison of the phage and ribosome display antibodies generated was made to determine their functional characteristics. PMID:24256948
Tanyimboh, Tiku T; Seyoum, Alemtsehay G
2016-12-01
This article investigates the computational efficiency of constraint handling in multi-objective evolutionary optimization algorithms for water distribution systems. The methodology investigated here encourages the co-existence and simultaneous development, including crossbreeding, of subpopulations of cost-effective feasible and infeasible solutions based on Pareto dominance. This yields a boundary search approach that also promotes diversity in the gene pool throughout the progress of the optimization by exploiting the full spectrum of non-dominated infeasible solutions. The relative effectiveness of small and moderate population sizes with respect to the number of decision variables is also investigated. The results reveal the optimization algorithm to be efficient, stable and robust; it found optimal and near-optimal solutions reliably and efficiently. The optimization problem, based on a real-world system, involved multiple variable-head supply nodes, 29 fire-fighting flows, extended-period simulation and multiple demand categories including water loss. The least-cost solutions found satisfied the flow and pressure requirements consistently. The best solutions achieved indicative savings of 48.1% and 48.2%, based on the cost of the pipes in the existing network, for populations of 200 and 1000, respectively. The population of 1000 achieved slightly better results overall. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
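As a minimal sketch of the Pareto-dominance machinery such algorithms rest on, the function below implements a standard dominance test with Deb-style constraint handling (feasible solutions first). Note that the boundary-search approach described above deliberately retains non-dominated infeasible solutions rather than always ranking them below feasible ones, so this is a simplified stand-in, not the paper's method:

```python
import numpy as np

# Dominance for minimization with Deb-style constraint handling:
# a feasible solution dominates an infeasible one, and among infeasible
# solutions the smaller constraint violation wins.
def dominates(f_a, f_b, viol_a=0.0, viol_b=0.0):
    if viol_a == 0.0 and viol_b > 0.0:
        return True
    if viol_a > 0.0 and viol_b == 0.0:
        return False
    if viol_a > 0.0 and viol_b > 0.0:
        return viol_a < viol_b
    f_a, f_b = np.asarray(f_a), np.asarray(f_b)
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

# Non-dominated front of a small feasible population (illustrative values)
pop = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
front = [p for p in pop if not any(dominates(q, p) for q in pop if q is not p)]
print(front)  # (3.0, 3.0) drops out: it is dominated by (2.0, 2.0)
```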
NASA Astrophysics Data System (ADS)
Paasche, H.; Tronicke, J.
2012-04-01
In many near-surface geophysical applications multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model. The final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently of the starting model. Additionally, they can be used to find sets of optimal models, allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence, characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching for food in a search space. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e. the quality of the model currently found by a particle, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is then not possible.
Instead, only statements about the Pareto optimality of the found solutions can be made. Identification of the leading particle traditionally requires a costly combination of ranking and niching techniques. In our approach, we use a decision rule under uncertainty to identify the currently leading particle of the swarm. In doing so, we consider the different objectives of our optimization problem as competing agents with partially conflicting interests. Analysis of the maximin fitness function allows for robust and cheap identification of the currently leading particle. The final optimization result comprises a set of possible models spread along the Pareto front. For convex Pareto fronts, solution density is expected to be maximal in the region ideally compromising all objectives, i.e. the region of highest curvature.
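The maximin fitness function used to pick the leading particle can be written down compactly. For an objective matrix F (rows = particles, minimization), particle i's fitness is the maximum over j != i of the minimum over objectives k of (F[i,k] - F[j,k]); negative values mark non-dominated particles. A sketch in NumPy, with an illustrative population:

```python
import numpy as np

# Maximin fitness (after Balling) for a population of objective vectors.
# fitness_i = max_{j != i} min_k (F[i,k] - F[j,k]); values < 0 mark
# non-dominated particles, and the swarm leader can be taken as the
# particle with the smallest maximin fitness.
def maximin_fitness(F):
    F = np.asarray(F, dtype=float)
    n = len(F)
    diff = F[:, None, :] - F[None, :, :]          # diff[i, j, k] = F_ik - F_jk
    min_k = diff.min(axis=2)                      # minimum over objectives
    min_k[np.arange(n), np.arange(n)] = -np.inf   # exclude j == i
    return min_k.max(axis=1)

F = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]])
fit = maximin_fitness(F)
print(fit)  # negative for the three non-dominated points, positive for (3, 3)
```

Because ties among non-dominated particles are common, a practical implementation would break them randomly or by crowding; that detail is omitted here.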
Laboratory automation: trajectory, technology, and tactics.
Markin, R S; Whalen, S A
2000-05-01
Laboratory automation is in its infancy, following a path parallel to the development of laboratory information systems in the late 1970s and early 1980s. Changes on the horizon in healthcare and clinical laboratory service that affect the delivery of laboratory results include the increasing age of the population in North America, the implementation of the Balanced Budget Act (1997), and the creation of disease management companies. Major technology drivers include outcomes optimization and phenotypically targeted drugs. Constant cost pressures in the clinical laboratory have forced diagnostic manufacturers into less than optimal profitability states. Laboratory automation can be a tool for the improvement of laboratory services and may decrease costs. The key to improvement of laboratory services is implementation of the correct automation technology. The design of this technology should be driven by required functionality. Automation design issues should be centered on the understanding of the laboratory and its relationship to healthcare delivery and the business and operational processes in the clinical laboratory. Automation design philosophy has evolved from a hardware-based approach to a software-based approach. Process control software supporting repeat testing, reflex testing, and transportation management, together with overall computer-integrated manufacturing approaches to laboratory automation implementation, are rapidly expanding areas. It is clear that hardware and software are functionally interdependent and that the interface between the laboratory automation system and the laboratory information system is a key component. The cost-effectiveness of automation solutions suggested by vendors, however, has been difficult to evaluate because the number of automation installations is small and the precision with which operational data have been collected to determine payback is suboptimal.
The trend in automation has moved from total laboratory automation to a modular approach, from a hardware-driven system to process control, from a one-of-a-kind novelty toward a standardized product, and from an in vitro diagnostics novelty to a marketing tool. Multiple vendors are present in the marketplace, many of whom are in vitro diagnostics manufacturers providing an automation solution coupled with their instruments, whereas others are focused automation companies. Automation technology continues to advance, acceptance continues to climb, and payback and cost justification methods are developing.
NASA Astrophysics Data System (ADS)
Basso, Stefano; Civitani, Marta; Pareschi, Giovanni; Buratti, Enrico; Eder, Josef; Friedrich, Peter; Fürmetz, Maria
2015-09-01
The Athena mission was selected as the second large-class mission, due for launch in 2028, in ESA's Cosmic Vision program. The current solution for the optics is based on the Silicon Pore Optics (SPO) technology, with the goal of 2 m² effective area at 1 keV (aperture about 3 m in diameter) and a focal length of 12 m. The SPO advantages are the compactness along the axial direction and the high conductivity of silicon. Recent developments in the fabrication of mirror shells based on Slumped Glass Optics (SGO) make this technology an attractive solution for the mirror modules of Athena or similar telescopes. The SGO advantages are a potentially high collecting area, with limited vignetting due to the lower shadowing, and the ability to curve the glass plates down to small radii of curvature. This study shows an alternative mirror design based on SGO technology, tailored to Athena's needs. The main challenges are the optimization of the manufacturing technology with respect to the required accuracy, and the thermal control of the large surface in conjunction with the low conductivity of the glass. A concept has been elaborated which considers the specific benefits of the SGO technology and provides efficient thermal control. The output of the study is a preliminary design substantiated by analyses and technological studies. The study proposes interfaces and predicts performances and budgets. It also describes how such a mirror system could be implemented as a modular assembly for an X-ray telescope with a large collecting area.
Removing Barriers for Effective Deployment of Intermittent Renewable Generation
NASA Astrophysics Data System (ADS)
Arabali, Amirsaman
The stochastic nature of intermittent renewable resources is the main barrier to effective integration of renewable generation. This problem can be studied from feeder-scale and grid-scale perspectives. Two new stochastic methods are proposed to meet the feeder-scale controllable load with a hybrid renewable generation (including wind and PV) and energy storage system. For the first method, an optimization problem is developed whose objective function is the cost of the hybrid system, including the cost of renewable generation and storage, subject to constraints on energy storage and shifted load. A smart-grid strategy is developed to shift the load and match the renewable energy generation and controllable load. Minimizing the cost function guarantees minimum PV and wind generation installation, as well as storage capacity selection for supplying the controllable load. A confidence coefficient is allocated to each stochastic constraint which shows to what degree the constraint is satisfied. In the second method, a stochastic framework is developed for optimal sizing and reliability analysis of a hybrid power system including renewable resources (PV and wind) and an energy storage system. The hybrid power system is optimally sized to satisfy the controllable load with a specified reliability level. A load-shifting strategy is added to provide more flexibility for the system and decrease the installation cost. Load-shifting strategies and their potential impacts on the hybrid system reliability/cost analysis are evaluated through different scenarios. Using a compromise-solution method, the best compromise between reliability and cost is realized for the hybrid system. For the second problem, a grid-scale stochastic framework is developed to examine the storage application and its optimal placement for the social cost and transmission congestion relief of wind integration.
Storage systems are optimally placed and adequately sized to minimize the sum of operation and congestion costs over a scheduling period. A technical assessment framework is developed to enhance the efficiency of wind integration and evaluate the economics of storage technologies and conventional gas-fired alternatives. The proposed method is used to carry out a cost-benefit analysis for the IEEE 24-bus system and determine the most economical technology. In order to mitigate the financial and technical concerns of renewable energy integration into the power system, a stochastic framework is proposed for transmission grid reinforcement studies in a power system with wind generation. A multi-stage multi-objective transmission network expansion planning (TNEP) methodology is developed which considers the investment cost, absorption of private investment and reliability of the system as the objective functions. A Non-dominated Sorting Genetic Algorithm (NSGA II) optimization approach is used in combination with a probabilistic optimal power flow (POPF) to determine the Pareto optimal solutions considering the power system uncertainties. Using a compromise-solution method, the best final plan is then realized based on the decision maker preferences. The proposed methodology is applied to the IEEE 24-bus Reliability Tests System (RTS) to evaluate the feasibility and practicality of the developed planning strategy.
Payne, Velma L; Medvedeva, Olga; Legowski, Elizabeth; Castine, Melissa; Tseytlin, Eugene; Jukic, Drazen; Crowley, Rebecca S
2009-11-01
Determine effects of a limited-enforcement intelligent tutoring system in dermatopathology on student errors, goals and solution paths. Determine if limited enforcement in a medical tutoring system inhibits students from learning the optimal and most efficient solution path. Describe the types of deviations from the optimal solution path that occur during tutoring, and how these deviations change over time. Determine if the size of the problem space (domain scope) has an effect on learning gains when using a tutor with limited enforcement. We analyzed data mined from 44 pathology residents using SlideTutor, a medical intelligent tutoring system in dermatopathology that teaches histopathologic diagnosis and reporting skills based on commonly used diagnostic algorithms. Two subdomains were included in the study, representing sub-algorithms of different sizes and complexities. Effects of the tutoring system on student errors, goal states and solution paths were determined. Students gradually increase the frequency of steps that match the tutoring system's expectation of expert performance. The frequency of errors gradually declines in all categories of error significance. Student performance frequently differs from the tutor-defined optimal path. However, as students continue to be tutored, they approach the optimal solution path. Performance in both subdomains was similar for both errors and goal differences. However, the rate at which students progress toward the optimal solution path differs between the two domains. Tutoring in superficial perivascular dermatitis, the larger and more complex domain, was associated with a slower rate of approximation towards the optimal solution path. Students benefit from a limited-enforcement tutoring system that leverages diagnostic algorithms but does not prevent alternative strategies. Even with limited enforcement, students converge toward the optimal solution path.
Optimality versus stability in water resource allocation.
Read, Laura; Madani, Kaveh; Inanloo, Bahareh
2014-01-15
Water allocation is a growing concern in a developing world where limited resources like fresh water are in greater demand by more parties. Negotiations over allocations often involve multiple groups with disparate social, economic, and political status and needs, who are seeking a management solution for a wide range of demands. Optimization techniques for identifying the Pareto-optimal (social planner) solution to multi-criteria, multi-participant problems are commonly implemented, although reaching agreement on this solution is often difficult. In negotiations with multiple decision makers, parties who base decisions on individual rationality may find the social planner solution to be unfair, thus creating a need to evaluate the willingness to cooperate and the practicality of a cooperative allocation solution, i.e., the solution's stability. This paper suggests seeking solutions for multi-participant resource allocation problems through an economics-based power index allocation method. This method can inform on allocation schemes that quantify a party's willingness to participate in a negotiation rather than opt for no agreement. Through comparison of the suggested method with a range of distance-based multi-criteria decision-making rules, namely least squares, MAXIMIN, MINIMAX, and compromise programming, this paper shows that optimality and stability can produce different allocation solutions. The mismatch between the socially optimal alternative and the most stable alternative can potentially result in parties leaving the negotiation as they may be too dissatisfied with their resource share. This finding has important policy implications, as it justifies why stakeholders may not accept the socially optimal solution in practice, and underscores the necessity of considering stability, where it may be more appropriate to give up an unstable Pareto-optimal solution for an inferior stable one.
The authors suggest assessing the stability of an allocation solution as an additional component of an analysis that seeks to distribute water in a negotiated process. Copyright © 2013 Elsevier Ltd. All rights reserved.
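The distance-based rules compared above all reduce to measuring how far each alternative falls from an ideal point. A hedged sketch with an invented three-party satisfaction matrix (not the paper's data):

```python
import numpy as np

# Rows of `sat` are alternatives, columns are each party's normalized,
# higher-is-better satisfaction with that alternative (values invented).
sat = np.array([[0.9, 0.2, 0.6],
                [0.6, 0.6, 0.6],
                [0.3, 0.9, 0.5]])
ideal = sat.max(axis=0)          # best achievable score per party

def compromise(sat, ideal, p):
    # Compromise programming: Lp distance to the ideal point.
    # p = 2 recovers the least-squares rule; p -> inf recovers MINIMAX.
    if np.isinf(p):
        return np.abs(ideal - sat).max(axis=1)
    return (np.abs(ideal - sat) ** p).sum(axis=1) ** (1.0 / p)

least_squares = int(compromise(sat, ideal, 2).argmin())
minimax = int(compromise(sat, ideal, np.inf).argmin())
maximin = int(sat.min(axis=1).argmax())   # maximize the worst-off party
print(least_squares, minimax, maximin)    # all pick the middle compromise here
```

With this particular matrix all three rules agree on alternative 1; the paper's point is precisely that on real negotiation data they, and the stability-based ranking, need not.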
Research of G3-PLC net self-organization processes in the NS-3 modeling framework
NASA Astrophysics Data System (ADS)
Pospelova, Irina; Chebotayev, Pavel; Klimenko, Aleksey; Myakochin, Yuri; Polyakov, Igor; Shelupanov, Alexander; Zykov, Dmitriy
2017-11-01
When modern infocommunication networks are designed, the combination of several data transfer channels is widely used to improve the quality and robustness of communication. Communication systems based on more than one data transfer channel are called heterogeneous communication systems. For the design of a heterogeneous network, mesh technology is often the best solution, since it ensures message delivery to the destination under unpredictable interference conditions in each of the two channels. A high-priority problem in designing mesh networks is therefore the choice of a routing protocol. An important design stage for any computer network is modeling, which allows several design variants to be evaluated and the necessary functional specifications to be computed for each of them, reducing the cost of the physical realization of a network. In this article, research on dynamic routing in the NS-3 simulation modeling framework is presented. The article contains an evaluation of the applicability of simulation modeling to the design of heterogeneous networks. The results of modeling may afterwards be used for the physical realization of such networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stroemqvist, Martin H., E-mail: stromqv@kth.se
We study the problem of optimally controlling the solution of the obstacle problem in a domain perforated by small periodically distributed holes. The solution is controlled by the choice of a perforated obstacle, which is to be chosen in such a fashion that the solution is close to a given profile and the obstacle is not too irregular. We prove existence, uniqueness and stability of an optimal obstacle and derive necessary and sufficient conditions for optimality. When the number of holes increases indefinitely we determine the limit of the sequence of optimal obstacles and solutions. This limit depends strongly on the rate at which the size of the holes shrinks.
A solution quality assessment method for swarm intelligence optimization algorithms.
Zhang, Zhaojun; Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua
2014-01-01
Nowadays, swarm intelligence optimization has become an important optimization tool, widely used in many fields of application. In contrast to its many successful applications, the theoretical foundation is rather weak, and many problems remain to be solved. One such problem is how to quantify the performance of an algorithm in finite time, that is, how to evaluate the solution quality obtained by an algorithm on practical problems; this gap greatly limits application in practice. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on analysis of the search space and the characteristics of the algorithm itself. Instead of "value performance," "ordinal performance" is used as the evaluation criterion in this method. The feasible solutions are clustered according to distance to divide the solution samples into several parts. Then, the solution space and the "good enough" set can be decomposed based on the clustering results. Finally, using relevant results from statistics, the evaluation result can be obtained. To validate the proposed method, intelligent algorithms such as ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial fish swarm algorithm (AFS) were applied to the traveling salesman problem. Computational results indicate the feasibility of the proposed method.
NASA Technical Reports Server (NTRS)
Chen, Guanrong
1991-01-01
An optimal trajectory planning problem for a single-link, flexible joint manipulator is studied. A global feedback-linearization is first applied to formulate the nonlinear inequality-constrained optimization problem in a suitable way. Then, an exact and explicit structural formula for the optimal solution of the problem is derived and the solution is shown to be unique. It turns out that the optimal trajectory planning and control can be done off-line, so that the proposed method is applicable to both theoretical analysis and real time tele-robotics control engineering.
Mixed Integer Programming and Heuristic Scheduling for Space Communication Networks
NASA Technical Reports Server (NTRS)
Lee, Charles H.; Cheung, Kar-Ming
2012-01-01
In this paper, we propose to solve the constrained optimization problem in two phases. The first phase uses heuristic methods such as the ant colony method, particle swarming optimization, and genetic algorithm to seek a near optimal solution among a list of feasible initial populations. The final optimal solution can be found by using the solution of the first phase as the initial condition to the SQP algorithm. We demonstrate the above problem formulation and optimization schemes with a large-scale network that includes the DSN ground stations and a number of spacecraft of deep space missions.
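The two-phase scheme can be sketched with off-the-shelf SciPy pieces: a population-based heuristic supplies a near-optimal starting point, which SQP then refines. Differential evolution stands in here for the ant colony / PSO / GA phase, and the toy constrained problem below is illustrative, not the DSN scheduling model:

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

# Toy problem: minimize a quadratic subject to a linear inequality.
obj = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2
cons = [{"type": "ineq", "fun": lambda x: x[0] + x[1] - 4.0}]  # x0 + x1 >= 4
bounds = [(-5.0, 5.0), (-5.0, 5.0)]

# Phase 1: global heuristic search (run unconstrained here for simplicity)
# yields a near-optimal initial population member.
phase1 = differential_evolution(obj, bounds, seed=0, tol=1e-3)

# Phase 2: SQP (SLSQP) polishes the phase-1 point against the constraint.
phase2 = minimize(obj, phase1.x, method="SLSQP", bounds=bounds, constraints=cons)
print(phase2.x)  # approximately [1.25, 2.75], the constrained optimum
```

Phase 1 lands near the unconstrained minimum (1, 2.5); SLSQP then moves it onto the active constraint, which a purely local start far from the basin might miss in a multimodal problem.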
A Cloud Architecture for Teleradiology-as-a-Service.
Melício Monteiro, Eriksson J; Costa, Carlos; Oliveira, José L
2016-05-17
Telemedicine has been promoted by healthcare professionals as an efficient way to obtain remote assistance from specialised centres, to get a second opinion about a complex diagnosis or even to share knowledge among practitioners. The current economic restrictions in many countries are increasing the demand for these solutions even more, in order to optimize processes and reduce costs. However, despite some technological solutions already in place, their adoption has been hindered by a lack of usability, especially in the set-up process. In this article we propose a telemedicine platform that relies on a cloud computing infrastructure and social media principles to simplify the creation of dynamic user-based groups, opening up opportunities for the establishment of teleradiology trust domains. The collaborative platform is provided as a Software-as-a-Service solution, supporting real-time and asynchronous collaboration between users. To evaluate the solution, we have deployed the platform in a private cloud infrastructure. The system is made up of three main components, the collaborative framework, the Medical Management Information System (MMIS) and the HTML5 (HyperText Markup Language) Web client application, connected by a message-oriented middleware. The solution allows physicians to easily create dynamic network groups for synchronous or asynchronous cooperation. The network created improves dataflow between colleagues as well as knowledge sharing and cooperation through social media tools. The platform has been implemented and has already been used in two distinct scenarios: teaching of radiology and tele-reporting. Collaborative systems can simplify the establishment of telemedicine expert groups with tools that enable physicians to improve their clinical practice.
Streamlining the usage of this kind of systems through the adoption of Web technologies that are common in social media will increase the quality of current solutions, facilitating the sharing of clinical information, medical imaging studies and patient diagnostics among collaborators.
Challenges and opportunities in the manufacture and expansion of cells for therapy.
Maartens, Joachim H; De-Juan-Pardo, Elena; Wunner, Felix M; Simula, Antonio; Voelcker, Nicolas H; Barry, Simon C; Hutmacher, Dietmar W
2017-10-01
Laboratory-based ex vivo cell culture methods are largely manual in their manufacturing processes. This makes it extremely difficult to meet regulatory requirements for process validation, quality control and reproducibility. Cell culture concepts with a translational focus need to embrace a more automated approach in which cell yields meet the quantitative production demands, the correct cell lineage and phenotype is readily confirmed and reagent usage has been optimized. Areas covered: This article discusses the obstacles inherent in classical laboratory-based methods, their concomitant impact on cost of goods, and the technology step change required to facilitate translation from bench to bedside. Expert opinion: While traditional bioreactors have demonstrated limited success where adherent cells are used in combination with microcarriers, further process optimization will be required to find solutions for commercial-scale therapies. New cell culture technologies based on 3D-printed cell culture lattices with favourable surface-to-volume ratios have the potential to change the paradigm in the industry. An integrated Quality-by-Design/systems-engineering approach will be essential to facilitate the scaled-up translation from proof-of-principle to clinical validation.
Performance characterization of structured light-based fingerprint scanner
NASA Astrophysics Data System (ADS)
Hassebrook, Laurence G.; Wang, Minghao; Daley, Raymond C.
2013-05-01
Our group believes that the evolution of fingerprint capture technology is in transition to include 3-D non-contact fingerprint capture. More specifically we believe that systems based on structured light illumination provide the highest level of depth measurement accuracy. However, for these new technologies to be fully accepted by the biometric community, they must be compliant with federal standards of performance. At present these standards do not exist for this new biometric technology. We propose and define a set of test procedures to be used to verify compliance with the Federal Bureau of Investigation's image quality specification for Personal Identity Verification single fingerprint capture devices. The proposed test procedures include: geometric accuracy, lateral resolution based on intensity or depth, gray level uniformity and flattened fingerprint image quality. Several 2-D contact analogies, performance tradeoffs and optimization dilemmas are evaluated and proposed solutions are presented.
Resource Allocation Algorithms for the Next Generation Cellular Networks
NASA Astrophysics Data System (ADS)
Amzallag, David; Raz, Danny
This chapter describes recent results addressing resource allocation problems in the context of current and future cellular technologies. We present models that capture several fundamental aspects of planning and operating these networks, and develop new approximation algorithms providing provable good solutions for the corresponding optimization problems. We mainly focus on two families of problems: cell planning and cell selection. Cell planning deals with choosing a network of base stations that can provide the required coverage of the service area with respect to the traffic requirements, available capacities, interference, and the desired QoS. Cell selection is the process of determining the cell(s) that provide service to each mobile station. Optimizing these processes is an important step towards maximizing the utilization of current and future cellular networks.
A new approach to impulsive rendezvous near circular orbit
NASA Astrophysics Data System (ADS)
Carter, Thomas; Humi, Mayer
2012-04-01
A new approach is presented for the problem of planar optimal impulsive rendezvous of a spacecraft in an inertial frame near a circular orbit in a Newtonian gravitational field. The total characteristic velocity to be minimized is replaced by a related characteristic-value function and this related optimization problem can be solved in closed form. The solution of this problem is shown to approach the solution of the original problem in the limit as the boundary conditions approach those of a circular orbit. Using a form of primer-vector theory the problem is formulated in a way that leads to relatively easy calculation of the optimal velocity increments. A certain vector that can easily be calculated from the boundary conditions determines the number of impulses required for solution of the optimization problem and also is useful in the computation of these velocity increments. Necessary and sufficient conditions for boundary conditions to require exactly three nonsingular non-degenerate impulses for solution of the related optimal rendezvous problem, and a means of calculating these velocity increments are presented. A simple example of a three-impulse rendezvous problem is solved and the resulting trajectory is depicted. Optimal non-degenerate nonsingular two-impulse rendezvous for the related problem is found to consist of four categories of solutions depending on the four ways the primer vector locus intersects the unit circle. Necessary and sufficient conditions for each category of solutions are presented. The region of the boundary values that admit each category of solutions of the related problem are found, and in each case a closed-form solution of the optimal velocity increments is presented. Similar results are presented for the simpler optimal rendezvous that require only one-impulse. For brevity degenerate and singular solutions are not discussed in detail, but should be presented in a following study. 
Although this approach is thought to provide simpler computations than existing methods, its main contribution may be in establishing a new approach to the more general problem.
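The closed-form machinery above rests on linearized relative motion near a circular orbit. As an illustrative sketch (not the paper's formulation), the classical Clohessy-Wiltshire equations give a two-impulse rendezvous in a few lines; the mean motion, boundary state, and transfer time below are hypothetical values chosen for the example.

```python
import numpy as np

def cw_stm(n, t):
    """In-plane Clohessy-Wiltshire state transition matrix for relative
    motion near a circular orbit; state = [x, y, vx, vy], x radial,
    y along-track, n = mean motion of the reference orbit."""
    c, s = np.cos(n * t), np.sin(n * t)
    return np.array([
        [4 - 3*c,     0.0, s/n,         2*(1 - c)/n],
        [6*(s - n*t), 1.0, 2*(c - 1)/n, (4*s - 3*n*t)/n],
        [3*n*s,       0.0, c,           2*s],
        [6*n*(c - 1), 0.0, -2*s,        4*c - 3],
    ])

def two_impulse_rendezvous(r0, v0, n, t):
    """Velocity increments that drive relative state (r0, v0) to the
    origin (rendezvous) in transfer time t, away from STM singularities."""
    Phi = cw_stm(n, t)
    Prr, Prv = Phi[:2, :2], Phi[:2, 2:]
    v_dep = np.linalg.solve(Prv, -Prr @ r0)  # velocity required after impulse 1
    dv1 = v_dep - v0
    v_arr = Phi[2:, :2] @ r0 + Phi[2:, 2:] @ v_dep
    dv2 = -v_arr                              # second impulse nulls arrival velocity
    return dv1, dv2

n = 0.0011                          # rad/s, roughly a low-Earth-orbit mean motion
r0 = np.array([1000.0, -2000.0])    # m, hypothetical relative position
v0 = np.array([0.5, 1.0])           # m/s, hypothetical relative velocity
dv1, dv2 = two_impulse_rendezvous(r0, v0, n, t=1800.0)
```

Propagating the post-impulse state through the same transition matrix confirms the chaser arrives at the origin with zero residual velocity after the second impulse.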
NASA Astrophysics Data System (ADS)
Kumar, Prasoon; Gandhi, Prasanna S.; Majumder, Mainak
2016-04-01
Gills are among the most primitive gas and solute exchange organs in fishes. They facilitate the exchange of gases, solutes and ions with the surrounding water through their functional unit, the secondary lamella. Through their extraordinary morphometric features and peculiar arrangement in the gills, these lamellae achieve remarkable mass transport properties. Therefore, in the current study, modeling and simulation of convection-diffusion transport through a two-dimensional model of the secondary lamella and a theoretical analysis of the morphometric features of fish gills were carried out. This analysis suggested an evolutionary conservation of parametric ratios across fishes of different weights. Further, we fabricated thin microvascularised PDMS matrices mimicking the secondary lamella using micro-technologies such as electrospinning, demonstrated fluid flow through these matrices by capillary action, and illustrated their application in solute exchange under capillary flow conditions. Our study thus suggests that fish gills have optimized parametric ratios, at multiple length scales, through evolution, yielding an organ with enhanced mass transport capabilities; these parametric ratios could be exploited to design and develop efficient, scaled-up gas/solute exchange microdevices. We also propose an inexpensive and scalable method for fabricating thin microvascularised polymer matrices and demonstrate their solute exchange capabilities under capillary flow conditions. Mimicking the microstructures of the secondary lamella will enable fabrication of microvascularised thin polymer systems through micro-manufacturing technologies, with potential applications in filtration, self-healing/cooling materials and bioengineering.
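The convection-diffusion transport modeled in such studies can be illustrated, in a reduced one-dimensional nondimensional form, with a short explicit finite-difference sketch. The Péclet number, grid, and fixed-concentration boundary conditions here are illustrative assumptions, not the paper's actual lamella model.

```python
import numpy as np

def lamella_transport(pe=1.0, nx=101, nt=20000):
    """Explicit finite-difference solve of dC/dt = d2C/dx2 - Pe * dC/dx
    on x in [0, 1]: C = 1 at the blood-side face, C = 0 at the water-side
    face, zero initial concentration inside the lamella."""
    dx = 1.0 / (nx - 1)
    dt = 0.25 * dx * dx            # satisfies the explicit diffusion stability limit
    C = np.zeros(nx)
    C[0] = 1.0
    for _ in range(nt):
        lap = (C[2:] - 2 * C[1:-1] + C[:-2]) / dx**2    # diffusion term
        adv = (C[2:] - C[:-2]) / (2 * dx)               # central-difference advection
        C[1:-1] += dt * (lap - pe * adv)
        C[0], C[-1] = 1.0, 0.0                          # re-impose boundaries
    return C
```

After enough steps the profile approaches the steady exchange gradient across the lamella; raising `pe` skews the profile toward the downstream face.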
Technology Candidates for Air-to-Air and Air-to-Ground Data Exchange
NASA Technical Reports Server (NTRS)
Haynes, Brian D.
2015-01-01
Technology Candidates for Air-to-Air and Air-to-Ground Data Exchange is a two-year research effort to visualize the U.S. aviation industry at a point 50 years in the future, and to define potential communication solutions to meet those future data exchange needs. The research team, led by XCELAR, was tasked with identifying future National Airspace System (NAS) scenarios, determining requirements and functions (including gaps), investigating technical and business issues for air, ground, and air-to-ground interactions, and reporting on the results. The project was conducted under technical direction from NASA and in collaboration with XCELAR's partner, the National Institute of Aerospace, and NASA technical representatives. Parallel efforts were initiated to define the information exchange functional needs of the future NAS, and specific communication link technologies to potentially serve those needs. Those efforts converged with the mapping of each identified future NAS function to potential enabling communication solutions; those solutions were then compared with, and ranked relative to, each other on a technical basis in a structured analysis process. The technical solutions emerging from that process were then assessed from a business case perspective to determine their viability from a real-world adoption and deployment standpoint. The results of that analysis produced a proposed set of future solutions and most promising candidate technologies. Gap analyses were conducted at two points in the process, the first examining technical factors, and the second as part of the business case analysis. In each case, no gaps or unmet needs were identified in applying the evaluated solutions to the identified requirements. The future communication solutions identified in the research comprise both specific link technologies and two enabling technologies that apply to most or all specific links.
The research also produced a new analysis approach, viewing the underlying architecture of ground-air and air-air communications as a whole, rather than as simple "link to function" paired solutions. For the business case analysis, a number of "reference architectures" were developed for both the future technologies and the current systems, based on three typical configurations of current aircraft. Current and future costs were assigned, and various comparisons were made between the current and future architectures. In general, it was assumed that if a future architecture offers lower cost than the current typical architecture, while delivering equivalent or better performance, it is likely that the future solution will gain industry acceptance. Conversely, future architectures presenting higher costs than their current counterparts must present a compelling benefit case in other areas or risk a lack of industry acceptance. The business case analysis consistently indicated lower costs for the proposed future architectures, and in most cases, significantly so. The proposed future solutions were found to offer significantly greater functionality, flexibility, and growth potential over time, at lower cost, than current systems. This was true for overall, fleet-wide equipage for domestic and oceanic air carriers, as well as for single General Aviation (GA) aircraft. The overall research results indicate that all identified requirements can be met by the proposed solutions with significant capacity for future growth. Results also illustrate that the majority of the future communication needs can be met using currently allocated aviation RF spectrum, if used in more effective ways than it is today.
A combination of such optimized aviation-specific links and commercial communication systems meets all identified needs for the 50-year future and beyond, with the caveat that a new, overall function will be needed to manage all information exchange, individual links, security, cost, and other factors. This function was labeled "Delivery Manager" (DM) within this research. DM employs a distributed client/server architecture, for both airborne and ground communications architectures. Final research results included identifying the most promising candidate technologies for the future system, conclusions and recommendations, and identifying areas where further research should be considered.
Solving bi-level optimization problems in engineering design using kriging models
NASA Astrophysics Data System (ADS)
Xia, Yi; Liu, Xiaojie; Du, Gang
2018-05-01
Stackelberg game-theoretic approaches are applied extensively in engineering design to handle distributed collaboration decisions. Bi-level genetic algorithms (BLGAs) and response surfaces have been used to solve the corresponding bi-level programming models. However, the computational costs for BLGAs often increase rapidly with the complexity of lower-level programs, and optimal solution functions sometimes cannot be approximated by response surfaces. This article proposes a new method, namely optimal solution function approximation by kriging model (OSFAKM), in which kriging models are used to approximate the optimal solution functions. A detailed example demonstrates that OSFAKM can obtain better solutions than BLGAs and response surface-based methods, while markedly reducing the computational workload. Five benchmark problems and a case study of the optimal design of a thin-walled pressure vessel are also presented to illustrate the feasibility and potential of the proposed method for bi-level optimization in engineering design.
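The core idea of approximating the lower-level optimal-solution function with a kriging model can be sketched on a toy bi-level problem. The lower-level program min_y (y - x^2)^2, its closed-form best response y*(x) = x^2, and the kernel length scale below are hypothetical choices for illustration, not the article's test problems.

```python
import numpy as np

def lower_level_opt(x):
    """Follower's best response for the toy problem min_y (y - x**2)**2,
    which is y*(x) = x**2 in closed form."""
    return x**2

# sample the optimal-solution function at a few leader decisions
X = np.linspace(-2.0, 2.0, 9)
Y = lower_level_opt(X)

def kriging_fit(X, Y, length=0.8, nugget=1e-8):
    """Fit a simple kriging (Gaussian-kernel) interpolator to the samples."""
    K = np.exp(-0.5 * ((X[:, None] - X[None, :]) / length) ** 2)
    return np.linalg.solve(K + nugget * np.eye(len(X)), Y)

def kriging_predict(x, X, alpha, length=0.8):
    """Predict the lower-level optimum at a new leader decision x."""
    k = np.exp(-0.5 * ((x - X) / length) ** 2)
    return k @ alpha

alpha = kriging_fit(X, Y)
y_hat = kriging_predict(0.25, X, alpha)  # approximates x**2 = 0.0625
```

The upper-level search can then query `kriging_predict` instead of re-solving the lower-level program at every candidate, which is the source of the computational savings the article reports.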
NASA Astrophysics Data System (ADS)
Kazemzadeh Azad, Saeid
2018-01-01
In spite of considerable research work on the development of efficient algorithms for discrete sizing optimization of steel truss structures, only a few studies have addressed non-algorithmic issues affecting the general performance of algorithms. For instance, an important question is whether starting the design optimization from a feasible solution is fruitful or not. This study is an attempt to investigate the effect of seeding the initial population with feasible solutions on the general performance of metaheuristic techniques. To this end, the sensitivity of recently proposed metaheuristic algorithms to the feasibility of initial candidate designs is evaluated through practical discrete sizing of real-size steel truss structures. The numerical experiments indicate that seeding the initial population with feasible solutions can improve the computational efficiency of metaheuristic structural optimization algorithms, especially in the early stages of the optimization. This paves the way for efficient metaheuristic optimization of large-scale structural systems.
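A minimal sketch of feasible seeding, assuming a toy three-member discrete sizing problem with a hypothetical section catalog, penalty-based scoring, and a crude steady-state evolutionary loop (not any specific algorithm from the study):

```python
import random

# hypothetical 3-member truss: pick each section area from a discrete catalog
AREAS = [1.0, 2.0, 3.5, 5.0, 7.5, 10.0]   # available sections (cm^2), invented
FORCES = [40.0, 25.0, 60.0]               # member forces (kN), held fixed here
S_ALLOW = 10.0                            # allowable stress (kN/cm^2)

def feasible(design):
    return all(f / a <= S_ALLOW for f, a in zip(FORCES, design))

def weight(design):                        # proxy objective: total area, unit lengths
    return sum(design)

def score(design):                         # penalty-based constraint handling
    return weight(design) + (0.0 if feasible(design) else 1e3)

def evolve(pop, gens=300, seed=0):
    rng = random.Random(seed)
    best = min(pop, key=score)
    for _ in range(gens):
        a, b = rng.sample(pop, 2)                 # binary tournament selection
        parent = min(a, b, key=score)
        child = [g if rng.random() < 0.8 else rng.choice(AREAS) for g in parent]
        pop[rng.randrange(len(pop))] = child      # steady-state replacement
        best = min(best, child, key=score)
    return best

# seeding: start every individual from a known-feasible (heaviest) design
seeded = [[max(AREAS)] * len(FORCES) for _ in range(20)]
best = evolve(seeded)
```

Because the seed population is feasible, the elitist best-so-far design stays feasible throughout and the search only has to reduce weight, which mirrors the early-stage advantage the study reports.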
Order management empowering entrepreneurial partnerships in the context of new technologies
NASA Astrophysics Data System (ADS)
Tămăşilă, M.; Proştean, G.; Diaconescu, A.
2018-01-01
The spread of latest-generation technologies confronts manufacturers across industry sectors with more complex order management situations involving both loyal and occasional customers. More specifically, order variations in the logistics chain make it difficult to form entrepreneurial partnerships around new technologies integrated into automotive and wind industry processes, which hinders securing major investments. Within this framework, the research team investigates bottlenecks in the supply chain and proposes rules and methods to resolve the desynchronizations and fluctuations caused by the constraints of cutting-edge technologies. The paper aims to solve order management problems using both an algorithm and an implementation in SAP. The paper also presents a conceptual model for the user whose basic task is managing entrepreneurial orders. The solutions identified by the algorithm offer an order management plan that optimally adjusts inventories to deal with any kind of order, achieving a profitable entrepreneurial arrangement between the two partners.
NASA Astrophysics Data System (ADS)
Lebedev, V. A.; Serga, G. V.; Khandozhko, A. V.
2018-03-01
The article proposes technical solutions for increasing the efficiency of finishing-cleaning and hardening processing of parts on the basis of rotor-screw technological systems. The essence, design features and technological capabilities of the rotor-screw technological system with a rotating container are disclosed; this system expands the range of the resulting displacement vectors of the abrasive-medium granules and processed parts. Ways of intensifying the processing through vibration activation, which subjects the loaded mass to a combined effect of large- and small-amplitude low-frequency oscillations, are proposed. The results of experimental studies of the movement of bulk materials in a screw container are presented, which showed that Kv = 0.5-0.6 can be considered the optimal value of the container filling factor. An estimate of the application efficiency of screw containers, based on their design features, is given.
Implications of intelligent, integrated microsystems for product design and development
DOE Office of Scientific and Technical Information (OSTI.GOV)
MYERS,DAVID R.; MCWHORTER,PAUL J.
2000-04-19
Intelligent, integrated microsystems combine some or all of the functions of sensing, processing information, actuation, and communication within a single integrated package, and preferably upon a single silicon chip. As the elements of these highly integrated solutions interact strongly with each other, the microsystem can be neither designed nor fabricated piecemeal, in contrast to more familiar assembled products. Driven by technological imperatives, microsystems will best be developed by multi-disciplinary teams, most likely within flatter, less hierarchical organizations. Standardization of design and process tools around a single, dominant technology will expedite economically viable operation under a common production infrastructure. The production base for intelligent, integrated microsystems has elements in common with the mathematical theory of chaos. As in chaos theory, the development of microsystems technology will be strongly dependent on, and optimized to, the initial product requirements that drive standardization, thereby further rewarding early entrants to integrated microsystem technology.
Improved Low-Temperature Performance of Li-Ion Cells Using New Electrolytes
NASA Technical Reports Server (NTRS)
Smart, Marshall C.; Buga, Ratnakumar V.; Gozdz, Antoni S.; Mani, Suresh
2010-01-01
As part of continuing efforts to develop advanced electrolytes to improve the performance of lithium-ion cells, especially at low temperatures, a number of electrolyte formulations have been developed that result in improved low-temperature performance (down to -60 °C) of 26650 A123Systems commercial lithium-ion cells. The cell type/design in which the new technology has been demonstrated has found wide application in the commercial sector (i.e., these cells are currently being used in commercial portable power tools). In addition, the technology is actively being considered for hybrid electric vehicle (HEV) and electric vehicle (EV) applications. In the current work, a number of low-temperature electrolytes have been developed based on advances involving lithium hexafluorophosphate-based solutions in carbonate and carbonate + ester solvent blends, which have been further optimized in the context of the technology and targeted applications. The approaches employed, which include the use of ternary mixtures of carbonates, the use of ester co-solvents [e.g., methyl butyrate (MB)], and optimized lithium salt concentrations (e.g., LiPF6), were compared with the commercial baseline electrolyte, as well as an electrolyte being actively considered for DoE HEV applications and previously developed by a commercial enterprise, namely LiPF6 in ethylene carbonate (EC) + ethyl methyl carbonate (EMC) (30:70%).
Three-dimensional bioprinting in tissue engineering and regenerative medicine.
Gao, Guifang; Cui, Xiaofeng
2016-02-01
With advances in stem cell research and the development of intelligent biomaterials and three-dimensional biofabrication strategies, highly biomimetic tissues or organs can be engineered. Among biofabrication approaches, bioprinting based on inkjet printing technology holds promise for delivering and creating biomimetic tissue with high throughput, digital control, and the capacity for single-cell manipulation. This enabling technology therefore has great potential in regenerative medicine and translational applications. The most recent advances in organ and tissue bioprinting based on thermal inkjet printing technology are described in this review, including vasculature, muscle, cartilage, and bone. In addition, the benign side effects of bioprinting on the printed mammalian cells can be utilized for gene or drug delivery, which can be achieved conveniently during precise cell placement for tissue construction. With layer-by-layer assembly, three-dimensional tissues with complex structures can be printed from converted medical images. Bioprinting based on thermal inkjet is thus currently the most promising approach for engineering vascular systems within thick and complex tissues. Collectively, bioprinting has great potential and broad applications in tissue engineering and regenerative medicine. Future advances in bioprinting include the integration of different printing mechanisms to engineer biphasic or triphasic tissues with optimized scaffolds, and further understanding of stem cell biology.
Jiang, Yaxin; Liang, Jiaming; Liu, Yang
2016-01-01
The extraction process used to obtain bitumen from the oil sands produces large volumes of oil sands process-affected water (OSPW). As a newly emerging desalination technology, forward osmosis (FO) has shown great promise in saving electrical power requirements, increasing water recovery, and minimizing brine discharge. With the support of this funding, an FO system was constructed using a cellulose triacetate FO membrane to test the feasibility of OSPW desalination and contaminant removal. The FO systems were optimized using different types and concentrations of draw solution. The FO system using 4 M NH4HCO3 as a draw solution achieved 85% water recovery from OSPW, and 80 to 100% contaminant rejection for most metals and ions. A water backwash cleaning method was applied to clean the fouled membrane, and the cleaned membrane achieved 77% water recovery, a performance comparable to that of new FO membranes. This suggests that the membrane fouling was reversible. The FO system developed in this project provides a novel and energy-efficient strategy to remediate the tailings waters generated by oil sands bitumen extraction and processing.
Mokeddem, Diab; Khellaf, Abdelhafid
2009-01-01
Optimal design problems are widely known for their multiple performance measures that often compete with each other. In this paper, an optimal multiproduct batch chemical plant design is presented. The design is first formulated as a multiobjective optimization problem, to be solved using the well-suited non-dominated sorting genetic algorithm (NSGA-II). NSGA-II can achieve fine tuning of variables in determining a set of non-dominated solutions distributed along the Pareto front in a single run of the algorithm. Its ability to identify a set of optimal solutions provides the decision-maker (DM) with a complete picture of the optimal solution space from which to make better and more appropriate choices. An outranking with PROMETHEE II then helps the decision-maker finalize the selection of the best compromise. The effectiveness of the NSGA-II method on the multiobjective optimization problem is illustrated through two carefully referenced examples.
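The first phase of NSGA-II, ranking candidates into non-dominated fronts, can be sketched directly; this is a naive version for illustration, not the paper's implementation, and the sample points are invented.

```python
def dominates(p, q):
    """p dominates q (minimization) if p is no worse in every objective
    and strictly better in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def non_dominated_sort(points):
    """Rank points into successive non-dominated fronts, as in NSGA-II's
    first phase: front 0 is the Pareto front, front 1 the next layer, etc."""
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# invented two-objective values for five candidate plant designs
pts = [(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)]
fronts = non_dominated_sort(pts)
```

Here the first three points are mutually non-dominated and form front 0, while (3, 3) is dominated by (2, 2) and (5, 5) by both, so they fall into later fronts.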
Malikopoulos, Andreas
2015-01-01
The increasing urgency to extract additional efficiency from hybrid propulsion systems has led to the development of advanced power management control algorithms. In this paper we address the problem of online optimization of the supervisory power management control in parallel hybrid electric vehicles (HEVs). We model HEV operation as a controlled Markov chain and we show that the control policy yielding the Pareto optimal solution minimizes online the long-run expected average cost per unit time criterion. The effectiveness of the proposed solution is validated through simulation and compared to the solution derived with dynamic programming using the average cost criterion. Both solutions achieved the same cumulative fuel consumption, demonstrating that the online Pareto control policy is an optimal control policy.
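The long-run average-cost criterion on a controlled Markov chain can be illustrated with relative value iteration on a hypothetical two-state battery model; the states, transition probabilities, and costs below are invented for the sketch and are not the paper's HEV model.

```python
import numpy as np

# hypothetical model: state = battery {0: low, 1: high};
# action 0 = engine (charges the battery, steady fuel cost),
# action 1 = electric motor (drains the battery, cheap when charged)
P = {0: np.array([[0.1, 0.9], [0.2, 0.8]]),   # P[a][s, s'] transition probabilities
     1: np.array([[0.9, 0.1], [0.6, 0.4]])}
C = {0: np.array([1.0, 1.0]),                 # engine: fuel cost in both states
     1: np.array([2.0, 0.2])}                 # motor: costly when battery is low

h = np.zeros(2)                               # relative value function
for _ in range(500):
    q = np.array([C[a] + P[a] @ h for a in (0, 1)])  # Q-values per action
    h_new = q.min(axis=0)
    g = h_new[0]                              # average cost (gain) at reference state
    h = h_new - g                             # subtracting the gain keeps h bounded
policy = q.argmin(axis=0)                     # minimizing action in each state
```

For these numbers the iteration converges to the policy "engine when low, motor when high" with a long-run average cost of 0.52 per step; a hand policy-evaluation of that policy gives the same gain, which is the fixed-point property the average-cost criterion relies on.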
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stepinski, Dominique C.; Abdul, Momen; Youker, Amanda J.
2016-06-01
Argonne National Laboratory has developed a Mo-recovery and -purification system for the SHINE Medical Technologies process, which uses a uranyl sulfate solution for the accelerator-driven production of Mo-99. The objective of this effort is to reduce the processing time for the acidification of the Mo-99 product prior to loading onto a concentration column and for the concentration of the Mo-99 product solution. Two methods were investigated: (1) replacement of the titania concentration column by an anion-exchange column to decrease processing time and increase the radioiodine-decontamination efficiency and (2) pretreatment of the titania sorbent to improve its effectiveness for the Mo-recovery and -concentration columns. Promising results are reported for both methods.
Facilitating eHealth for All Through Connecting Nurses and the Women Observatory for eHealth.
Thouvenot, Veronique Ines; Hardiker, Nicholas
2016-01-01
Nurses are at the forefront of health care delivery and are key to health improvements across populations worldwide. They play a vital role in the treatment of communicable diseases and in maintaining optimal quality of life for those living with long-term conditions. A number of factors such as an ageing population, a shrinking nursing workforce, and inequitable, variable access to health services naturally point towards technology-focused solutions. However, the uptake of eHealth tools and techniques by nurses and their integration with nursing practice remain patchy, not least because of nurses simply 'not knowing' that good solutions exist. The purpose of this panel is to describe initiatives that seek to identify and showcase good practice in the use of eHealth in nursing.
Liu, Lu; Masfary, Osama; Antonopoulos, Nick
2012-01-01
The increasing trend of electrical consumption within data centres is a growing concern for business owners, as it is quickly becoming a large fraction of the total cost of ownership. Ultra-small sensors could be deployed within a data centre to monitor environmental factors, lowering electrical costs and improving energy efficiency. Since servers and air conditioners represent the top users of electrical power in the data centre, this research sets out to explore methods from each subsystem of the data centre as part of an overall energy-efficient solution. In this paper, we investigate current trends in Green IT awareness and how the deployment of small environmental sensors and site infrastructure equipment optimization techniques can offer a solution to a global issue by reducing carbon emissions.
Three-Axis Time-Optimal Attitude Maneuvers of a Rigid-Body
NASA Astrophysics Data System (ADS)
Wang, Xijing; Li, Jisheng
With modern satellites trending towards both macro-scale and micro-scale designs, new demands are placed on attitude adjustment. Precise pointing control and rapid maneuvering capabilities have long been part of many space missions. As advances in computer technology continuously enable new optimal algorithms, a powerful tool for solving the problem is provided. Many papers about attitude adjustment have been published, in which the spacecraft is modeled as a rigid body with flexible parts or as a gyrostat-type system, and the objective function typically involves minimum time or minimum fuel. During earlier satellite missions, attitude acquisition was achieved using momentum exchange devices, performed by a sequential single-axis slewing strategy. Recently, the simultaneous three-axis minimum-time maneuver (reorientation) problem has been studied by many researchers. Researching the minimum-time maneuver of a rigid spacecraft within onboard power limits is important because of potential space applications, such as surveying multiple targets, as well as its academic value. The minimum-time maneuver of a rigid spacecraft is a basic problem because solutions for maneuvering flexible spacecraft are based on the solution to the rigid-body slew problem. A new method for the open-loop solution of a rigid spacecraft maneuver is presented. Neglecting all perturbation torques, the necessary conditions for transferring the spacecraft from one state to another can be determined. The single-axis and multi-axis cases differ: for a single axis, an analytical solution is possible and the parabolic switching curve passes through the state-space origin; for multiple axes, an analytical solution is impossible due to the dynamic coupling between the axes, and the problem must be solved numerically. Modern research has shown that Euler-axis rotations are, in general, only quasi-time-optimal.
On the basis of the minimum principle, the reorientation of an inertially symmetric spacecraft from an initial state of rest to a final state of rest under a time cost function is studied. The solution proceeds as follows. First, the necessary conditions for solving the problem are derived from the minimum principle. These conditions for optimality yield a two-point boundary-value problem (TPBVP) which, when solved, produces the control history that minimizes the time performance index. In the nonsingular case, the solution is a bang-bang maneuver whose control profile is characterized by saturated controls for the entire maneuver. Singular control may exist, but it is singular only in a mathematical sense; physically, the larger the magnitude of the control torque, the shorter the maneuver time, so saturated controls are used in the singular case as well. Second, since the controls are always at their maxima, the key problem is to determine the switch points; the original problem thus reduces to finding the switching times. By adjusting the switch on/off times, a genetic algorithm, a robust method, is used to determine the switching features without gyroscopic coupling, and this research improves upon the traditional GA. The homotopy method for solving the nonlinear algebraic equations is based on rigorous topological continuum theory; following the homotopy idea, relaxation parameters are introduced and the switch points are found with simulated annealing. Computer simulation results for a rigid body show that the new method is feasible and efficient. A practical method of computing approximate solutions for the time-optimal control switch times for rigid-body reorientation has been developed.
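For the single-axis rest-to-rest case, where the bang-bang solution is analytic and the switch occurs at mid-maneuver, a small simulation sketch can make the structure concrete; the inertia, torque limit, and slew angle below are hypothetical values.

```python
import numpy as np

def bang_bang_rest_to_rest(theta_f, inertia, u_max, n_steps=10000):
    """Time-optimal single-axis rest-to-rest slew of a rigid body:
    apply +u_max until the mid-maneuver switch, then -u_max to brake.
    The analytic minimum time is T = 2 * sqrt(theta_f * I / u_max)."""
    T = 2.0 * np.sqrt(theta_f * inertia / u_max)
    dt = T / n_steps
    theta, omega, t = 0.0, 0.0, 0.0
    for _ in range(n_steps):
        u = u_max if t < T / 2 else -u_max     # bang-bang switch at T/2
        omega += (u / inertia) * dt            # I * d(omega)/dt = u
        theta += omega * dt
        t += dt
    return T, theta, omega

# hypothetical numbers: 1 rad slew, I = 10 kg m^2, torque limit 0.5 N m
T, theta_end, omega_end = bang_bang_rest_to_rest(1.0, 10.0, 0.5)
```

The simulated trajectory ends at the commanded angle with essentially zero rate, confirming that the single switching time at T/2 is all the optimizer needs to find in this special case; the multi-axis problem replaces this one switch with a set of coupled switching times.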
An approach of traffic signal control based on NLRSQP algorithm
NASA Astrophysics Data System (ADS)
Zou, Yuan-Yang; Hu, Yu
2017-11-01
This paper presents a linear program model with linear complementarity constraints (LPLCC) to solve the traffic signal optimization problem. The objective of the model is to minimize the weighted total queue length at the end of each cycle. A combination algorithm based on nonlinear least-squares regression and sequential quadratic programming (NLRSQP) is then proposed, by which a local optimal solution can be obtained. Furthermore, four numerical experiments study how to set the initial solution of the algorithm so that a better local optimal solution is obtained more quickly. In particular, the numerical experiments show that the model is effective for different arrival rates and weight factors, and that the lower the initial solution, the better the local optimal solution that can be obtained.
Multi-objective Optimization of Pulsed Gas Metal Arc Welding Process Using Neuro NSGA-II
NASA Astrophysics Data System (ADS)
Pal, Kamal; Pal, Surjya K.
2018-05-01
Weld quality is a critical issue in fabrication industries where products are custom-designed. Multi-objective optimization yields a number of solutions on the Pareto-optimal front. Optimization methods based on mathematical regression models are often found to be inadequate for highly non-linear arc welding processes. Thus, various global evolutionary approaches such as artificial neural networks and genetic algorithms (GAs) have been developed. The present work applies the elitist non-dominated sorting GA (NSGA-II) to the optimization of the pulsed gas metal arc welding process, using back-propagation neural network (BPNN) based weld quality feature models. The primary objective, maintaining butt-joint weld quality, is the maximization of tensile strength with minimum plate distortion. The BPNN computes the fitness of each solution after adequate training, whereas the NSGA-II algorithm generates the optimum solutions for the two conflicting objectives. Welding experiments were conducted on low-carbon steel using response surface methodology. The Pareto-optimal front with three ranked solutions after 20 generations was considered the best, with no further improvement. Joint strength as well as transverse shrinkage was found to be drastically improved over the design-of-experiments results, as validated by the obtained Pareto-optimal solutions.
Lessons Learned During Solutions of Multidisciplinary Design Optimization Problems
NASA Technical Reports Server (NTRS)
Patnaik, Suna N.; Coroneos, Rula M.; Hopkins, Dale A.; Lavelle, Thomas M.
2000-01-01
Optimization research at NASA Glenn Research Center has addressed the design of structures, aircraft and airbreathing propulsion engines. During solution of the multidisciplinary problems, several issues were encountered. This paper lists four issues and discusses the strategies adopted for their resolution: (1) The optimization process can lead to an inefficient local solution. This deficiency was encountered during design of an engine component. The limitation was overcome through an augmentation of animation into optimization. (2) Optimum solutions obtained were infeasible for aircraft and air-breathing propulsion engine problems. Alleviation of this deficiency required a cascading of multiple algorithms. (3) Profile optimization of a beam produced an irregular shape. Engineering intuition restored the regular shape for the beam. (4) The solution obtained for a cylindrical shell by a subproblem strategy converged to a design that can be difficult to manufacture. Resolution of this issue remains a challenge. The issues and resolutions are illustrated through six problems: (1) design of an engine component, (2) synthesis of a subsonic aircraft, (3) operation optimization of a supersonic engine, (4) design of a wave-rotor-topping device, (5) profile optimization of a cantilever beam, and (6) design of a cylindrical shell. The combined effort of designers and researchers can bring the optimization method from academia to industry.
Deb, Kalyanmoy; Sinha, Ankur
2010-01-01
Bilevel optimization problems involve two optimization tasks (upper and lower level), in which every feasible upper level solution must correspond to an optimal solution to a lower level optimization problem. These problems commonly appear in many practical problem solving tasks including optimal control, process optimization, game-playing strategy developments, transportation problems, and others. However, they are commonly converted into a single level optimization problem by using an approximate solution procedure to replace the lower level optimization task. Although there exist a number of theoretical, numerical, and evolutionary optimization studies involving single-objective bilevel programming problems, not many studies look at the context of multiple conflicting objectives in each level of a bilevel programming problem. In this paper, we address certain intricate issues related to solving multi-objective bilevel programming problems, present challenging test problems, and propose a viable and hybrid evolutionary-cum-local-search based algorithm as a solution methodology. The hybrid approach performs better than a number of existing methodologies and scales well up to 40-variable difficult test problems used in this study. The population sizing and termination criteria are made self-adaptive, so that no additional parameters need to be supplied by the user. The study indicates a clear niche of evolutionary algorithms in solving such difficult problems of practical importance compared to their usual solution by a computationally expensive nested procedure. The study opens up many issues related to multi-objective bilevel programming and hopefully this study will motivate EMO and other researchers to pay more attention to this important and difficult problem solving activity.
MEMS packaging: state of the art and future trends
NASA Astrophysics Data System (ADS)
Bossche, Andre; Cotofana, Carmen V. B.; Mollinger, Jeff R.
1998-07-01
Now that the technology for integrated sensor and MEMS devices has become sufficiently mature to allow mass production, it is expected that the prices of bare chips will drop dramatically. This means that package prices will become a limiting factor in market penetration unless low-cost packaging solutions become available. This paper discusses developments in packaging technology; both single-chip and multi-chip packaging solutions are addressed. It starts with a discussion of the different requirements that have to be met, both from a device point of view (open access paths to the environment, vacuum cavities, etc.) and from an application point of view (e.g., environmental hostility). Subsequently, current technologies are judged on their applicability to MEMS and sensor packaging, and a forecast is given for future trends. It is expected that the large majority of sensing devices will be applied in relatively friendly environments for which plastic packages suffice. Therefore, in the short term an important role is foreseen for recently developed plastic packaging techniques such as precision molding and precision dispensing. Just as in standard electronic packaging, complete wafer-level packaging methods for sensing devices still have a long way to go before they can compete with the highly optimized and automated plastic packaging processes.
Crown, William; Buyukkaramikli, Nasuh; Thokala, Praveen; Morton, Alec; Sir, Mustafa Y; Marshall, Deborah A; Tosh, Jon; Padula, William V; Ijzerman, Maarten J; Wong, Peter K; Pasupathy, Kalyan S
2017-03-01
Providing health services with the greatest possible value to patients and society given the constraints imposed by patient characteristics, health care system characteristics, budgets, and so forth relies heavily on the design of structures and processes. Such problems are complex and require a rigorous and systematic approach to identify the best solution. Constrained optimization is a set of methods designed to identify efficiently and systematically the best solution (the optimal solution) to a problem characterized by a number of potential solutions in the presence of identified constraints. This report identifies 1) key concepts and the main steps in building an optimization model; 2) the types of problems for which optimal solutions can be determined in real-world health applications; and 3) the appropriate optimization methods for these problems. We first present a simple graphical model based on the treatment of "regular" and "severe" patients, which maximizes the overall health benefit subject to time and budget constraints. We then relate it back to how optimization is relevant in health services research for addressing present day challenges. We also explain how these mathematical optimization methods relate to simulation methods, to standard health economic analysis techniques, and to the emergent fields of analytics and machine learning. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
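The report's "regular" versus "severe" patient example can be made concrete with a tiny resource-allocation model: choose how many patients of each class to treat so that total health benefit is maximized under clinic-time and budget constraints. All coefficients below are hypothetical; a problem this small can be solved by direct enumeration, whereas real instances use LP/MIP solvers.

```python
# Constrained optimization sketch: maximize health benefit 2r + 3s subject to
# a time constraint (r + 2s <= 10 clinic hours) and a budget constraint
# (30r + 40s <= 180 dollars), with r, s non-negative integers.

def best_treatment_mix():
    best = (0, 0, 0)  # (benefit, regular, severe)
    for r in range(0, 11):
        for s in range(0, 6):
            time_ok = 1 * r + 2 * s <= 10       # clinic hours available
            budget_ok = 30 * r + 40 * s <= 180  # dollars available
            if time_ok and budget_ok:
                benefit = 2 * r + 3 * s         # benefit per patient class
                best = max(best, (benefit, r, s))
    return best

print(best_treatment_mix())  # mix of both classes beats either extreme
```

With these numbers the optimum treats two regular and three severe patients: neither treating only regular nor only severe patients exhausts both constraints as efficiently, which is the kind of non-obvious trade-off constrained optimization is designed to expose.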
NASA Astrophysics Data System (ADS)
Wei, Xiao-Ran; Zhang, Yu-He; Geng, Guo-Hua
2016-09-01
In this paper, we examined how to print hollow objects without infill via fused deposition modeling, one of the most widely used 3D-printing technologies, by partitioning the objects into shell parts. More specifically, we linked the partition to the exact cover problem. Given an input watertight mesh shape S, we developed region-growing schemes to derive a set of candidate parts whose inside surfaces are printable without support. We then employed Monte Carlo tree search over the candidate parts to obtain the optimal set cover. All possible candidate subsets of exact cover from the optimal set cover were then obtained, and a bounded tree search was used to find the optimal exact cover. We oriented each shell part to the optimal position to guarantee that the inside surface is printed without support while the outside surface is printed with minimum support. Our solution can be applied to a variety of models, closed-hollow or semi-closed, with or without holes, as evidenced by experiments and performance evaluation of our proposed algorithm.
Multi-Objective Optimization of a Turbofan for an Advanced, Single-Aisle Transport
NASA Technical Reports Server (NTRS)
Berton, Jeffrey J.; Guynn, Mark D.
2012-01-01
Considerable interest surrounds the design of the next generation of single-aisle commercial transports in the Boeing 737 and Airbus A320 class. Aircraft designers will depend on advanced, next-generation turbofan engines to power these airplanes. The focus of this study is to apply single- and multi-objective optimization algorithms to the conceptual design of ultrahigh bypass turbofan engines for this class of aircraft, using NASA's Subsonic Fixed Wing Project metrics as multidisciplinary objectives for optimization. The independent design variables investigated include three continuous variables: sea level static thrust, wing reference area, and aerodynamic design point fan pressure ratio, and four discrete variables: overall pressure ratio, fan drive system architecture (i.e., direct- or gear-driven), bypass nozzle architecture (i.e., fixed- or variable geometry), and the high- and low-pressure compressor work split. Ramp weight, fuel burn, noise, and emissions are the parameters treated as dependent objective functions. These optimized solutions provide insight into the ultrahigh bypass engine design process and provide information to NASA program management to help guide its technology development efforts.
Bounding the Spacecraft Atmosphere Design Space for Future Exploration Missions
NASA Technical Reports Server (NTRS)
Lange, Kevin E.; Perka, Alan T.; Duffield, Bruce E.; Jeng, Frank F.
2005-01-01
The selection of spacecraft and space suit atmospheres for future human space exploration missions will play an important, if not critical, role in the ultimate safety, productivity, and cost of such missions. Internal atmosphere pressure and composition (particularly oxygen concentration) influence many aspects of spacecraft and space suit design, operation, and technology development. Optimal atmosphere solutions must be determined by an iterative process involving research, design, development, testing, and systems analysis. A necessary first step in this process is the establishment of working bounds on the atmosphere design space.
Modelling and optimization of rotary parking system
NASA Astrophysics Data System (ADS)
Skrzyniowski, A.
2016-09-01
The increasing number of vehicles in cities causes traffic congestion, which interrupts the smooth flow of traffic. Established EU policy underlines the importance of restoring spaces for pedestrian traffic and public transport. The overall vehicle parking process in some parts of a city takes so much time that it has a negative impact on the environment. This article presents different kinds of solutions, with special focus on the rotary parking system (PO). It is based on a project realized at the Faculty of Mechanical Engineering of Cracow University of Technology.
Multiple estimation channel decoupling and optimization method based on inverse system
NASA Astrophysics Data System (ADS)
Wu, Peng; Mu, Rongjun; Zhang, Xin; Deng, Yanpeng
2018-03-01
This paper addresses the autonomous navigation requirements of an intelligent deformation missile. Building on dynamics and kinematics modeling of the missile, the navigation subsystem solution method, and error modeling, it focuses on the corresponding data fusion and decision fusion technology: the sensitive channels of the filter input are decoupled through an inverse system designed from the dynamics, reducing the influence of sudden changes in the measurement information on the filter input. A series of simulation experiments verified the feasibility and effectiveness of the inverse-system decoupling algorithm.
Integrated Artificial Intelligence Approaches for Disease Diagnostics.
Vashistha, Rajat; Chhabra, Deepak; Shukla, Pratyoosh
2018-06-01
Mechanocomputational techniques in conjunction with artificial intelligence (AI) are revolutionizing how crucial information is extracted from medical data and converted into optimized, organized information for diagnostics. This is made possible by advances in AI, computer-aided diagnostics, virtual assistants, robotic surgery, augmented reality, and AI-based genome-editing technologies. Such techniques serve as tools for diagnosing emerging microbial and non-microbial diseases. This article presents a combinatory approach that uses these techniques and provides therapeutic solutions towards utilizing them in disease diagnostics.
Solutions for medical databases optimal exploitation.
Branescu, I; Purcarea, V L; Dobrescu, R
2014-03-15
The paper discusses methods to apply OLAP techniques to multidimensional databases that leverage the existing performance-enhancing technique known as practical pre-aggregation, making this technique relevant to a much wider range of medical applications as logistic support for data warehousing techniques. The transformations have low computational complexity and may be implemented using standard relational database technology. The paper also describes how to integrate the transformed hierarchies into current OLAP systems, transparently to the user, and proposes a flexible, "multimodel" federated system for extending OLAP querying to external object databases.
NASA Technical Reports Server (NTRS)
Gardner, J. E.; Dixon, S. C.
1984-01-01
Research was done in the following areas: development and validation of solution algorithms, modeling techniques, and integrated finite elements for flow-thermal-structural analysis and design; optimization of aircraft and spacecraft for the best performance; reduction of loads and increase in the dynamic structural stability of flexible airframes by the use of active control; methods for predicting steady and unsteady aerodynamic loads and aeroelastic characteristics of flight vehicles, with emphasis on the transonic range; and methods for predicting and reducing helicopter vibrations.
NASA Technical Reports Server (NTRS)
Dobrinskaya, Tatiana
2015-01-01
This paper suggests a new method for optimizing yaw maneuvers on the International Space Station (ISS). Yaw rotations are the most common large maneuvers on the ISS, often used for docking and undocking operations as well as for other activities. When maneuver optimization is used, large maneuvers that were performed on thrusters can be performed either using control moment gyroscopes (CMG) or with significantly reduced thruster firings. Maneuver optimization helps to save expensive propellant and reduce structural loads, an important factor for the ISS service life. In addition, optimized maneuvers reduce contamination of critical elements of the vehicle structure, such as solar arrays. This paper presents an analytical solution for optimizing yaw attitude maneuvers. Equations describing the pitch and roll motion needed to counteract the major torques during a yaw maneuver are obtained, and a yaw rate profile is proposed. The paper also describes the physical basis of the suggested optimization approach. In the obtained optimized case, the torques are significantly reduced. This torque reduction was compared to the existing optimization method, which utilizes a computational solution. It was shown that the attitude profiles and the torque reduction match well for the two methods of optimization, and simulations using the ISS flight software showed similar propellant consumption for both. The analytical solution proposed in this paper has major benefits with respect to the computational approach. In contrast to the current computational solution, which can only be calculated on the ground, the analytical solution does not require extensive computational resources and can be implemented in the onboard software, making maneuver execution automatic. An automatic maneuver significantly simplifies operations and, if necessary, allows a maneuver to be performed without communication with the ground. It also reduces the probability of command errors. The suggested analytical solution provides a new method of maneuver optimization which is less complicated, automatic, and more universal. The maneuver optimization approach presented in this paper can be used not only for the ISS but also for other orbiting space vehicles.
Finite element approximation of an optimal control problem for the von Karman equations
NASA Technical Reports Server (NTRS)
Hou, L. Steven; Turner, James C.
1994-01-01
This paper is concerned with optimal control problems for the von Karman equations with distributed controls. We first show that optimal solutions exist. We then show that Lagrange multipliers may be used to enforce the constraints and derive an optimality system from which optimal states and controls may be deduced. Finally we define finite element approximations of solutions for the optimality system and derive error estimates for the approximations.
A Vision for Co-optimized T&D System Interaction with Renewables and Demand Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Lindsay; Zéphyr, Luckny; Cardell, Judith B.
The evolution of the power system to the reliable, efficient and sustainable system of the future will involve development of both demand- and supply-side technology and operations. The use of demand response to counterbalance the intermittency of renewable generation brings the consumer into the spotlight. Though individual consumers are interconnected at the low-voltage distribution system, these resources are typically modeled as variables at the transmission network level. In this paper, a vision for co-optimized interaction of distribution systems, or microgrids, with the high-voltage transmission system is described. In this framework, microgrids encompass consumers, distributed renewables and storage. The energy management system of the microgrid can also sell (buy) excess (necessary) energy from the transmission system. Preliminary work explores price mechanisms to manage the microgrid and its interactions with the transmission system. Wholesale market operations are addressed through the development of scalable stochastic optimization methods that provide the ability to co-optimize interactions between the transmission and distribution systems. Modeling challenges of the co-optimization are addressed via solution methods for large-scale stochastic optimization, including decomposition and stochastic dual dynamic programming.
An Algorithm for the Mixed Transportation Network Design Problem
Liu, Xinyu; Chen, Qun
2016-01-01
This paper proposes an optimization algorithm, the dimension-down iterative algorithm (DDIA), for solving the mixed transportation network design problem (MNDP), which is generally expressed as a mathematical program with equilibrium constraints (MPEC). The upper level of the MNDP aims to optimize network performance via both the expansion of existing links and the addition of new candidate links, whereas the lower level is a traditional Wardrop user equilibrium (UE) problem. The idea of the proposed algorithm is to reduce the dimensions of the problem: one group of variables (discrete/continuous) is fixed in order to optimize the other group (continuous/discrete), alternately, so the problem is transformed into solving a series of CNDPs (continuous network design problems) and DNDPs (discrete network design problems) repeatedly until it converges to the optimal solution. The advantage of the proposed algorithm is that its solution process is very simple and easy to apply. Numerical examples show that for the MNDP without a budget constraint, the optimal solution can be found within a few iterations with DDIA. For the MNDP with a budget constraint, however, the result depends on the selection of initial values, which leads to different local optimal solutions. Some thoughts are given on how to derive meaningful initial values, such as by considering the budgets of new and reconstruction projects separately.
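DDIA's dimension-down idea, alternately fixing one variable group while optimizing the other, can be sketched on a toy objective (assumed here for illustration; the paper's actual model is a network equilibrium problem):

```python
# Block-coordinate sketch of the DDIA alternation: fix the discrete variable
# and optimize the continuous one (the "CNDP" step), then fix the continuous
# variable and optimize the discrete one (the "DNDP" step), until the
# incumbent stops improving.

def ddia_sketch(f, discrete_vals, cont_grid, max_iters=20):
    d = discrete_vals[0]
    c = cont_grid[0]
    prev = None
    for _ in range(max_iters):
        c = min(cont_grid, key=lambda cc: f(d, cc))      # continuous step
        d = min(discrete_vals, key=lambda dd: f(dd, c))  # discrete step
        if prev is not None and f(d, c) >= prev:
            break
        prev = f(d, c)
    return d, c, f(d, c)

# Toy objective with optimum at d = 2 (e.g. lanes added), c = 1.0 (capacity).
f = lambda d, c: (c - d / 2) ** 2 + (d - 2) ** 2
d, c, val = ddia_sketch(f, [0, 1, 2, 3], [i / 10 for i in range(0, 31)])
print(d, c, val)
```

As the abstract notes for the budget-constrained case, this kind of alternation only guarantees a local optimum, so the result can depend on the starting values.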
Optimal configuration of power grid sources based on optimal particle swarm algorithm
NASA Astrophysics Data System (ADS)
Wen, Yuanhua
2018-04-01
To optimize the configuration of power grid sources, an improved particle swarm optimization algorithm is proposed. First, the concepts of multi-objective optimization and the Pareto solution set are introduced. Then, the performance of the classical genetic algorithm, the classical particle swarm optimization algorithm, and the improved particle swarm optimization algorithm is analyzed, and the three algorithms are simulated. Comparison of the test results demonstrates the superiority of the improved algorithm in convergence and optimization performance, laying the foundation for subsequent micro-grid power configuration optimization.
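For reference, the classical particle swarm optimization the abstract benchmarks against can be written in a few lines. This is a generic PSO on a standard test function, not the paper's grid-source model: each particle is pulled toward its own best position and the swarm's best position.

```python
# Classical PSO sketch: velocity update mixes inertia (w), a cognitive pull
# toward the particle's personal best (c1), and a social pull toward the
# swarm's global best (c2).
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]        # per-particle best positions
    gbest = min(pbest, key=f)[:]      # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest, f(gbest)

sphere = lambda x: sum(v * v for v in x)  # global minimum 0 at the origin
best, val = pso(sphere, dim=3)
print(val)  # a value very close to zero
```

"Improved" PSO variants typically modify the inertia weight schedule or the neighborhood topology; the parameters `w`, `c1`, `c2` above are conventional defaults, not values from the paper.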
A preliminary study to metaheuristic approach in multilayer radiation shielding optimization
NASA Astrophysics Data System (ADS)
Arif Sazali, Muhammad; Rashid, Nahrul Khair Alang Md; Hamzah, Khaidzir
2018-01-01
Metaheuristics are high-level algorithmic concepts that can be used to develop heuristic optimization algorithms. One of their applications is to find optimal or near-optimal solutions to combinatorial optimization problems (COPs) such as scheduling, vehicle routing, and timetabling. Combinatorial optimization deals with finding optimal combinations or permutations in a given set of problem components when exhaustive search is not feasible. A radiation shield made of several layers of different materials can be regarded as a COP. The time taken to optimize the shield may be too high when several parameters are involved, such as the number of materials, the thickness of layers, and the arrangement of materials. Metaheuristics can be applied to reduce the optimization time, trading guaranteed optimal solutions for near-optimal solutions found in a comparably short amount of time. The application of metaheuristics to radiation shield optimization is lacking. In this paper, we present a review of the suitability of metaheuristics for multilayer shielding design, specifically the genetic algorithm and the ant colony optimization (ACO) algorithm, and we propose an optimization model based on the ACO method.
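An ACO for a layered-shield-style problem can be sketched as an assignment of one material to each layer, with pheromone trails biasing later ants toward assignments that scored well earlier. The cost model below is a toy stand-in (a real shield would be scored by a transport or attenuation calculation), and all parameter values are assumptions.

```python
# Minimal ACO sketch for a layer-material assignment COP: pheromone tau[l][m]
# is the learned desirability of material m in layer l; each iteration ants
# sample assignments, then trails evaporate and the best-so-far is reinforced.
import random

def aco_assign(cost, n_layers, materials, ants=20, iters=50,
               evap=0.5, seed=3):
    rng = random.Random(seed)
    tau = [[1.0] * len(materials) for _ in range(n_layers)]
    best, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            sol = [rng.choices(range(len(materials)), tau[layer])[0]
                   for layer in range(n_layers)]
            c = cost(sol)
            if c < best_cost:
                best, best_cost = sol[:], c
        for layer in range(n_layers):        # evaporate all trails...
            for m in range(len(materials)):
                tau[layer][m] *= (1 - evap)
            # ...then reinforce the best-so-far assignment
            tau[layer][best[layer]] += 1.0 / (1.0 + best_cost)
    return [materials[m] for m in best], best_cost

# Toy cost: pretend alternating heavy/light layers attenuate best.
target = [0, 1, 0, 1]  # indices of the ideal material per layer
cost = lambda sol: sum(a != b for a, b in zip(sol, target))
layers, c = aco_assign(cost, 4, ["lead", "poly"])
print(layers, c)  # converges to the zero-cost arrangement
```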
Singular perturbation analysis of AOTV-related trajectory optimization problems
NASA Technical Reports Server (NTRS)
Calise, Anthony J.; Bae, Gyoung H.
1990-01-01
The problem of real-time guidance and optimal control of Aeroassisted Orbit Transfer Vehicles (AOTVs) was addressed using singular perturbation theory as an underlying method of analysis. Trajectories were optimized with the objective of minimum energy expenditure in the atmospheric phase of the maneuver. Two major problem areas were addressed: optimal reentry, and synergetic plane change with aeroglide. For the reentry problem, several reduced-order models were analyzed with the objective of optimal changes in heading with minimum energy loss. It was demonstrated that a further model-order reduction to a single-state model is possible through the application of singular perturbation theory. The optimal solution for the reduced problem defines an optimal altitude profile dependent on the current energy level of the vehicle. A separate boundary layer analysis is used to account for altitude and flight path angle dynamics, and to obtain lift and bank angle control solutions. By considering alternative approximations to solve the boundary layer problem, three guidance laws were derived, each having an analytic feedback form. The guidance laws were evaluated using a Maneuvering Reentry Research Vehicle model, and all three laws were found to be near optimal. For the problem of synergetic plane change with aeroglide, a difficult terminal boundary layer control problem arises which has so far proved analytically intractable. Thus a predictive/corrective solution was developed to satisfy the terminal constraints on altitude and flight path angle. A composite guidance solution was obtained by combining the optimal reentry solution with the predictive/corrective guidance method. Numerical comparisons with the corresponding optimal trajectory solutions show that the resulting performance is very close to optimal. An attempt was also made to obtain numerically optimized trajectories for the case where heating rate is constrained: a first-order state variable inequality constraint was imposed on the full-order AOTV point-mass equations of motion, using a simple aerodynamic heating rate model.
Optimal Solution for an Engineering Applications Using Modified Artificial Immune System
NASA Astrophysics Data System (ADS)
Padmanabhan, S.; Chandrasekaran, M.; Ganesan, S.; patan, Mahamed Naveed Khan; Navakanth, Polina
2017-03-01
Engineering optimization plays an essential role in several engineering application areas such as process design, product design, re-engineering, and new product development. In engineering, a very good solution is achieved by comparing a number of different candidate solutions using prior problem knowledge. Optimization algorithms provide systematic and efficient ways of constructing and comparing new design solutions in order to arrive at an optimal design, maximizing solution efficiency and achieving the best design outcome. In this paper, a new evolutionary Modified Artificial Immune System (MAIS) algorithm is used to optimize an engineering application: gear drive design. The results are compared with the existing design.
McTwo: a two-step feature selection algorithm based on maximal information coefficient.
Ge, Ruiquan; Zhou, Manli; Luo, Youxi; Meng, Qinghan; Mai, Guoqin; Ma, Dongli; Wang, Guoqing; Zhou, Fengfeng
2016-03-23
High-throughput bio-OMIC technologies are producing high-dimensional data from bio-samples at an ever increasing rate, whereas the training sample number in a traditional experiment remains small due to various difficulties. This "large p, small n" paradigm in the area of biomedical "big data" may be at least partly solved by feature selection algorithms, which select only features significantly associated with phenotypes. Feature selection is an NP-hard problem. Due to the exponentially increasing time required to find the globally optimal solution, all existing feature selection algorithms employ heuristic rules to find locally optimal solutions, and their solutions achieve different performances on different datasets. This work describes a feature selection algorithm based on a recently published correlation measurement, the Maximal Information Coefficient (MIC). The proposed algorithm, McTwo, aims to select features that are associated with phenotypes and independent of each other, while achieving high classification performance with the nearest neighbor algorithm. Based on a comparative study of 17 datasets, McTwo performs about as well as or better than existing algorithms, with significantly reduced numbers of selected features. The features selected by McTwo also appear to have particular biomedical relevance to the phenotypes, based on the literature. McTwo selects a feature subset with very good classification performance as well as a small feature number, so it may represent a complementary feature selection algorithm for high-dimensional biomedical datasets.
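The two-step structure, first keep features associated with the phenotype, then drop mutually redundant ones, can be sketched as below. Pearson correlation is used here as a stand-in for MIC (which requires a dedicated estimator), and the thresholds and tiny synthetic dataset are assumptions for illustration, not McTwo's actual procedure.

```python
# Two-step relevance/redundancy filter in the spirit of McTwo.

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

def two_step_select(features, phenotype, assoc_thresh=0.5, redund_thresh=0.9):
    # Step 1: keep features sufficiently associated with the phenotype.
    relevant = [i for i, f in enumerate(features)
                if abs(pearson(f, phenotype)) >= assoc_thresh]
    # Step 2: greedily drop features redundant with an already-selected one.
    selected = []
    for i in sorted(relevant,
                    key=lambda i: -abs(pearson(features[i], phenotype))):
        if all(abs(pearson(features[i], features[j])) < redund_thresh
               for j in selected):
            selected.append(i)
    return sorted(selected)

y = [0, 0, 1, 1, 0, 1]
features = [
    [1, 2, 9, 8, 1, 9],  # strongly associated with y
    [1, 2, 9, 8, 2, 9],  # near-duplicate of feature 0 (redundant)
    [5, 5, 5, 6, 5, 5],  # weakly associated noise feature
]
print(two_step_select(features, y))  # [0]
```

The redundancy step is what keeps the selected feature number small, which the abstract highlights as McTwo's main practical advantage.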
Transient Thermoelectric Solution Employing Green's Functions
NASA Technical Reports Server (NTRS)
Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred
2014-01-01
The study works to formulate convenient solutions to the problem of a thermoelectric couple operating under time-varying conditions. Transient operation of thermoelectrics will become increasingly common as thermoelectric technology permits an increasing number of applications. A number of terrestrial applications, in contrast to steady-state space applications, can subject devices to time-varying conditions. For instance, thermoelectrics can be exposed to transient conditions in the automotive industry depending on engine system dynamics, along with factors like driving style. In an effort to generalize the thermoelectric solution, a Green's function method is used, so that arbitrary time-varying boundary and initial conditions may be applied to the system without reformulation. The solution demonstrates that in thermoelectric applications of a transient nature, additional factors must be taken into account and optimized. For instance, the material's specific heat and density become critical parameters, in addition to the thermal mass of a heat sink or the details of the thermal profile, such as oscillating frequency. The calculations can yield the optimum operating conditions to maximize power output and/or efficiency for a given type of device.
NASA Astrophysics Data System (ADS)
Hassan, Rania A.
In the design of complex large-scale spacecraft systems that involve a large number of components and subsystems, many specialized state-of-the-art design tools are employed to optimize the performance of various subsystems. However, there is no structured system-level concept-architecting process. Currently, spacecraft design is heavily based on the heritage of the industry. Old spacecraft designs are modified to adapt to new mission requirements, and feasible solutions, rather than optimal ones, are often all that is achieved. During the conceptual phase of the design, the choices available to designers are predominantly discrete variables describing major subsystems' technology options and redundancy levels. The complexity of spacecraft configurations makes the number of system design variables that need to be traded off in an optimization process prohibitive when manual techniques are used. Such a discrete problem is well suited for solution with a Genetic Algorithm, a global search technique that performs optimization-like tasks. This research presents a systems engineering framework that places design requirements at the core of the design activities and transforms the design paradigm for spacecraft systems to a top-down approach rather than the current bottom-up approach. To facilitate decision-making in the early phases of the design process, the population-based search nature of the Genetic Algorithm is exploited to provide computationally inexpensive tools, compared to the state of the practice, for both multi-objective design optimization and design optimization under uncertainty. In terms of computational cost, those tools are nearly on the same order of magnitude as a standard single-objective deterministic Genetic Algorithm. The use of a multi-objective design approach provides system designers with a clear tradeoff optimization surface that allows them to understand the effect of their decisions on all the design objectives under consideration simultaneously. Incorporating uncertainties avoids large safety margins and unnecessarily high redundancy levels. The focus on low computational cost for the optimization tools stems from the objective that improving the design of complex systems should not be achieved at the expense of a costly design methodology.
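A genetic algorithm over discrete subsystem options, the kind of encoding described above, can be sketched as follows. The chromosome, fitness function, and GA parameters below are toy assumptions, not the thesis's spacecraft model: each gene picks one technology option for a subsystem, and selection, crossover, and mutation evolve the mix.

```python
# Minimal GA for discrete concept selection: truncation selection keeps the
# top half each generation (implicit elitism), children come from one-point
# crossover plus per-gene mutation.
import random

def ga(fitness, options_per_gene, pop_size=30, gens=60, mut=0.1, seed=7):
    rng = random.Random(seed)
    pop = [[rng.randrange(n) for n in options_per_gene]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(a))     # one-point crossover
            child = a[:cut] + b[cut:]
            for g, n in enumerate(options_per_gene):
                if rng.random() < mut:         # per-gene mutation
                    child[g] = rng.randrange(n)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: one point per subsystem whose option matches a known ideal.
ideal = [2, 0, 1, 3, 1]
fitness = lambda chrom: sum(g == i for g, i in zip(chrom, ideal))
best = ga(fitness, options_per_gene=[3, 2, 2, 4, 2])
print(best, fitness(best))  # recovers the ideal option mix
```

Replacing the scalar fitness with Pareto ranking over several objectives, or with an expectation over sampled uncertainties, yields the multi-objective and under-uncertainty variants the abstract describes, at roughly the same cost per generation.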
Designing and optimizing a healthcare kiosk for the community.
Lyu, Yongqiang; Vincent, Christopher James; Chen, Yu; Shi, Yuanchun; Tang, Yida; Wang, Wenyao; Liu, Wei; Zhang, Shuangshuang; Fang, Ke; Ding, Ji
2015-03-01
Investigating new ways to deliver care, such as the use of self-service kiosks to collect and monitor signs of wellness, supports healthcare efficiency and inclusivity. Self-service kiosks offer this potential, but there is a need for solutions to meet acceptable standards, e.g. provision of accurate measurements. This study investigates the design and optimization of a prototype healthcare kiosk to collect vital signs measures. The design problem was decomposed, formalized, focused and used to generate multiple solutions. Systematic implementation and evaluation allowed for the optimization of measurement accuracy, first for individuals and then for a population. The optimized solution was tested independently to check the suitability of the methods, and quality of the solution. The process resulted in a reduction of measurement noise and an optimal fit, in terms of the positioning of measurement devices. This guaranteed the accuracy of the solution and provides a general methodology for similar design problems. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Optimal percolation on multiplex networks.
Osat, Saeed; Faqeeh, Ali; Radicchi, Filippo
2017-11-16
Optimal percolation is the problem of finding the minimal set of nodes whose removal from a network fragments the system into non-extensive disconnected clusters. The solution to this problem is important for strategies of immunization in disease spreading, and influence maximization in opinion dynamics. Optimal percolation has received considerable attention in the context of isolated networks. However, its generalization to multiplex networks has not yet been considered. Here we show that approximating the solution of the optimal percolation problem on a multiplex network with solutions valid for single-layer networks extracted from the multiplex may have serious consequences in the characterization of the true robustness of the system. We reach this conclusion by extending many of the methods for finding approximate solutions of the optimal percolation problem from single-layer to multiplex networks, and performing a systematic analysis on synthetic and real-world multiplex networks.
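A common single-layer baseline for this problem, the kind of solution the authors warn can mischaracterize multiplex robustness, is adaptive high-degree removal: repeatedly delete the highest-degree surviving node until the giant component falls below a target size. The graph below is a toy illustration.

```python
# Greedy adaptive-degree dismantling sketch for optimal percolation on a
# single-layer network (a heuristic baseline, not an exact solver).

def giant_component(nodes, edges):
    adj = {v: set() for v in nodes}
    for a, b in edges:
        if a in adj and b in adj:
            adj[a].add(b)
            adj[b].add(a)
    seen, best = set(), 0
    for v in adj:
        if v not in seen:
            stack, size = [v], 0
            seen.add(v)
            while stack:                 # DFS over one component
                u = stack.pop()
                size += 1
                for w in adj[u]:
                    if w not in seen:
                        seen.add(w)
                        stack.append(w)
            best = max(best, size)
    return best

def dismantle(nodes, edges, target):
    nodes = set(nodes)
    removed = []
    while giant_component(nodes, edges) > target:
        # adaptively recompute degrees on the surviving subgraph
        deg = {v: 0 for v in nodes}
        for a, b in edges:
            if a in nodes and b in nodes:
                deg[a] += 1
                deg[b] += 1
        hub = max(nodes, key=lambda v: deg[v])
        nodes.remove(hub)
        removed.append(hub)
    return removed

# A star plus a triangle: removing the star's hub fragments the graph.
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (4, 5), (5, 6), (6, 4)]
print(dismantle(range(7), edges, target=3))  # the hub (node 0) goes first
```

On a multiplex network a node removal deletes its replicas in every layer at once, which is precisely why per-layer solutions like this one can badly misjudge the minimal removal set.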
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Hornby, Gregory; Ishihara, Abe
2013-01-01
This paper describes two methods of trajectory optimization to obtain an optimal minimum-fuel-to-climb trajectory for an aircraft. The first method is based on the adjoint method, and the second is a direct trajectory optimization method using a Chebyshev polynomial approximation and a cubic spline approximation. The approximate optimal trajectory is compared with the adjoint-based optimal trajectory, which is considered the true optimal solution of the trajectory optimization problem. The adjoint-based optimization problem leads to a singular optimal control solution, which results in a bang-singular-bang optimal control.
Efficient and secure outsourcing of genomic data storage.
Sousa, João Sá; Lefebvre, Cédric; Huang, Zhicong; Raisaro, Jean Louis; Aguilar-Melchor, Carlos; Killijian, Marc-Olivier; Hubaux, Jean-Pierre
2017-07-26
Cloud computing is becoming the preferred solution for efficiently dealing with the increasing amount of genomic data. Yet, outsourcing storage and processing of sensitive information, such as genomic data, comes with important concerns related to privacy and security. This calls for new sophisticated techniques that ensure data protection from untrusted cloud providers and that still enable researchers to obtain useful information. We present a novel privacy-preserving algorithm for fully outsourcing the storage of large genomic data files to a public cloud and enabling researchers to efficiently search for variants of interest. In order to protect data and query confidentiality from possible leakage, our solution exploits optimal encoding for genomic variants and combines it with homomorphic encryption and private information retrieval. Our proposed algorithm is implemented in C++ and was evaluated on real data as part of the 2016 iDash Genome Privacy-Protection Challenge. Results show that our solution outperforms the state-of-the-art solutions and enables researchers to search over millions of encrypted variants in a few seconds. Contrary to prior beliefs that sophisticated privacy-enhancing technologies (PETs) are impractical for real operational settings, our solution demonstrates that, in the case of genomic data, PETs are very efficient enablers.
State-of-The-Art of Modeling Methodologies and Optimization Operations in Integrated Energy System
NASA Astrophysics Data System (ADS)
Zheng, Zhan; Zhang, Yongjun
2017-08-01
Rapid advances in low-carbon technologies and smart energy communities are reshaping future energy patterns. Uncertainty on both the production and demand sides is paving the way toward decentralized management. Current energy infrastructures cannot meet these supply and consumption challenges, along with emerging environmental and economic requirements. The Integrated Energy System (IES), in which electric power, natural gas, and heating are coupled with one another, is poised to become one of the main comprehensive and optimal energy solutions, offering high flexibility, friendly absorption of renewables, and improved efficiency. Against these global energy trends, we present this literature review. First, a precise definition and the characteristics of the IES are given, and modeling issues for energy subsystems and coupling elements are analyzed. We point out that decomposed and integrated analysis methods are the key algorithms for IES optimization operation problems, and then explore IES market mechanisms. Finally, several future research directions for the IES, such as dynamic modeling, peer-to-peer trading, and coupled market design, are discussed.
Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier
2009-01-01
The increasing technology of high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is toward the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and Fuzzy Clustering. DSA is an optimization approach that minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used in the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989
Wang, Yan; Xi, Chengyu; Zhang, Shuai; Zhang, Wenyu; Yu, Dejian
2015-01-01
As E-government continues to develop with ever-increasing speed, the requirement to enhance traditional government systems and affairs with electronic methods that are more effective and efficient is becoming critical. As a new product of information technology, E-tendering is becoming an inevitable reality owing to its efficiency, fairness, transparency, and accountability. Thus, developing and promoting government E-tendering (GeT) is imperative. This paper presents a hybrid approach combining genetic algorithm (GA) and Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) to enable GeT to search for the optimal tenderer efficiently and fairly under circumstances where the attributes of the tenderers are expressed as fuzzy number intuitionistic fuzzy sets (FNIFSs). GA is applied to obtain the optimal weights of evaluation criteria of tenderers automatically. TOPSIS is employed to search for the optimal tenderer. A prototype system is built and validated with an illustrative example from GeT to verify the feasibility and availability of the proposed approach.
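The closeness-coefficient ranking at the heart of TOPSIS is compact enough to sketch. The sketch below is illustrative only and uses crisp scores; the paper's approach operates on fuzzy number intuitionistic fuzzy sets with weights evolved by the GA, and the function and parameter names here are hypothetical.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) over criteria (columns) by their
    relative closeness to the ideal solution; higher is better."""
    m, n = len(matrix), len(matrix[0])
    # vector-normalize each criterion column, then apply the weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # ideal and anti-ideal points (max for benefit criteria, min for cost)
    cols = list(zip(*v))
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    anti = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)  # distance to the ideal solution
        d_neg = math.dist(row, anti)   # distance to the anti-ideal solution
        scores.append(d_neg / (d_pos + d_neg))  # closeness coefficient
    return scores
```

A GA layered on top of such a ranking would treat `weights` as the chromosome and the quality of the resulting tenderer ranking as the fitness.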
Improving compliance in remote healthcare systems through smartphone battery optimization.
Alshurafa, Nabil; Eastwood, Jo-Ann; Nyamathi, Suneil; Liu, Jason J; Xu, Wenyao; Ghasemzadeh, Hassan; Pourhomayoun, Mohammad; Sarrafzadeh, Majid
2015-01-01
Remote health monitoring (RHM) has emerged as a solution to help reduce the cost burden of unhealthy lifestyles and aging populations. Enhancing compliance to prescribed medical regimens is an essential challenge to many systems, even those using smartphone technology. In this paper, we provide a technique to improve smartphone battery consumption and examine the effects of smartphone battery lifetime on compliance, in an attempt to enhance users' adherence to remote monitoring systems. We deploy WANDA-CVD, an RHM system for patients at risk of cardiovascular disease (CVD), using a wearable smartphone for detection of physical activity. We tested the battery optimization technique in an in-lab pilot study and validated its effects on compliance in the Women's Heart Health Study. The battery optimization technique enhanced the battery lifetime by 192% on average, resulting in a 53% increase in compliance in the study. A system like WANDA-CVD can help increase smartphone battery lifetime for RHM systems monitoring physical activity.
Hybrid General Pattern Search and Simulated Annealing for Industrial Production Planning Problems
NASA Astrophysics Data System (ADS)
Vasant, P.; Barsoum, N.
2010-06-01
In this paper, the General Pattern Search (GPS) method and Simulated Annealing (SA) are hybridized in the optimization process in order to find the global optimal solution for the fitness function and decision variables with minimum computational CPU time. The real strength of the SA approach is tested on this case-study problem of industrial production planning. This is due to SA's great advantage of easily escaping local minima by accepting uphill moves through a probabilistic procedure in the final stages of the optimization process. Vasant [1] in his Ph.D. thesis provided 16 different heuristic and meta-heuristic techniques for solving industrial production problems with nonlinear cubic objective functions, eight decision variables, and 29 constraints. In this paper, fuzzy technological problems are solved using the hybrid general pattern search and simulated annealing technique. The simulated and computational results are compared with those of various other evolutionary techniques.
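The probabilistic uphill acceptance that lets SA escape local minima can be sketched in a few lines. This is a generic, illustrative sketch, not the hybrid GPS-SA method of the paper; the function names and the geometric cooling schedule are assumptions.

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=1.0, cooling=0.95, iters=500):
    """Minimize f starting from x0; uphill moves are accepted with
    probability exp(-delta / T), which shrinks as T cools."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = neighbor(x)
        fy = f(y)
        delta = fy - fx
        # downhill moves are always accepted; uphill moves with Boltzmann probability
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x, fx = y, fy
        if fx < fbest:
            best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest
```

For instance, minimizing (x - 3)^2 from x = 0 with uniform random steps converges near x = 3 once the temperature has cooled and only improving moves are accepted.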
Optimized positioning of autonomous surgical lamps
NASA Astrophysics Data System (ADS)
Teuber, Jörn; Weller, Rene; Kikinis, Ron; Oldhafer, Karl-Jürgen; Lipp, Michael J.; Zachmann, Gabriel
2017-03-01
We consider the problem of automatically finding optimal positions for surgical lamps throughout the whole surgical procedure, where we assume that future lamps could be robotized. We propose a two-tiered optimization technique for the real-time autonomous positioning of those robotized surgical lamps. Typically, finding optimal positions for surgical lamps is a multi-dimensional problem with several, partly conflicting, objectives, such as optimal lighting conditions at every point in time while minimizing the movement of the lamps in order to avoid distracting the surgeon. Consequently, we use multi-objective optimization (MOO) to find optimal positions in real time during the entire surgery. Due to the conflicting objectives, there is usually not a single optimal solution for such problems but a set of solutions that realizes a Pareto front. When our algorithm selects a solution from this set, it additionally has to consider the individual preferences of the surgeon. This is a highly non-trivial task because the relationship between the solution and the parameters is not obvious. We have developed a novel meta-optimization that addresses exactly this challenge. It delivers an easy-to-understand set of presets for the parameters and allows a balance between lamp movement and lamp obstruction. This meta-optimization can be pre-computed for different kinds of operations and is then used by our online optimization for the selection of the appropriate Pareto solution. Both optimization approaches use data obtained by a depth camera that captures not only the surgical site but also the environment around the operating table. We have evaluated our algorithms with data recorded during a real open abdominal surgery, which is available for use for scientific purposes. The results show that our meta-optimization produces viable parameter sets for different parts of an intervention even when trained on a small portion of it.
Multidisciplinary optimization in aircraft design using analytic technology models
NASA Technical Reports Server (NTRS)
Malone, Brett; Mason, W. H.
1991-01-01
An approach to multidisciplinary optimization is presented which combines the Global Sensitivity Equation method, parametric optimization, and analytic technology models. The result is a powerful yet simple procedure for identifying key design issues. It can be used both to investigate technology integration issues very early in the design cycle and to establish the information flow framework between disciplines for use in multidisciplinary optimization projects using much more computationally intensive representations of each technology. To illustrate the approach, an examination of the optimization of a short-takeoff heavy transport aircraft is presented for numerous combinations of performance and technology constraints.
Design of a backlighting structure for very large-area luminaires
NASA Astrophysics Data System (ADS)
Carraro, L.; Mäyrä, A.; Simonetta, M.; Benetti, G.; Tramonte, A.; Benedetti, M.; Randone, E. M.; Ylisaukko-Oja, A.; Keränen, K.; Facchinetti, T.; Giuliani, G.
2017-02-01
A novel approach to an RGB semiconductor-LED-based backlighting system is developed to satisfy the requirements of the LUMENTILE project funded by the European Commission, whose scope is to develop a luminous electronic tile foreseen to be manufactured in millions of square meters each year. This unconventionally large area of uniform, high-brightness illumination requires a specific optical design to keep production cost low while maintaining high optical extraction efficiency and a reduced thickness of the structure, as imposed by architectural design constraints. The proposed solution is based on a light-guiding layer illuminated by LEDs in an edge configuration or in a planar arrangement. The light-guiding slab is finished with a reflective top interface and a diffusive or reflective bottom interface/layer. Patterning is used both for the top interface (local removal of reflection and generation of light-scattering centers) and for the bottom layer (using a dark/bright printed pattern). Computer-based optimization algorithms based on ray tracing are used to find optimal solutions in terms of uniformity of illumination of the top surface and overall light extraction efficiency. Through a closed-loop optimization process that assesses the illumination uniformity of the top surface, the algorithm generates the desired optimized top and bottom patterns, depending on the number of LED sources used, their geometry, and the thickness of the guiding layer. Specific low-cost technologies to realize the patterning are discussed, with the goal of keeping the production cost of these very large-area luminaires below $100 per square meter.
USDA-ARS?s Scientific Manuscript database
Solution blow spinning (SBS) is a process to produce non-woven fiber sheets with high porosity and an extremely large amount of surface area. In this study, a Box-Behnken experimental design (BBD) was used to optimize the processing parameters for the production of nanofibers from polymer solutions ...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-12
... Business Technology Solutions Central Security Services Iselin, New Jersey; TA-W-82,634B, Prudential Global Business Technology Solutions Central Security Services Plymouth, Minnesota; TA- W-82,634C, Prudential Global Business Technology Solutions Central Security Services Scottsdale, Arizona; TA-W-82,634D...
Development of a Tunable Electromechanical Acoustic Liner for Engine Nacelles
NASA Technical Reports Server (NTRS)
Liu, Fei; Sheplak, Mark; Cattafesta, Louis N., III
2007-01-01
This report describes the development of a tunable electromechanical Helmholtz resonator (EMHR) for engine nacelles using smart-materials technology. This effort addresses both near-term and long-term goals for tunable electromechanical acoustic liner technology for the Quiet Aircraft Technology (QAT) Program. Analytical models, i.e., a lumped element model (LEM) and a transfer matrix (TM) representation of the EMHR, have been developed to predict the acoustic behavior of the EMHR. The models have been implemented in a MATLAB program and compared with measurement results, and their predictive performance is further improved with the aid of parameter extraction for the piezoelectric backplate. The EMHR has been experimentally investigated using the standard two-microphone method (TMM). The measurement results validated both the LEM and TM models of the EMHR, with good agreement between predicted and measured impedance. Short- and open-circuit loads define the limits of the tuning range using resistive and capacitive loads; there is approximately a 9% tuning limit under these conditions for the non-optimized resonator configuration studied. Inductive shunt loads result in a three-degree-of-freedom (3DOF) system and an enhanced tuning range of over 20% that is not restricted by the short- and open-circuit limits. Damping coefficient measurements for piezoelectric backplates in a vacuum chamber were also performed and indicate that the damping is dominated by structural damping losses, such as compliant boundaries, and other intrinsic loss mechanisms. Based on the models of the EMHR, a Pareto optimization design of the EMHR was performed for non-inductive loads. The EMHR with non-inductive loads is a 2DOF system with two resonant frequencies. The tuning ranges of the two resonant frequencies cannot be optimized simultaneously; a trade-off (i.e., a Pareto solution) must be reached.
The Pareto solution shows a designer how design trade-offs can be used to satisfy specific design requirements. The optimization design of the EMHR with inductive loads aims at optimal tuning of its three resonant frequencies. The results indicate that it is possible to keep the acoustic reactance of the resonator close to a constant over a given frequency range. An effort to mimic the second layer of the NASA 2DOF liner using a piezoelectric composite diaphragm has been made. The optimal acoustic reactance of the second layer of the NASA 2DOF liner is achieved using a thin PVDF composite diaphragm, but matching the acoustic resistance requires further investigation. Acoustic energy harvesting is achieved by connecting the EMHR to an energy reclamation circuit that converts the ac voltage signal across the piezoceramic to a conditioned dc signal. The energy harvesting experiment yields 16 mW of continuous power for an incident SPL of 153 dB, a level sufficient to power a variety of low-power electronic devices. Finally, technology transfer has been achieved by converting the original NASA ZKTL FORTRAN code to MATLAB while incorporating the models of the EMHR. Initial studies indicate that the EMHR is a promising technology that may enable low-power, lightweight, tunable engine nacelle liners. This technology, however, is very immature, and additional developments are required. Recommendations for future work include testing sample EMHR liner designs in NASA Langley's normal-incidence dual waveguide and grazing-incidence flow facility to evaluate both the impedance characteristics and the energy reclamation abilities. Additional design work is required for more complex tuning circuits with greater performance. Poor electromechanical coupling limited the electromechanical tuning capabilities of the proof-of-concept EMHR.
Different materials than those studied, and perhaps novel composite material systems, may dramatically improve the electromechanical coupling. Such improvements are essential to improved mimicking of existing double-layer liners.
Robot Manipulator Technologies for Planetary Exploration
NASA Technical Reports Server (NTRS)
Das, H.; Bao, X.; Bar-Cohen, Y.; Bonitz, R.; Lindemann, R.; Maimone, M.; Nesnas, I.; Voorhees, C.
1999-01-01
NASA exploration missions to Mars, initiated by the Mars Pathfinder mission in July 1997, will continue over the next decade. The missions require challenging innovations in robot design and improvements in autonomy to meet ambitious objectives under tight budget and time constraints. The authors are developing design tools, component technologies, and capabilities to address these needs for manipulation with robots for planetary exploration. The specific developments are: 1) a software analysis tool to reduce robot design iteration cycles and optimize design solutions, 2) new piezoelectric ultrasonic motors (USM) for lightweight, high-torque actuation in planetary environments, 3) use of advanced materials and structures for strong and lightweight robot arms, and 4) intelligent camera-image-coordinated autonomous control of robot arms for instrument placement and sample acquisition from a rover vehicle.
Gradient and shim technologies for ultra high field MRI
Winkler, Simone A.; Schmitt, Franz; Landes, Hermann; DeBever, Josh; Wade, Trevor; Alejski, Andrew
2017-01-01
Ultra High Field (UHF) MRI requires improved gradient and shim performance to fully realize the promised gains (SNR as well as spatial, spectral, diffusion resolution) that higher main magnetic fields offer. Both the more challenging UHF environment by itself, as well as the higher currents used in high performance coils, require a deeper understanding combined with sophisticated engineering modeling and construction, to optimize gradient and shim hardware for safe operation and for highest image quality. This review summarizes the basics of gradient and shim technologies, and outlines a number of UHF-related challenges and solutions. In particular, Lorentz forces, vibroacoustics, eddy currents, and peripheral nerve stimulation are discussed. Several promising UHF-relevant gradient concepts are described, including insertable gradient coils aimed at higher performance neuroimaging. PMID:27915120
Exploring the quantum speed limit with computer games
NASA Astrophysics Data System (ADS)
Sørensen, Jens Jakob W. H.; Pedersen, Mads Kock; Munch, Michael; Haikka, Pinja; Jensen, Jesper Halkjær; Planke, Tilo; Andreasen, Morten Ginnerup; Gajdacz, Miroslav; Mølmer, Klaus; Lieberoth, Andreas; Sherson, Jacob F.
2016-04-01
Humans routinely solve problems of immense computational complexity by intuitively forming simple, low-dimensional heuristic strategies. Citizen science (or crowd sourcing) is a way of exploiting this ability by presenting scientific research problems to non-experts. ‘Gamification’—the application of game elements in a non-game context—is an effective tool with which to enable citizen scientists to provide solutions to research problems. The citizen science games Foldit, EteRNA and EyeWire have been used successfully to study protein and RNA folding and neuron mapping, but so far gamification has not been applied to problems in quantum physics. Here we report on Quantum Moves, an online platform gamifying optimization problems in quantum physics. We show that human players are able to find solutions to difficult problems associated with the task of quantum computing. Players succeed where purely numerical optimization fails, and analyses of their solutions provide insights into the problem of optimization of a more profound and general nature. Using player strategies, we have thus developed a few-parameter heuristic optimization method that efficiently outperforms the most prominent established numerical methods. The numerical complexity associated with time-optimal solutions increases for shorter process durations. To understand this better, we produced a low-dimensional rendering of the optimization landscape. This rendering reveals why traditional optimization methods fail near the quantum speed limit (that is, the shortest process duration with perfect fidelity). Combined analyses of optimization landscapes and heuristic solution strategies may benefit wider classes of optimization problems in quantum physics and beyond.
NASA Astrophysics Data System (ADS)
Clemens, Joshua William
Game theory has applications across multiple fields, spanning from economic strategy to the optimal control of an aircraft and missile on an intercept trajectory. The idea of game theory is fascinating in that we can actually mathematically model real-world scenarios and determine optimal decision making. It may not always be easy to mathematically model certain real-world scenarios; nonetheless, game theory gives us an appreciation for the complexity involved in decision making. This complexity is especially apparent when the players involved have access to different information upon which to base their decision making (a nonclassical information pattern). Here we focus on the class of adversarial two-player games (sometimes referred to as pursuit-evasion games) with nonclassical information pattern. We present a two-sided (simultaneous) optimization solution method for the two-player linear quadratic Gaussian (LQG) multistage game. This direct solution method allows for further interpretation of each player's decision making (strategy) compared with previously used formal solution methods. In addition to the optimal control strategies, we present a saddle point proof and derive an expression for the optimal performance index value. We provide some numerical results in order to further interpret the optimal control strategies and to highlight real-world application of this game-theoretic optimal solution.
Chen, Ming-Xia; Zhang, Jian-Bao; Yu, Ji-Ping; Ye, Jing; Wei, Bao-Hong; Zhang, Yu-Jie
2013-06-01
To optimize the freeze-dried powder preparation technology of a recombinant hirudin-2 (rHV2) nanoparticle with bio-adhesive characteristics for nasal delivery, and to investigate its stability and its permeability through nasal membrane in vitro. The appearance and redispersion of the nanoparticles and the rHV2 encapsulation efficiency were taken as the evaluation indexes. The cryoprotector, the preparative technique, and the effects of illumination and high temperature on the stability of the rHV2 freeze-dried powder were investigated. Using the Franz diffusion cell technique, the permeability of rHV2 across rabbit nasal mucous membrane in chitosan solution, chitosan nanoparticles, and nanoparticle freeze-dried powder was compared with that in normal saline solution. The optimized preparation of the rHV2 nanoparticle freeze-dried powder was as follows: 5% trehalose and glucose (1:1) was used as the cryoprotector, and the nanoparticle solution was freeze-dried for 24 h in a vacuum freeze-dryer after being pre-frozen for 24 h. The content of rHV2 in the freeze-dried powder was 1.1 µg/mg. Illumination had little effect on the appearance, redispersion, and encapsulation efficiency of the rHV2 freeze-dried powder, whereas high temperature markedly affected the appearance of the nanoparticle freeze-dried powder. The permeability coefficient (P) of the nanoparticles was more than 5 times that in chitosan solution, indicating that chitosan nanoparticles increase the permeability of rHV2. The freeze-dried powder of chitosan nanoparticles can be a good nasal preparation of rHV2.
Toward an optimal online checkpoint solution under a two-level HPC checkpoint model
Di, Sheng; Robert, Yves; Vivien, Frederic; ...
2016-03-29
The traditional single-level checkpointing method suffers from significant overhead on large-scale platforms. Hence, multilevel checkpointing protocols have been studied extensively in recent years. The multilevel checkpoint approach allows different levels of checkpoints to be set (each with different checkpoint overheads and recovery abilities), in order to further improve the fault tolerance performance of extreme-scale HPC applications. How to optimize the checkpoint intervals for each level, however, is an extremely difficult problem. In this paper, we construct an easy-to-use two-level checkpoint model. Checkpoint level 1 deals with errors with low checkpoint/recovery overheads, such as transient memory errors, while checkpoint level 2 deals with hardware crashes such as node failures. Compared with previous optimization work, our new optimal checkpoint solution offers two improvements: (1) it is an online solution that does not require knowledge of the job length in advance, and (2) it shows that periodic patterns are optimal and determines the best pattern. We evaluate the proposed solution and compare it with the most up-to-date related approaches on an extreme-scale simulation testbed constructed from a real HPC application execution. Simulation results show that our proposed solution outperforms other optimized solutions and can improve performance significantly in some cases. Specifically, with the new solution the wall-clock time can be reduced by up to 25.3% over that of other state-of-the-art approaches. Lastly, a brute-force comparison with all possible patterns shows that our solution is always within 1% of the best pattern in the experiments.
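For background intuition only: the classical single-level, first-order result (the Young/Daly formula) relates the optimal checkpoint interval to the checkpoint cost and the platform MTBF. The paper's contribution generalizes well beyond this to a two-level, online setting; the sketch below is the textbook baseline, with hypothetical names.

```python
import math

def young_daly_interval(checkpoint_cost, mtbf):
    """First-order optimal time between checkpoints for a single
    checkpoint level: T_opt ~ sqrt(2 * C * MTBF)."""
    return math.sqrt(2.0 * checkpoint_cost * mtbf)

# e.g. a 60 s checkpoint on a platform with a one-day MTBF suggests
# checkpointing roughly every 54 minutes
interval = young_daly_interval(60.0, 86400.0)
```

Note the square-root scaling: quadrupling the checkpoint cost only doubles the optimal interval, which is part of why per-level tuning in a multilevel scheme matters.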
Local performance optimization for a class of redundant eight-degree-of-freedom manipulators
NASA Technical Reports Server (NTRS)
Williams, Robert L., II
1994-01-01
Local performance optimization for joint limit avoidance and manipulability maximization (singularity avoidance) is obtained by using the Jacobian matrix pseudoinverse and by projecting the gradient of an objective function into the Jacobian null space. Real-time redundancy optimization control is achieved for an eight-joint redundant manipulator having a three-axis spherical shoulder, a single elbow joint, and a four-axis spherical wrist. Symbolic solutions are used for both full-Jacobian and wrist-partitioned pseudoinverses, partitioned null-space projection matrices, and all objective function gradients. A kinematic limitation of this class of manipulators and the limitation's effect on redundancy resolution are discussed. Results obtained with graphical simulation are presented to demonstrate the effectiveness of local redundant manipulator performance optimization. Actual hardware experiments performed to verify the simulated results are also discussed. A major result is that the partitioned solution is desirable because of low computation requirements. The partitioned solution is suboptimal compared with the full solution because translational and rotational terms are optimized separately; however, the results show that the difference is not significant. Singularity analysis reveals that no algorithmic singularities exist for the partitioned solution. The partitioned and full solutions share the same physical manipulator singular conditions. When compared with the full solution, the partitioned solution is shown to be ill-conditioned in smaller neighborhoods of the shared singularities.
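The gradient-projection scheme described above can be sketched with a generic Jacobian. This is illustrative only; the paper's symbolic, partitioned eight-DOF solution is more elaborate, and the function and variable names here are assumptions.

```python
import numpy as np

def redundant_rate_solution(J, xdot, grad_H, k=1.0):
    """Joint rates qdot = J+ xdot + k (I - J+ J) grad_H:
    the pseudoinverse term tracks the commanded end-effector velocity,
    while the null-space term optimizes an objective H (e.g. joint-limit
    avoidance) without disturbing that velocity."""
    J_pinv = np.linalg.pinv(J)
    null_proj = np.eye(J.shape[1]) - J_pinv @ J  # Jacobian null-space projector
    return J_pinv @ xdot + k * null_proj @ grad_H
```

For a 2x3 Jacobian J = [[1, 0, 0], [0, 1, 0]], the third joint is free to follow the objective gradient while J @ qdot still equals the commanded xdot.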
New trends in astrodynamics and applications: optimal trajectories for space guidance.
Azimov, Dilmurat; Bishop, Robert
2005-12-01
This paper presents recent results on the development of optimal analytic solutions to the variational problem of trajectory optimization and their application in the construction of on-board guidance laws. The importance of employing the analytically integrated trajectories in a mission design is discussed. It is assumed that the spacecraft is equipped with a power-limited propulsion system and moving in a central Newtonian field. Satisfaction of the necessary and sufficient conditions for optimality of trajectories is analyzed. All possible thrust arcs and corresponding classes of the analytical solutions are classified based on the propulsion system parameters and performance index of the problem. The solutions are presented in a form convenient for applications in escape, capture, and interorbital transfer problems. Optimal guidance and neighboring optimal guidance problems are considered. It is shown that the analytic solutions can be used as reference trajectories in constructing the guidance algorithms for the maneuver problems mentioned above. An illustrative example of a spiral trajectory that terminates on a given elliptical parking orbit is discussed.
Neural network for nonsmooth pseudoconvex optimization with general convex constraints.
Bian, Wei; Ma, Litao; Qin, Sitian; Xue, Xiaoping
2018-05-01
In this paper, a one-layer recurrent neural network is proposed for solving a class of nonsmooth, pseudoconvex optimization problems with general convex constraints. Based on the smoothing method, we construct a new regularization function, which does not depend on any information about the feasible region. Thanks to the special structure of the regularization function, we prove the global existence, uniqueness and "slow solution" character of the state of the proposed neural network. Moreover, the state solution of the proposed network is proved to converge to the feasible region in finite time and subsequently to the optimal solution set of the related optimization problem. In particular, the convergence of the state to an exact optimal solution is also considered in this paper. Numerical examples with simulation results are given to show the efficiency and good characteristics of the proposed network. In addition, some preliminary theoretical analysis and an application of the proposed network to a wider class of dynamic portfolio optimization are included.
Integrated Arrival and Departure Schedule Optimization Under Uncertainty
NASA Technical Reports Server (NTRS)
Xue, Min; Zelinski, Shannon
2014-01-01
In terminal airspace, integrating arrivals and departures with shared waypoints provides the potential of improving operational efficiency by allowing direct routes when possible. Incorporating stochastic evaluation as a post-analysis process of deterministic optimization, and imposing a safety buffer in deterministic optimization, are two ways to learn and alleviate the impact of uncertainty and to avoid unexpected outcomes. This work presents a third and direct way to take uncertainty into consideration during the optimization. The impact of uncertainty was incorporated into cost evaluations when searching for the optimal solutions. The controller intervention count was computed using a heuristic model and served as another stochastic cost besides total delay. Costs under uncertainty were evaluated using Monte Carlo simulations. The Pareto fronts that contain a set of solutions were identified and the trade-off between delays and controller intervention count was shown. Solutions that shared similar delays but had different intervention counts were investigated. The results showed that optimization under uncertainty could identify compromise solutions on Pareto fronts, which is better than deterministic optimization with extra safety buffers. It helps decision-makers reduce controller intervention while achieving low delays.
ILP-based co-optimization of cut mask layout, dummy fill, and timing for sub-14nm BEOL technology
NASA Astrophysics Data System (ADS)
Han, Kwangsoo; Kahng, Andrew B.; Lee, Hyein; Wang, Lutong
2015-10-01
Self-aligned multiple patterning (SAMP), due to its low overlay error, has emerged as the leading option for 1D gridded back-end-of-line (BEOL) in sub-14nm nodes. To form actual routing patterns from a uniform "sea of wires", a cut mask is needed for line-end cutting or realization of space between routing segments. Constraints on cut shapes and minimum cut spacing result in end-of-line (EOL) extensions and non-functional (i.e. dummy fill) patterns; the resulting capacitance and timing changes must be consistent with signoff performance analyses and their impacts should be minimized. In this work, we address the co-optimization of cut mask layout, dummy fill, and design timing for sub-14nm BEOL design. Our central contribution is an optimizer based on integer linear programming (ILP) to minimize the timing impact due to EOL extensions, considering (i) minimum cut spacing arising in sub-14nm nodes; (ii) cut assignment to different cut masks (color assignment); and (iii) the eligibility to merge two unit-size cuts into a bigger cut. We also propose a heuristic approach to remove dummy fills after the ILP-based optimization by extending the usage of cut masks. Our heuristic can improve critical path performance under minimum metal density and mask density constraints. In our experiments, we study the impact of the number of cut masks, minimum cut spacing, and metal density under various constraints. Our studies of optimized cut mask solutions in these varying contexts give new insight into the tradeoff of performance and cost that is afforded by cut mask patterning technology options.
Issues for eHealth in Psychiatry: Results of an Expert Survey
Huckvale, Kit; Larsen, Mark Erik; Basu, Ashna; Batterham, Philip J; Shaw, Frances; Sendi, Shahbaz
2017-01-01
Background Technology has changed the landscape in which psychiatry operates. Effective, evidence-based treatments for mental health care are now available at the fingertips of anyone with Internet access. However, technological solutions for mental health are not necessarily sought by consumers nor recommended by clinicians. Objective The objectives of this study are to identify and discuss the barriers to introducing eHealth technology-supported interventions within mental health. Methods An interactive polling tool was used to ask “In this brave new world, what are the key issues that need to be addressed to improve mental health (using technology)?” Respondents were the multidisciplinary attendees of the “Humans and Machines: A Quest for Better Mental Health” conference, held in Sydney, Australia, in 2016. Responses were categorized into 10 key issues using team-based qualitative analysis. Results A total of 155 responses to the question were received from 66 audience members. Responses were categorized into 10 issues and ordered by importance: access to care, integration and collaboration, education and awareness, mental health stigma, data privacy, trust, understanding and assessment of mental health, government and policy, optimal design, and engagement. In this paper, each of the 10 issues is outlined, and potential solutions are discussed. Many of the issues were interrelated, having implications for other key areas identified. Conclusions As many of the issues identified directly related to barriers to care, priority should be given to addressing these issues that are common across mental health delivery. Despite new challenges raised by technology, technology-supported mental health interventions represent a tremendous opportunity to address these major concerns in a timely way and improve the receipt of effective, evidence-based therapy by those in need. PMID:28246068
Tang, Liyang
2013-04-04
The main aim of China's Health Care System Reform was to help the decision maker find the optimal solution to China's institutional problem of health care provider selection. A pilot health care provider research system was recently organized in China's health care system to efficiently collect data on this problem from various experts; the purpose of this study was to apply an optimal implementation methodology to help the decision maker translate the experts' views into optimal solutions with the support of this pilot system. After the general framework of China's institutional problem of health care provider selection was established, this study collaborated with the National Bureau of Statistics of China to commission a large-scale 2009 to 2010 national expert survey (n = 3,914), conducted through the pilot health care provider research system for the first time in China, and the analytic network process (ANP) implementation methodology was adopted to analyze the survey dataset. The market-oriented health care provider approach was the optimal solution from the doctors' point of view; the traditional government regulation-oriented approach was optimal from the point of view of pharmacists, hospital administrators, and health officials in health administration departments; and the public-private partnership (PPP) approach was optimal from the point of view of nurses, officials in medical insurance agencies, and health care researchers.
The data collected through the pilot health care provider research system in the 2009 to 2010 national expert survey can thus help the decision maker effectively translate the experts' views into optimal solutions to China's institutional problem of health care provider selection.
Determination of transport properties and optimization of lithium-ion batteries
NASA Astrophysics Data System (ADS)
Stewart, Sarah Grace
We have adapted the method of restricted diffusion to measure diffusion coefficients in lithium-battery electrolytes using Ultra-Violet-Visible (UV-Vis) absorption. The use of UV-Vis absorption reduces the likelihood of side reactions. Here we describe the measurement of the diffusion coefficient in lithium-battery electrolytic solutions. The diffusion coefficient is seen to decrease with increasing concentration according to the following: D = 3.018·10^-5 exp(-0.357c) for LiPF6 in acetonitrile and D = 2.582·10^-5 exp(-2.856c) for LiPF6 in EC:DEC (with D in cm^2/s and c in moles per liter). This technique may be useful for any liquid solution with a UV-active species of D greater than 10^-6 cm^2/s. Activity coefficients were measured in concentration-cell and melting-point-depression experiments. Results from concentration-cell experiments are presented for solutions of lithium hexafluorophosphate (LiPF6) in propylene carbonate (PC) as well as in a 1:1 by weight solution of ethylene carbonate (EC) and ethyl methyl carbonate (EMC). Heat capacity results are also presented. The thermodynamic factor of LiPF6 solutions in EC varies between ca. 1.33 and ca. 6.10 in the concentration range ca. 0.06 to 1.25 M (which appears to be a eutectic point). We show that the solutions of LiPF6 investigated are not ideal but that an assumption of ideality for these solutions may overestimate the specific energy of a lithium-ion cell by only 0.6%. The thermodynamic and transport properties that we have measured are used in a system model. We have used this model to optimize the design of an asymmetric-hybrid system. This technology attempts to bridge the gap in energy density between a battery and a supercapacitor. In this system, the positive electrode stores charge through a reversible, nonfaradaic adsorption of anions on the surface. The negative electrode is nanostructured Li4Ti5O12, which reversibly intercalates lithium.
We use the properties that we have measured in a system model. The model is compared with experimental results. The performance of this system is compared with a lithium titanate spinel/lithium iron phosphate battery. Various electrolytes are considered: 1 M LiPF6 in ethylene carbonate and dimethyl carbonate (1:2 by weight, hereafter referred to as EC:DMC), 1 M LiBF4 in gamma-butyrolactone (GBL), or 1 M LiBF4 in acetonitrile (ACN). The model is used to study the performance of these chemistries and to assist in optimization of the cells. A Ragone plot is generated for various cell designs in order to assess the ability of the chemistries to achieve the U.S. Department of Energy goals for hybrid-electric vehicles. The model is used to maximize the specific energy of the cell by optimizing the design for a fixed time of discharge. The thickness and porosity of both electrodes are varied, while holding constant a capacity ratio of one between the two electrodes, the properties of the separator, and the electrolyte. The influence of the capacity ratio is discussed. The capacity ratio can be optimized for each time of discharge. A 41% increase in power density is seen when one oversizes the iron phosphate electrode in a lithium titanate spinel/iron phosphate battery because the properties of this electrode are limiting. The optimization was performed for discharge times ranging from 3 h to 30 s in order to gauge the ability of this chemistry to be used in various applications. The optimized designs derived here can be used as a starting point for battery manufacturers and to help decrease the time to commercialization. We have further improved our model in order to simulate the hybrid-pulse-power-characterization (HPPC) test described by FreedomCAR. The simulation results presented in this section improve upon traditional Ragone plots by capturing the complexity of pulse performance during HEV operation.
Lithium-ion batteries are capable of satisfying Department of Energy goals for pulse power and energy densities in hybrid electric vehicles (HEVs). We show that a 2.7 V electrochemical double-layer capacitor (EDLC) available today is unable to meet these goals. It would be necessary to increase the intrinsic capacitance by a factor of three, or to increase the voltage window to 3.7 V. We also investigate an asymmetric hybrid supercapacitor (a lithium titanate spinel/activated carbon system). We show that this technology, which has a higher available energy density than a traditional EDLC, may obtain 13 Wh/kg (without accounting for packaging weight) and has promise for meeting the demands of an HEV. While this technology nearly meets the FreedomCAR goal when accounting for packaging weight, it falls short of the energy-density capability of battery systems including LiMn2O4, LiFePO4, and LiNi1/3Mn1/3Co1/3, which are included for comparison in Chapter 5. These three battery chemistries can approach ca. 80, 100, and 140 Wh/kg, respectively, when their designs are optimized for the FreedomCAR power-to-energy-ratio goals for an HEV. (Abstract shortened by UMI.)
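The exponential concentration dependence reported in this thesis abstract, D = A·exp(-Bc), can be recovered from concentration/diffusivity pairs by an ordinary least-squares line through (c, ln D). The sketch below synthesizes noise-free points from the quoted LiPF6-in-acetonitrile parameters rather than using measured data.

```python
import math

# Sketch: log-linear least-squares recovery of D(c) = A*exp(-B*c).
# Data are synthesized from the fit quoted in the abstract
# (A = 3.018e-5 cm^2/s, B = 0.357 L/mol); they are not measurements.

def fit_exponential(conc, D):
    """Least-squares line through (c, ln D); returns (A, B)."""
    n = len(conc)
    y = [math.log(d) for d in D]
    cbar, ybar = sum(conc) / n, sum(y) / n
    slope = (sum((c - cbar) * (v - ybar) for c, v in zip(conc, y))
             / sum((c - cbar) ** 2 for c in conc))
    return math.exp(ybar - slope * cbar), -slope

conc = [0.1, 0.25, 0.5, 0.75, 1.0]                   # mol/L
D = [3.018e-5 * math.exp(-0.357 * c) for c in conc]  # cm^2/s
A, B = fit_exponential(conc, D)
# A recovers 3.018e-5 and B recovers 0.357 (the data here are noise-free)
```

With real restricted-diffusion data the same fit applies, but the residuals then reflect measurement noise and any departure from a purely exponential concentration dependence.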
Hierarchical Solution of the Traveling Salesman Problem with Random Dyadic Tilings
NASA Astrophysics Data System (ADS)
Kalmár-Nagy, Tamás; Bak, Bendegúz Dezső
We propose a hierarchical heuristic approach for solving the Traveling Salesman Problem (TSP) in the unit square. The points are partitioned with a random dyadic tiling and clusters are formed by the points located in the same tile. Each cluster is represented by its geometrical barycenter and a “coarse” TSP solution is calculated for these barycenters. Midpoints are placed at the middle of each edge in the coarse solution. Near-optimal (or optimal) minimum tours are computed for each cluster. The tours are concatenated using the midpoints yielding a solution for the original TSP. The method is tested on random TSPs (independent, identically distributed points in the unit square) up to 10,000 points as well as on a popular benchmark problem (att532 — coordinates of 532 American cities). Our solutions are 8-13% longer than the optimal ones. We also present an optimization algorithm for the partitioning to improve our solutions. This algorithm further reduces the solution errors (by several percent using 1000 iteration steps). The numerical experiments demonstrate the viability of the approach.
Carlos, Emanuel; Kiazadeh, Asal; Deuermeier, Jonas; Branquinho, Rita; Martins, Rodrigo; Fortunato, Elvira
2018-08-24
Lately, resistive switching memories (ReRAM) have been attracting a lot of attention due to their fast operation, low power consumption, and simple fabrication process, and because they can be scaled to very small dimensions. However, most of these ReRAM are produced by physical methods, while industry now demands more simplicity, typically associated with low-cost manufacturing. As such, the ReRAMs in this work are developed from a solution-based aluminum oxide (Al2O3) using a simple combustion synthesis process. The device performance is optimized by two-stage deposition of the Al2O3 film. The resistive switching properties of the bilayer devices are reproducible with a yield of 100%. The ReRAM devices show unipolar resistive switching behavior with good endurance and retention time up to 10^5 s at 85 °C. The devices can be programmed in a multi-level cell operation mode by application of different reset voltages. Temperature analysis of various resistance states reveals a filamentary nature based on oxygen vacancies. The optimized film was stacked between ITO and indium zinc oxide, targeting a fully transparent device for applications in transparent system-on-panel technology.
Comparison of Two Multidisciplinary Optimization Strategies for Launch-Vehicle Design
NASA Technical Reports Server (NTRS)
Braun, R. D.; Powell, R. W.; Lepsch, R. A.; Stanley, D. O.; Kroo, I. M.
1995-01-01
The investigation focuses on development of a rapid multidisciplinary analysis and optimization capability for launch-vehicle design. Two multidisciplinary optimization strategies in which the analyses are integrated in different manners are implemented and evaluated for solution of a single-stage-to-orbit launch-vehicle design problem. Weights and sizing, propulsion, and trajectory issues are directly addressed in each optimization process. Additionally, the need to maintain a consistent vehicle model across the disciplines is discussed. Both solution strategies were shown to obtain similar solutions from two different starting points. These solutions suggest that a dual-fuel, single-stage-to-orbit vehicle with a dry weight of approximately 1.927 x 10^5 lb, gross liftoff weight of 2.165 x 10^6 lb, and length of 181 ft is attainable. A comparison of the two approaches demonstrates that the treatment of disciplinary coupling has a direct effect on optimization convergence and the required computational effort. In comparison with the first solution strategy, which is of the general form typically used within the launch-vehicle design community at present, the second optimization approach is shown to be 3-4 times more computationally efficient.
NASA Astrophysics Data System (ADS)
Santosa, B.; Siswanto, N.; Fiqihesa
2018-04-01
This paper proposes a discrete Particle Swarm Optimization (PSO) algorithm to solve the limited-wait hybrid flow-shop scheduling problem with multiple objectives. Flow-shop scheduling represents the condition in which several machines are arranged in series and each job must be processed on each machine in the same sequence. The objective functions are minimizing completion time (makespan), total tardiness time, and total machine idle time. Flow-shop scheduling models continually evolve to represent real production systems accurately. Since flow-shop scheduling is an NP-hard problem, metaheuristics are the most suitable solution methods. One metaheuristic algorithm is Particle Swarm Optimization (PSO), an algorithm based on the behavior of a swarm. Originally, PSO was intended to solve continuous optimization problems; since flow-shop scheduling is a discrete optimization problem, we modify PSO to fit the problem using a probability transition matrix mechanism. To handle the multiple objectives, we use Pareto optimality (MPSO). The results of MPSO are better than those of standard PSO because the MPSO solution set yields a higher probability of finding the optimal solution and is closer to the optimal solution.
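The abstract names, but does not specify, the probability-transition-matrix mechanism for discretizing PSO. The sketch below shows the general idea under invented details: each "particle" is a job permutation, and each slot of a new permutation is filled either by copying the best-known permutation (with some bias probability, playing the role of the transition matrix) or uniformly from the remaining jobs. The single-machine objective is a stand-in for the flow-shop makespan.

```python
import random

# Hedged sketch of a discrete, permutation-valued PSO-style update; the bias
# value, objective, and instance are assumptions, not the paper's.

def makespan(perm, times):
    """Toy weighted-completion objective standing in for the flow-shop makespan."""
    return sum((i + 1) * times[j] for i, j in enumerate(perm))

def sample_perm(n, best, bias, rng):
    """Sample a permutation; each slot copies best's job with probability `bias`."""
    remaining, perm = set(range(n)), []
    for pos in range(n):
        if best[pos] in remaining and rng.random() < bias:
            j = best[pos]                       # transition biased toward gbest
        else:
            j = rng.choice(sorted(remaining))   # otherwise uniform over the rest
        remaining.remove(j)
        perm.append(j)
    return perm

rng = random.Random(0)
times = [5, 3, 8, 2, 7, 1]
n = len(times)
best = list(range(n))
for _ in range(300):                            # swarm loop (one particle shown)
    cand = sample_perm(n, best, bias=0.7, rng=rng)
    if makespan(cand, times) < makespan(best, times):
        best = cand
```

A full MPSO would keep an archive of non-dominated permutations over the three objectives instead of a single `best`; this sketch keeps one objective to stay short.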
NASA Technical Reports Server (NTRS)
Brown, Aaron J.
2015-01-01
The International Space Station's (ISS) trajectory is coordinated and executed by the Trajectory Operations and Planning (TOPO) group at NASA's Johnson Space Center. TOPO group personnel routinely generate look-ahead trajectories for the ISS that incorporate translation burns needed to maintain its orbit over the next three to twelve months. The burns are modeled as in-plane, horizontal burns, and must meet operational trajectory constraints imposed by both NASA and the Russian Space Agency. In generating these trajectories, TOPO personnel must determine the number of burns to model, each burn's Time of Ignition (TIG), and magnitude (i.e. deltaV) that meet these constraints. The current process for targeting these burns is manually intensive, and does not take advantage of more modern techniques that can reduce the workload needed to find feasible burn solutions, i.e. solutions that simply meet the constraints, or provide optimal burn solutions that minimize the total deltaV while simultaneously meeting the constraints. A two-level, hybrid optimization technique is proposed to find both feasible and globally optimal burn solutions for ISS trajectory planning. For optimal solutions, the technique breaks the optimization problem into two distinct sub-problems, one for choosing the optimal number of burns and each burn's optimal TIG, and the other for computing the minimum total deltaV burn solution that satisfies the trajectory constraints. Each of the two aforementioned levels uses a different optimization algorithm to solve one of the sub-problems, giving rise to a hybrid technique. Level 2, or the outer level, uses a genetic algorithm to select the number of burns and each burn's TIG. Level 1, or the inner level, uses the burn TIGs from Level 2 in a sequential quadratic programming (SQP) algorithm to compute a minimum total deltaV burn solution subject to the trajectory constraints.
The total deltaV from Level 1 is then used as a fitness function by the genetic algorithm in Level 2 to select the number of burns and their TIGs for the next generation. In this manner, the two levels solve their respective sub-problems separately but collaboratively until a burn solution is found that globally minimizes the deltaV across the entire trajectory. Feasible solutions can also be found by simply using the SQP algorithm in Level 1 with a zero cost function. This paper discusses the formulation of the Level 1 sub-problem and the development of a prototype software tool to solve it. The Level 2 sub-problem will be discussed in a future work. Following the Level 1 formulation and solution, several look-ahead trajectory examples for the ISS are explored. In each case, the burn targeting results using the current process are compared against a feasible solution found using Level 1 in the proposed technique. Level 1 is then used to find a minimum deltaV solution given the fixed number of burns and burn TIGs. The optimal solution is compared with the previously found feasible solution to determine the deltaV (and therefore propellant) savings. The proposed technique seeks to both improve the current process for targeting ISS burns, and to add the capability to optimize ISS burns in a novel fashion. The optimal solutions found using this technique can potentially save hundreds of kilograms of propellant over the course of the ISS mission compared to feasible solutions alone. While the software tool being developed to implement this technique is specific to ISS, the concept is extensible to other long-duration, central-body orbiting missions that must perform orbit maintenance burns to meet operational trajectory constraints.
Research on vehicle routing optimization for the terminal distribution of B2C E-commerce firms
NASA Astrophysics Data System (ADS)
Zhang, Shiyun; Lu, Yapei; Li, Shasha
2018-05-01
In this paper, we established a half-open multi-objective optimization model for the vehicle routing problem of B2C (business-to-customer) e-commerce firms. To minimize both the current transport distance and the disparity between the expected shipments and the transport capacity in the next distribution, we applied the concepts of dominated and Pareto-optimal solutions to standard particle swarm optimization and proposed a MOPSO (multi-objective particle swarm optimization) algorithm to support the model. We also obtained the optimization solution of the MOPSO algorithm on data randomly generated by the system, which verified the validity of the model.
A Novel Quantum-Behaved Bat Algorithm with Mean Best Position Directed for Numerical Optimization
Zhu, Wenyong; Liu, Zijuan; Duan, Qingyan; Cao, Long
2016-01-01
This paper proposes a novel quantum-behaved bat algorithm with the direction of mean best position (QMBA). In QMBA, the position of each bat is mainly updated by the current optimal solution in the early stage of the search, while in the late stage it also depends on the mean best position, which enhances the convergence speed of the algorithm. During the search, quantum behavior of the bats is introduced, which helps the bats jump out of local optima rather than becoming trapped in them and gives them a better ability to adapt to complex environments. Meanwhile, QMBA makes good use of the statistical information of the best positions the bats have experienced to generate better-quality solutions. This approach not only inherits the quick convergence, simplicity, and easy implementation of the original bat algorithm, but also increases the diversity of the population and improves the accuracy of solutions. Twenty-four benchmark test functions are tested and compared with other variant bat algorithms for numerical optimization. The simulation results show that this approach is simple and efficient and can achieve a more accurate solution. PMID:27293424
NASA Astrophysics Data System (ADS)
Patrizio, Piera; Leduc, Sylvain; Mesfun, Sennai; Yowargana, Ping; Kraxner, Florian
2017-04-01
The mitigation of adverse environmental impacts due to climate change requires the reduction of carbon dioxide emissions, including from the U.S. energy sector, a dominant source of greenhouse-gas emissions. This is especially true for the existing fleet of coal-fired power plants, which accounts for roughly two-thirds of the U.S. energy sector's total CO2 emissions. With this aim, different carbon mitigation options have been proposed in the literature, such as increasing energy efficiency, co-firing biomass, and/or adopting bioenergy with carbon capture and storage (BECCS). However, the extent to which these solutions can be adopted depends on a suite of site-specific factors and therefore needs to be evaluated on a site-specific basis. We propose a spatially explicit approach to identify candidate coal plants for which carbon capture technologies are economically feasible under different economic and policy frameworks. The methodology relies on IIASA's techno-economic model BeWhere, which optimizes the cost of the entire BECCS supply chain, from the biomass resources to the storage of the CO2 in the nearest geological sink. The results show that biomass co-firing appears to be the most appealing economic solution for a large part of the existing U.S. coal fleet, while the adoption of CCS technologies is highly dependent on the level of CO2 prices as well as on local factors such as the type of coal-firing technology and the proximity of storage sites.
Fuel management optimization using genetic algorithms and code independence
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeChaine, M.D.; Feltus, M.A.
1994-12-31
Fuel management optimization is a hard problem for traditional optimization techniques. Loading pattern optimization is a large combinatorial problem without analytical derivative information. Therefore, methods designed for continuous functions, such as linear programming, do not always work well. Genetic algorithms (GAs) address these problems and, therefore, appear ideal for fuel management optimization. They do not require derivative information and work well with combinatorial functions. The GAs are a stochastic method based on concepts from biological genetics. They take a group of candidate solutions, called the population, and use selection, crossover, and mutation operators to create the next generation of better solutions. The selection operator is a "survival-of-the-fittest" operation and chooses the solutions for the next generation. The crossover operator is analogous to biological mating, where children inherit a mixture of traits from their parents, and the mutation operator makes small random changes to the solutions.
NASA Astrophysics Data System (ADS)
Ayadi, Omar; Felfel, Houssem; Masmoudi, Faouzi
2017-07-01
The current manufacturing environment has changed from traditional single-plant to multi-site supply chains in which multiple plants serve customer demands. In this article, a tactical multi-objective, multi-period, multi-product, multi-site supply-chain planning problem is proposed. A corresponding optimization model aiming to simultaneously minimize the total cost, maximize product quality, and maximize the customer demand satisfaction level is developed. The proposed solution approach yields a front of Pareto-optimal solutions that represents the trade-offs among the different objectives. Subsequently, the analytic hierarchy process method is applied to select the best Pareto-optimal solution according to the preferences of the decision maker. The robustness of the solutions and the proposed approach are discussed based on a sensitivity analysis and an application to a real case from the textile and apparel industry.
Optimal four-impulse rendezvous between coplanar elliptical orbits
NASA Astrophysics Data System (ADS)
Wang, JianXia; Baoyin, HeXi; Li, JunFeng; Sun, FuChun
2011-04-01
Rendezvous in circular or near-circular orbits has been investigated in great detail, while rendezvous in elliptical orbits of arbitrary eccentricity is not sufficiently explored. Among the various optimization methods proposed for fuel-optimal orbital rendezvous, Lawden's primer vector theory is favored by many researchers for its clear physical concept and simplicity of solution. Prussing applied the primer vector optimization theory to minimum-fuel, multiple-impulse, time-fixed orbital rendezvous in a near-circular orbit and achieved great success. Extending Prussing's work, this paper employs the primer vector theory to study trajectory optimization problems of elliptical-orbit rendezvous with arbitrary eccentricity. Based on linearized equations of relative motion on an elliptical reference orbit (referred to as T-H equations), the primer vector theory is used to deal with time-fixed multiple-impulse optimal rendezvous between two coplanar, coaxial elliptical orbits with arbitrarily large eccentricity. A parameter adjustment method is developed for the primer vector to satisfy Lawden's necessary condition for the optimal solution. Finally, the optimal multiple-impulse rendezvous solution, including the times, directions, and magnitudes of the impulses, is obtained by solving the two-point boundary value problem. The rendezvous error of the linearized equations is also analyzed. The simulation results confirm the analysis: the rendezvous error is small for small eccentricities and large for higher eccentricities. For better rendezvous accuracy in high-eccentricity orbits, a method combining a multiplier penalty function with the simplex search method is used for local optimization.
The simplex search method is sensitive to the initial values of the optimization variables, but the simulation results show that, with initial values supplied by the primer vector theory, the local optimization algorithm improves the rendezvous accuracy effectively and converges quickly, because the optimal results obtained by the primer vector theory are already very close to the actual optimal solution. If the initial values are taken randomly, it is difficult to converge to the optimal solution.
LLIMAS: Revolutionizing integrated modeling and analysis at MIT Lincoln Laboratory
NASA Astrophysics Data System (ADS)
Doyle, Keith B.; Stoeckel, Gerhard P.; Rey, Justin J.; Bury, Mark E.
2017-08-01
MIT Lincoln Laboratory's Integrated Modeling and Analysis Software (LLIMAS) enables the development of novel engineering solutions for advanced prototype systems through unique insights into engineering performance and interdisciplinary behavior to meet challenging size, weight, power, environmental, and performance requirements. LLIMAS is a multidisciplinary design optimization tool that wraps numerical optimization algorithms around an integrated framework of structural, thermal, optical, stray light, and computational fluid dynamics analysis capabilities. LLIMAS software is highly extensible and has developed organically across a variety of technologies including laser communications, directed energy, photometric detectors, chemical sensing, laser radar, and imaging systems. The custom software architecture leverages the capabilities of existing industry standard commercial software and supports the incorporation of internally developed tools. Recent advances in LLIMAS's Structural-Thermal-Optical Performance (STOP), aeromechanical, and aero-optical capabilities as applied to Lincoln prototypes are presented.
NASA Astrophysics Data System (ADS)
Alam, Muhammad Ashraful; Khan, M. Ryyan
2016-10-01
Bifacial tandem cells promise to reduce three fundamental losses (i.e., above-bandgap, below bandgap, and the uncollected light between panels) inherent in classical single junction photovoltaic (PV) systems. The successive filtering of light through the bandgap cascade and the requirement of current continuity make optimization of tandem cells difficult and accessible only to numerical solution through computer modeling. The challenge is even more complicated for bifacial design. In this paper, we use an elegantly simple analytical approach to show that the essential physics of optimization is intuitively obvious, and deeply insightful results can be obtained with a few lines of algebra. This powerful approach reproduces, as special cases, all of the known results of conventional and bifacial tandem cells and highlights the asymptotic efficiency gain of these technologies.
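In the spirit of the "few lines of algebra" above: for a series-connected two-junction cell, current continuity means the output current is limited by the smaller sub-cell current, so the top bandgap should split the usable photon flux between the junctions. The box-spectrum model below (uniform photon flux between 0.5 and 4 eV, voltage proportional to the sum of the bandgaps) is a deliberate toy, not a detailed-balance calculation.

```python
# Toy illustration of tandem current matching; spectrum, units, and the
# voltage model are invented for clarity, not taken from the paper.

E_MIN, E_MAX = 0.5, 4.0            # toy spectrum: uniform photon flux (a.u.)

def flux(lo, hi):
    """Photons absorbed between energies lo and hi under the box spectrum."""
    return max(0.0, min(hi, E_MAX) - max(lo, E_MIN))

def tandem_power(e_bot, e_top):
    """Series tandem: current = min of sub-cell currents; V ~ e_bot + e_top."""
    j_top = flux(e_top, E_MAX)     # top cell absorbs photons above e_top
    j_bot = flux(e_bot, e_top)     # bottom cell gets the filtered remainder
    return min(j_top, j_bot) * (e_bot + e_top)

# scan the top bandgap with the bottom gap fixed at 1.0 eV
e_bot = 1.0
best_top = max((e_bot + 0.01 * i for i in range(1, 300)),
               key=lambda e: tandem_power(e_bot, e))
# current matching (4 - e_top = e_top - 1) predicts the optimum at 2.5 eV
```

Even in this toy, the optimum sits exactly where the two sub-cell currents match, which is the intuition the analytical approach in the paper formalizes for realistic spectra and bifacial illumination.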
Phylogenetic search through partial tree mixing
2012-01-01
Background Recent advances in sequencing technology have created large data sets upon which phylogenetic inference can be performed. Current research is limited by the prohibitive time necessary to perform tree search on a reasonable number of individuals. This research develops new phylogenetic algorithms that can operate on tens of thousands of species in a reasonable amount of time through several innovative search techniques. Results When compared to popular phylogenetic search algorithms, better trees are found much more quickly for large data sets. These algorithms are incorporated in the PSODA application available at http://dna.cs.byu.edu/psoda. Conclusions The use of Partial Tree Mixing in a partition based tree space allows the algorithm to quickly converge on near optimal tree regions. These regions can then be searched in a methodical way to determine the overall optimal phylogenetic solution. PMID:23320449
Optimal and heuristic algorithms of planning of low-rise residential buildings
NASA Astrophysics Data System (ADS)
Kartak, V. M.; Marchenko, A. A.; Petunin, A. A.; Sesekin, A. N.; Fabarisova, A. I.
2017-10-01
The problem of the optimal layout of a low-rise residential building is considered. Each apartment must be no smaller than the corresponding apartment from the proposed list, all requests must be satisfied, and the excess of the total area over the total area of the apartments from the list must be minimized. The differences in area arise from the discreteness of the distances between bearing walls and a number of other technological limitations. It is shown that this problem is NP-hard. The authors built a linear-integer model, conducted its qualitative analysis, and developed a heuristic algorithm for solving problems of high dimension. A computational experiment was conducted which confirms the efficiency of the proposed approach. Practical recommendations on the use of the proposed algorithms are given.
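The source of the "excess area" in this abstract can be shown with a toy one-dimensional version of the problem. The sketch below is an invented simplification, not the authors' linear-integer model: apartments sit side by side along a corridor at a fixed depth, and bearing walls may only fall on a fixed module, so each requested area is rounded up to the next admissible width.

```python
import math

def plan_row(requests, depth, module, building_length):
    """Toy 1-D layout (hypothetical, not the paper's model): apartments in a
    row at fixed `depth`; bearing walls only on multiples of `module`, so
    each width is rounded up -- that rounding is exactly where the excess
    area comes from.  Returns (widths, excess_area) or None if the row
    does not fit in the building."""
    widths = []
    for req_area in requests:
        min_width = req_area / depth
        # small epsilon guards against float noise in the division
        widths.append(math.ceil(min_width / module - 1e-9) * module)
    if sum(widths) > building_length + 1e-9:
        return None  # infeasible: row longer than the building
    excess = sum(w * depth for w in widths) - sum(requests)
    return widths, excess
```

For a 50 m² request at 10 m depth on a 0.6 m module, the minimal admissible width is 5.4 m, giving 4 m² of unavoidable excess; the real problem couples such choices across apartments, which is what makes it NP-hard.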
Single-machine group scheduling problems with deteriorating and learning effect
NASA Astrophysics Data System (ADS)
Xingong, Zhang; Yong, Wang; Shikun, Bai
2016-07-01
The concepts of deteriorating jobs and learning effects have been individually studied in many scheduling problems. However, most studies considering deteriorating and learning effects ignore the fact that production efficiency can be increased by grouping various parts and products with similar designs and/or production processes. This phenomenon is known as 'group technology' in the literature. In this paper, a new group scheduling model with deteriorating and learning effects is proposed, where the learning effect depends not only on the job position but also on the position of the corresponding job group, while the deteriorating effect depends on the job's starting time. This paper shows that the makespan and total completion time problems remain polynomially solvable to optimality under the proposed model. In addition, a polynomial-time optimal solution is also presented to minimise the maximum lateness problem under a certain agreeable restriction.
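A schedule evaluator makes the interplay of the three effects concrete. The cost model below is a hypothetical combination chosen for illustration (power-law position-based learning at the group and job level, plus linear start-time deterioration); it is not the exact functions from the paper, but it shows how a candidate group sequence is scored.

```python
def schedule_cost(groups, setup, ga=0.0, ra=0.0, b=0.0):
    """Toy single-machine group schedule.

    groups : list of groups, each a list of base processing times, in
             processing order; `setup` is incurred before each group.
    The actual time of a job with base time p, group position g and
    within-group position r (both 1-based) is
        p * g**ga * r**ra + b * t_start
    -- a hypothetical mix of position-based learning (ga, ra < 0) and
    linear deterioration (b > 0), not the paper's exact model.
    Returns (makespan, total_completion_time)."""
    t, total = 0.0, 0.0
    for g, jobs in enumerate(groups, start=1):
        t += setup
        for r, p in enumerate(jobs, start=1):
            actual = p * (g ** ga) * (r ** ra) + b * t
            t += actual
            total += t
    return t, total
```

With a learning exponent ra < 0, later positions are cheaper, so placing longer jobs later within a group reduces both objectives, which is the intuition behind the polynomial-time sequencing rules the abstract refers to.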
Run-of-river power plants in Alpine regions: whither optimal capacity?
NASA Astrophysics Data System (ADS)
Lazzaro, Gianluca; Botter, Gianluca
2015-04-01
Hydropower is the major renewable electricity generation technology worldwide. The future expansion of this technology mostly relies on the development of small run-of-river projects, in which a fraction of the running flows is diverted from the river to a turbine for energy production. Even though small hydro inflicts a smaller impact on aquatic ecosystems and local communities compared to large dams, it cannot prevent stresses on plant, animal, and human well-being. This is especially true in mountain regions where the plant outflow is located several kilometers downstream of the intake, thereby inducing the depletion of river reaches of considerable length. Moreover, the negative cumulative effects of run-of-river systems operating along the same river threaten the ability of stream networks to supply ecological corridors for plants, invertebrates or fishes, and support biodiversity. Research in this area is severely lacking. Therefore, the prediction of the long-term impacts associated with the expansion of run-of-river projects induced by global-scale incentive policies remains highly uncertain. This contribution aims at providing objective tools to address the preliminary choice of the capacity of a run-of-river hydropower plant when the economic value of the plant and the alteration of the flow regime are simultaneously accounted for. This is done using the concepts of Pareto-optimality and Pareto-dominance, which are powerful tools suited to multi-objective optimization in the presence of conflicting goals, such as the maximization of profitability and the minimization of the hydrologic disturbance induced by the plant in the river reach between the intake and the outflow. The application to a set of case studies belonging to the Piave River basin (Italy) suggests that optimal solutions are strongly dependent on the natural flow regime at the plant intake.
While in some cases (namely, reduced streamflow variability) the optimal trade-off between economic profitability and hydrologic disturbance is well identified, in other cases (enhanced streamflow variability) multiple options and/or ranges of optimal capacities may be devised. Such alternatives offer water managers an objective basis to identify optimal allocation of resources and policy actions. Small hydro technology is likely to gain a higher social value in the next decades if the environmental and hydrologic footprint associated with the energy exploitation of surface water takes a higher priority in civil infrastructure planning.
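The Pareto-dominance machinery itself is compact. The sketch below uses invented surrogates for the two objectives (profit as total diverted flow, disturbance as the mean diverted fraction, with a fixed ecological minimum flow); these are not the paper's economic or hydrologic indices, but the non-dominated filter is the standard one.

```python
def pareto_front(options):
    """Keep options not dominated by any other.  Each option is
    (capacity, profit, disturbance); one option dominates another when its
    profit is >= and its disturbance is <= with at least one strict
    inequality."""
    front = []
    for cap, prof, dist in options:
        dominated = any(
            (p2 >= prof and d2 <= dist) and (p2 > prof or d2 < dist)
            for _, p2, d2 in options
        )
        if not dominated:
            front.append((cap, prof, dist))
    return front

def evaluate(capacity, daily_flows, min_flow=1.0):
    """Toy run-of-river model (hypothetical surrogates): divert up to
    `capacity` of each day's flow while leaving an ecological minimum.
    Profit ~ total diverted volume; disturbance ~ mean diverted fraction."""
    profit = disturbance = 0.0
    for q in daily_flows:
        diverted = max(0.0, min(capacity, q - min_flow))
        profit += diverted
        disturbance += diverted / q
    return profit, disturbance / len(daily_flows)
```

Scanning candidate capacities with `evaluate` and filtering with `pareto_front` reproduces the paper's qualitative picture: on flashy flow-duration curves several capacities survive the filter, while on flat ones the front collapses to a narrow range.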
Kazi, Dhruv S.; Greenough, P. Gregg; Madhok, Rishi; Heerboth, Aaron; Shaikh, Ahmed; Leaning, Jennifer; Balsari, Satchit
2017-01-01
Abstract Background Planning for mass gatherings often includes temporary healthcare systems to address the needs of attendees. However, paper-based record keeping has traditionally precluded the timely application of collected clinical data for epidemic surveillance or optimization of healthcare delivery. We evaluated the feasibility of harnessing ubiquitous mobile technologies for conducting disease surveillance and monitoring resource utilization at the Allahabad Kumbh Mela in India, a 55-day festival attended by over 70 million people. Methods We developed an inexpensive, tablet-based customized disease surveillance system with real-time analytic capabilities, and piloted it at five field hospitals. Results The system captured 49 131 outpatient encounters over the 3-week study period. The most common presenting complaints were musculoskeletal pain (19%), fever (17%), cough (17%), coryza (16%) and diarrhoea (5%). The majority of patients received at least one prescription. The most common prescriptions were for antimicrobials, acetaminophen and non-steroidal anti-inflammatory drugs. There was great inter-site variability in caseload with the busiest hospital seeing 650% more patients than the least busy hospital, despite identical staffing. Conclusions Mobile-based health information solutions developed with a focus on user-centred design can be successfully deployed at mass gatherings in resource-scarce settings to optimize care delivery by providing real-time access to field data. PMID:27694349
Kazi, Dhruv S; Greenough, P Gregg; Madhok, Rishi; Heerboth, Aaron; Shaikh, Ahmed; Leaning, Jennifer; Balsari, Satchit
2017-09-01
Planning for mass gatherings often includes temporary healthcare systems to address the needs of attendees. However, paper-based record keeping has traditionally precluded the timely application of collected clinical data for epidemic surveillance or optimization of healthcare delivery. We evaluated the feasibility of harnessing ubiquitous mobile technologies for conducting disease surveillance and monitoring resource utilization at the Allahabad Kumbh Mela in India, a 55-day festival attended by over 70 million people. We developed an inexpensive, tablet-based customized disease surveillance system with real-time analytic capabilities, and piloted it at five field hospitals. The system captured 49 131 outpatient encounters over the 3-week study period. The most common presenting complaints were musculoskeletal pain (19%), fever (17%), cough (17%), coryza (16%) and diarrhoea (5%). The majority of patients received at least one prescription. The most common prescriptions were for antimicrobials, acetaminophen and non-steroidal anti-inflammatory drugs. There was great inter-site variability in caseload with the busiest hospital seeing 650% more patients than the least busy hospital, despite identical staffing. Mobile-based health information solutions developed with a focus on user-centred design can be successfully deployed at mass gatherings in resource-scarce settings to optimize care delivery by providing real-time access to field data. © The Author 2016. Published by Oxford University Press on behalf of Faculty of Public Health.
Combined optimization model for sustainable energization strategy
NASA Astrophysics Data System (ADS)
Abtew, Mohammed Seid
Access to energy is a foundation to establish a positive impact on multiple aspects of human development. Both developed and developing countries have a common concern of achieving a sustainable energy supply to fuel economic growth and improve the quality of life with minimal environmental impacts. The Least Developed Countries (LDCs), however, have different economic, social, and energy systems. Prevalence of power outages, lack of access to electricity, structural dissimilarity between rural and urban regions, and traditional fuel dominance for cooking, with the resultant health and environmental hazards, are some of the distinguishing characteristics of these nations. Most energy planning models have been designed for developed countries' socio-economic demographics and have missed the opportunity to address special features of the poor countries. An improved mixed-integer programming energy-source optimization model is developed to address limitations associated with using current energy optimization models for LDCs, to tackle the development of sustainable energization strategies, and to ensure diversification and risk management provisions in the selected energy mix. The model predicted a shift from a traditional-fuel-reliant and weather-vulnerable energy source mix to a least-cost and reliable portfolio of modern clean energy sources, a climb up the energy ladder, and multifaceted economic, social, and environmental benefits. At the same time, it represented a transition strategy that evolves to increasingly cleaner energy technologies with growth, as opposed to an expensive solution that leapfrogs immediately to the cleanest possible, overreaching technologies.
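The core of such a model is a constrained least-cost selection over discrete source capacities. The toy below uses exhaustive search over integer dispatch levels in place of a real mixed-integer solver, and all the source data are invented; it only illustrates the shape of the decision (meet demand at minimum cost under an emissions cap).

```python
from itertools import product

def cheapest_mix(sources, demand, emission_cap):
    """Exhaustive search over integer dispatch levels -- a toy stand-in for
    the abstract's mixed-integer programming model, fine for a handful of
    sources but not for a real planning problem.
    sources: {name: (max_units, cost_per_unit, emissions_per_unit)}."""
    names = list(sources)
    best = None
    for levels in product(*[range(sources[n][0] + 1) for n in names]):
        supply = sum(levels)
        cost = sum(l * sources[n][1] for l, n in zip(levels, names))
        emis = sum(l * sources[n][2] for l, n in zip(levels, names))
        if supply >= demand and emis <= emission_cap:
            if best is None or cost < best[0]:
                best = (cost, dict(zip(names, levels)))
    return best  # (cost, dispatch) or None if infeasible
```

Tightening the emission cap prices dirty-but-cheap sources out of the mix, which is the mechanism behind the "climb up the energy ladder" the abstract describes.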
Nebane, N Miranda; Coric, Tatjana; McKellip, Sara; Woods, LaKeisha; Sosa, Melinda; Rasmussen, Lynn; Bjornsti, Mary-Ann; White, E Lucile
2016-02-01
The development of acoustic droplet ejection (ADE) technology has resulted in many positive changes in the operations of a high-throughput screening (HTS) laboratory. Originally, this liquid transfer technology was used simply to transfer DMSO solutions of compounds. With the introduction of Labcyte's Echo 555, which has aqueous dispense capability, the application of this technology has been expanded beyond its original use. This includes the transfer of many biological reagents solubilized in aqueous buffers, including siRNAs. The Echo 555 is ideal for siRNA dispensing because it is accurate at low volumes and a step-down dilution is not necessary. The potential for liquid carryover and cross-contamination is eliminated, as no tips are needed. Herein, we describe the siRNA screening platform at Southern Research's HTS Center using the ADE technology. With this technology, an siRNA library can be dispensed weeks or even months in advance of the assay itself. The protocol has been optimized to achieve assay parameters comparable to small-molecule screening parameters and exceeding the norms reported for genome-wide siRNA screens. © 2015 Society for Laboratory Automation and Screening.
Neighboring Optimal Aircraft Guidance in a General Wind Environment
NASA Technical Reports Server (NTRS)
Jardin, Matthew R. (Inventor)
2003-01-01
Method and system for determining an optimal route for an aircraft moving between first and second waypoints in a general wind environment. A selected first wind environment is analyzed for which a nominal solution can be determined. A second wind environment is then incorporated; and a neighboring optimal control (NOC) analysis is performed to estimate an optimal route for the second wind environment. In particular examples with flight distances of 2500 and 6000 nautical miles in the presence of constant or piecewise linearly varying winds, the difference in flight time between a nominal solution and an optimal solution is 3.4 to 5 percent. Constant or variable winds and aircraft speeds can be used. Updated second wind environment information can be provided and used to obtain an updated optimal route.
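The "nominal solution" that a neighboring-optimal scheme perturbs can be illustrated with the classic wind triangle for a single leg in constant wind. This is a standard textbook computation, not the patent's NOC method; the numbers in the usage note are invented.

```python
import math

def leg_time(distance, airspeed, course_deg, wind_speed, wind_to_deg):
    """Time to fly a leg while tracking `course_deg` over the ground in a
    constant wind (the classic wind triangle -- the kind of nominal
    solution a neighboring-optimal correction would then perturb).
    `wind_to_deg` is the direction the wind blows TOWARD, in degrees."""
    course = math.radians(course_deg)
    wind = math.radians(wind_to_deg)
    cross = wind_speed * math.sin(wind - course)   # crosswind component
    along = wind_speed * math.cos(wind - course)   # tailwind component
    if abs(cross) > airspeed:
        raise ValueError("wind too strong to hold course")
    wca = math.asin(cross / airspeed)              # wind-correction angle
    ground_speed = airspeed * math.cos(wca) + along
    return distance / ground_speed
```

For a hypothetical 2500 nmi leg at 500 kt, a 50 kt pure crosswind stretches the 5.00 h still-air time to about 5.03 h (the crab angle eats into along-track speed), a few-percent effect of the same order as the 3.4 to 5 percent flight-time differences quoted above.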
Generalized bipartite quantum state discrimination problems with sequential measurements
NASA Astrophysics Data System (ADS)
Nakahira, Kenji; Kato, Kentaro; Usuda, Tsuyoshi Sasaki
2018-02-01
We investigate an optimization problem of finding quantum sequential measurements, which forms a wide class of state discrimination problems with the restriction that only local operations and one-way classical communication are allowed. Sequential measurements from Alice to Bob on a bipartite system are considered. Using the fact that the optimization problem can be formulated as a problem with only Alice's measurement and is convex programming, we derive its dual problem and necessary and sufficient conditions for an optimal solution. Our results are applicable to various practical optimization criteria, including the Bayes criterion, the Neyman-Pearson criterion, and the minimax criterion. In the setting of the problem of finding an optimal global measurement, its dual problem and necessary and sufficient conditions for an optimal solution have been widely used to obtain analytical and numerical expressions for optimal solutions. Similarly, our results are useful to obtain analytical and numerical expressions for optimal sequential measurements. Examples in which our results can be used to obtain an analytical expression for an optimal sequential measurement are provided.
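For context, the unrestricted global-measurement optimum that sequential (LOCC) strategies are benchmarked against has a closed form for two pure states: the Helstrom bound. The sketch below computes it for real amplitude vectors; it illustrates the Bayes-criterion baseline only and is not the paper's sequential-measurement construction.

```python
import math

def helstrom_error(p0, psi0, p1, psi1):
    """Minimum error probability for discriminating two PURE states with an
    optimal GLOBAL measurement (Helstrom bound) -- the benchmark that
    restricted strategies, such as sequential measurements, are compared
    against.  psi0, psi1: real amplitude vectors, assumed normalized;
    p0, p1: prior probabilities with p0 + p1 = 1."""
    overlap = sum(a * b for a, b in zip(psi0, psi1))
    return 0.5 * (1.0 - math.sqrt(1.0 - 4.0 * p0 * p1 * overlap ** 2))
```

Orthogonal states are discriminated perfectly, identical states no better than guessing the likelier prior, and any sequential-measurement scheme of the kind studied above can only match or exceed this error.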
COMETBOARDS Can Optimize the Performance of a Wave-Rotor-Topped Gas Turbine Engine
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.
1997-01-01
A wave rotor, which acts as a high-technology topping spool in gas turbine engines, can increase the effective pressure ratio as well as the turbine inlet temperature in such engines. The wave rotor topping, in other words, may significantly enhance engine performance by increasing shaft horsepower while reducing specific fuel consumption. This performance enhancement requires optimum selection of the wave rotor's adjustable parameters for speed, surge margin, and temperature constraints specified on different engine components. To examine the benefit of the wave rotor concept in engine design, researchers soft coupled NASA Lewis Research Center's multidisciplinary optimization tool COMETBOARDS and the NASA Engine Performance Program (NEPP) analyzer. The COMETBOARDS-NEPP combined design tool has been successfully used to optimize wave-rotor-topped engines. For illustration, the design of a subsonic gas turbine wave-rotor-enhanced engine with four ports for 47 mission points (which are specified by Mach number, altitude, and power-setting combinations) is considered. The engine performance analysis, constraints, and objective formulations were carried out through NEPP, and COMETBOARDS was used for the design optimization. So that the benefits that accrue from wave rotor enhancement could be examined, most baseline variables and constraints were declared to be passive, whereas important parameters directly associated with the wave rotor were considered to be active for the design optimization. The engine thrust was considered as the merit function. The wave rotor engine design, which became a sequence of 47 optimization subproblems, was solved successfully by using a cascade strategy available in COMETBOARDS. The graph depicts the optimum COMETBOARDS solutions for the 47 mission points, which were normalized with respect to standard results.
As shown, the combined tool produced higher thrust for all mission points than did the other solution, with maximum benefits around mission points 11, 25, and 31. Such improvements can become critical, especially when engines are sized for these specific mission points.
Improving spacecraft design using a multidisciplinary design optimization methodology
NASA Astrophysics Data System (ADS)
Mosher, Todd Jon
2000-10-01
Spacecraft design has gone from maximizing performance under technology constraints to minimizing cost under performance constraints. This is characteristic of the "faster, better, cheaper" movement that has emerged within NASA. Currently spacecraft are "optimized" manually through a tool-assisted evaluation of a limited set of design alternatives. With this approach there is no guarantee that a systems-level focus will be taken and "feasibility" rather than "optimality" is commonly all that is achieved. To improve spacecraft design in the "faster, better, cheaper" era, a new approach using multidisciplinary design optimization (MDO) is proposed. Using MDO methods brings structure to conceptual spacecraft design by casting a spacecraft design problem into an optimization framework. Then, through the construction of a model that captures design and cost, this approach facilitates a quicker and more straightforward option synthesis. The final step is to automatically search the design space. As computer processor speed continues to increase, enumeration of all combinations, while not elegant, is one method that is straightforward to perform. As an alternative to enumeration, genetic algorithms find solutions by evaluating far fewer candidate designs, though with some limitations. Both methods increase the likelihood of finding an optimal design, or at least the most promising area of the design space. This spacecraft design methodology using MDO is demonstrated on three examples. A retrospective test for validation is performed using the Near Earth Asteroid Rendezvous (NEAR) spacecraft design. For the second example, the premise that aerobraking was needed to minimize mission cost and was mission enabling for the Mars Global Surveyor (MGS) mission is challenged. While one might expect no feasible design space for an MGS mission without aerobraking, a counterintuitive result is discovered.
Several design options that don't use aerobraking are feasible and cost effective. The third example is an original commercial lunar mission entitled Eagle-eye. This example shows how an MDO approach is applied to an original mission with a larger feasible design space. It also incorporates a simplified business case analysis.
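The "enumeration of all combinations" strategy mentioned above is easy to sketch. The subsystem options and their (cost, performance) scores below are entirely invented; the point is only the search pattern: score every combination, keep the cheapest one that clears the performance floor.

```python
from itertools import product

# Hypothetical discrete design space: each subsystem offers a few options
# scored as (cost, performance).  All numbers are invented for illustration.
OPTIONS = {
    "propulsion": {"chemical": (30, 5), "electric": (45, 8)},
    "power":      {"small_array": (10, 3), "large_array": (22, 7)},
    "structure":  {"aluminum": (12, 4), "composite": (25, 6)},
}

def enumerate_designs(options, min_performance):
    """Brute-force MDO search: cheapest design meeting a performance floor.
    Exhaustive, so only viable for small discrete spaces -- which is the
    trade-off against genetic algorithms noted in the text."""
    best = None
    keys = list(options)
    for choice in product(*[options[k] for k in keys]):
        cost = sum(options[k][c][0] for k, c in zip(keys, choice))
        perf = sum(options[k][c][1] for k, c in zip(keys, choice))
        if perf >= min_performance and (best is None or cost < best[0]):
            best = (cost, dict(zip(keys, choice)))
    return best  # (cost, design) or None if no design meets the floor
```

A genetic algorithm would replace the full product loop with a sampled population, trading the guarantee of optimality for far fewer evaluations.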
Optimal moving grids for time-dependent partial differential equations
NASA Technical Reports Server (NTRS)
Wathen, A. J.
1989-01-01
Various adaptive moving grid techniques for the numerical solution of time-dependent partial differential equations were proposed. The precise criterion for grid motion varies, but most techniques will attempt to give grids on which the solution of the partial differential equation can be well represented. Moving grids are investigated on which the solutions of the linear heat conduction and viscous Burgers' equation in one space dimension are optimally approximated. Precisely, the results of numerical calculations of optimal moving grids for piecewise linear finite element approximation of partial differential equation solutions in the least-squares norm are reported.
Optimal moving grids for time-dependent partial differential equations
NASA Technical Reports Server (NTRS)
Wathen, A. J.
1992-01-01
Various adaptive moving grid techniques for the numerical solution of time-dependent partial differential equations were proposed. The precise criterion for grid motion varies, but most techniques will attempt to give grids on which the solution of the partial differential equation can be well represented. Moving grids are investigated on which the solutions of the linear heat conduction and viscous Burgers' equation in one space dimension are optimally approximated. Precisely, the results of numerical calculations of optimal moving grids for piecewise linear finite element approximation of PDE solutions in the least-squares norm are reported.
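A one-node toy makes the underlying optimization concrete: place a single interior grid node so that the piecewise-linear interpolant of a function is best in the least-squares norm. This static, single-function sketch is an invented illustration of the grid-optimality criterion, not the paper's time-dependent finite element computation.

```python
def pl_l2_error_sq(f, nodes, samples_per_cell=200):
    """Squared L2 error of the piecewise-linear interpolant of f on the
    given grid, by midpoint-rule quadrature on each cell."""
    total = 0.0
    for a, b in zip(nodes, nodes[1:]):
        fa, fb = f(a), f(b)
        h = (b - a) / samples_per_cell
        for i in range(samples_per_cell):
            x = a + (i + 0.5) * h
            lin = fa + (fb - fa) * (x - a) / (b - a)
            total += (f(x) - lin) ** 2 * h
    return total

def best_interior_node(f, lo=0.0, hi=1.0, steps=99):
    """Grid-search the single moving node that best represents f with a
    3-node piecewise-linear approximant -- a one-node toy version of the
    optimal-moving-grid idea."""
    candidates = [lo + (hi - lo) * k / (steps + 1) for k in range(1, steps + 1)]
    return min(candidates, key=lambda x: pl_l2_error_sq(f, [lo, x, hi]))
```

For f(x) = x² the squared error works out to (x⁵ + (1-x)⁵)/30, minimized at the midpoint, since constant curvature calls for a uniform grid; functions with concentrated curvature pull the node toward the steep region, which is exactly what a moving-grid method exploits.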
NASA Astrophysics Data System (ADS)
Lee, Karen; Lacombe, Y.; Cheluget, E.
2008-07-01
The Advanced SCLAIRTECH™ Technology process is used to manufacture Linear Low Density Polyethylene using solution polymerization. In this process ethylene is polymerized in an inert solvent, which is subsequently evaporated and recycled. The reactor effluent in the process is a polymer solution containing the polyethylene product, which is separated from the solvent and unconverted ethylene/co-monomer before being extruded and pelletized. The design of unit operations in this process requires a detailed understanding of the thermophysical properties, phase behaviour and rheology of polymer containing streams at high temperature and pressure, and over a wide range of composition. This paper describes a device used to thermo-rheologically characterize polymer solutions under conditions prevailing in polymerization reactors, downstream heat exchangers and attendant phase separation vessels. The downstream processing of the Advanced SCLAIRTECH™ Technology reactor effluent occurs at temperatures and pressures near the critical point of the solvent and co-monomer mixture. In addition, the process trajectory encompasses regions of liquid-liquid and liquid-liquid-vapour co-existence, which are demarcated by a `cloud point' curve. Knowing the location of this phase boundary is essential for the design of downstream devolatilization processes and for optimizing operating conditions in existing plants. In addition, accurate solution rheology data are required for reliable equipment sizing and design. At NOVA Chemicals, a robust high-temperature and high-pressure-capable version of the Multi-Pass Rheometer (MPR) is used to provide data on solution rheology and phase boundary location. This sophisticated piece of equipment is used to quantify the effects of solvent types, comonomer, and free ethylene concentration on the properties of the reactor effluent. 
An example of the experimental methodology to characterize a polyethylene solution with hexane solvent, and the ethylene dosing technique developed for the MPR will be described. ™Advanced SCLAIRTECH is a trademark of NOVA Chemicals.
New numerical methods for open-loop and feedback solutions to dynamic optimization problems
NASA Astrophysics Data System (ADS)
Ghosh, Pradipto
The topic of the first part of this research is trajectory optimization of dynamical systems via computational swarm intelligence. Particle swarm optimization is a nature-inspired heuristic search method that relies on a group of potential solutions to explore the fitness landscape. Conceptually, each particle in the swarm uses its own memory as well as the knowledge accumulated by the entire swarm to iteratively converge on an optimal or near-optimal solution. It is relatively straightforward to implement and unlike gradient-based solvers, does not require an initial guess or continuity in the problem definition. Although particle swarm optimization has been successfully employed in solving static optimization problems, its application in dynamic optimization, as posed in optimal control theory, is still relatively new. In the first half of this thesis particle swarm optimization is used to generate near-optimal solutions to several nontrivial trajectory optimization problems including thrust programming for minimum fuel, multi-burn spacecraft orbit transfer, and computing minimum-time rest-to-rest trajectories for a robotic manipulator. A distinct feature of the particle swarm optimization implementation in this work is the runtime selection of the optimal solution structure. Optimal trajectories are generated by solving instances of constrained nonlinear mixed-integer programming problems with the swarming technique. For each solved optimal programming problem, the particle swarm optimization result is compared with a nearly exact solution found via a direct method using nonlinear programming. Numerical experiments indicate that swarm search can locate solutions to very great accuracy. The second half of this research develops a new extremal-field approach for synthesizing nearly optimal feedback controllers for optimal control and two-player pursuit-evasion games described by general nonlinear differential equations. 
A notable revelation from this development is that the resulting control law has an algebraic closed-form structure. The proposed method uses an optimal spatial statistical predictor called universal kriging to construct the surrogate model of a feedback controller, which is capable of quickly predicting an optimal control estimate based on current state (and time) information. With universal kriging, an approximation to the optimal feedback map is computed by conceptualizing a set of state-control samples from pre-computed extremals to be a particular realization of a jointly Gaussian spatial process. Feedback policies are computed for a variety of example dynamic optimization problems in order to evaluate the effectiveness of this methodology. This feedback synthesis approach is found to combine good numerical accuracy with low computational overhead, making it a suitable candidate for real-time applications. Particle swarm optimization and universal kriging are combined for a capstone example: a near-optimal, near-admissible, full-state feedback control law is computed and tested for the heat-load-limited atmospheric-turn guidance of an aeroassisted transfer vehicle. The performance of this explicit guidance scheme is found to be very promising; initial errors in atmospheric entry due to simulated thruster misfirings are found to be accurately corrected while closely respecting the algebraic state-inequality constraint.
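The particle swarm machinery described above, in its textbook global-best form, fits in a few lines. This sketch is the standard continuous gbest variant on a test function, not the thesis's mixed-integer trajectory formulation; the coefficient values are common defaults, not the thesis's settings.

```python
import random

def pso(f, dim=2, bounds=(-5.0, 5.0), n_particles=30, iters=200, seed=1):
    """Minimal global-best particle swarm optimizer.  No gradients and no
    initial guess: each particle blends inertia, its own best point, and
    the swarm's best point, as described in the text."""
    rng = random.Random(seed)
    lo, hi = bounds
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia and attraction coefficients (common defaults)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - x[d])
                            + c2 * rng.random() * (gbest[d] - x[d]))
                x[d] = min(hi, max(lo, x[d] + vs[i][d]))  # clamp to bounds
            fx = f(x)
            if fx < pval[i]:
                pbest[i], pval[i] = x[:], fx
                if fx < gval:
                    gbest, gval = x[:], fx
    return gbest, gval
```

On a smooth test function such as the sphere, the swarm collapses onto the minimum within a couple hundred iterations; the thesis layers runtime structure selection and mixed-integer handling on top of this core loop.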
NASA Astrophysics Data System (ADS)
Jorris, Timothy R.
2007-12-01
To support the Air Force's Global Reach concept, a Common Aero Vehicle is being designed to support the Global Strike mission. "Waypoints" are specified for reconnaissance or multiple payload deployments and "no-fly zones" are specified for geopolitical restrictions or threat avoidance. Due to time critical targets and multiple scenario analysis, an autonomous solution is preferred over a time-intensive, manually iterative one. Thus, a real-time or near real-time autonomous trajectory optimization technique is presented to minimize the flight time, satisfy terminal and intermediate constraints, and remain within the specified vehicle heating and control limitations. This research uses the Hypersonic Cruise Vehicle (HCV) as a simplified two-dimensional platform to compare multiple solution techniques. The solution techniques include a unique geometric approach developed herein, a derived analytical dynamic optimization technique, and a rapidly emerging collocation numerical approach. This up-and-coming numerical technique is a direct solution method involving discretization then dualization, with pseudospectral methods and nonlinear programming used to converge to the optimal solution. This numerical approach is applied to the Common Aero Vehicle (CAV) as the test platform for the full three-dimensional reentry trajectory optimization problem. The culmination of this research is the verification of the optimality of this proposed numerical technique, as shown for both the two-dimensional and three-dimensional models. Additionally, user implementation strategies are presented to improve accuracy and enhance solution convergence. 
Thus, the contributions of this research are the geometric approach, the user implementation strategies, and the determination and verification of a numerical solution technique for the optimal reentry trajectory problem that minimizes time to target while satisfying vehicle dynamics and control limitations, and heating, waypoint, and no-fly zone constraints.
Liu, Lu; Masfary, Osama; Antonopoulos, Nick
2012-01-01
The increasing trend of electrical consumption within data centres is a growing concern for business owners, as it is quickly becoming a large fraction of the total cost of ownership. Ultra-small sensors could be deployed within a data centre to monitor environmental factors, lower electrical costs, and improve energy efficiency. Since servers and air conditioners represent the top users of electrical power in the data centre, this research sets out to explore methods from each subsystem of the data centre as part of an overall energy-efficient solution. In this paper, we investigate the current trends of Green IT awareness and how the deployment of small environmental sensors and site infrastructure equipment optimization techniques can offer a solution to a global issue by reducing carbon emissions. PMID:22778660
Properties of light reflected from road signs in active imaging.
Halstuch, Aviran; Yitzhaky, Yitzhak
2008-08-01
Night vision systems in vehicles are a new emerging technology. A crucial problem in active (laser-based) systems is distortion of images by saturation and blooming due to strong retroreflections from road signs. We quantify this phenomenon. We measure the Mueller matrices and the polarization state of the reflected light from three different types of road sign commonly used. Measurements of the reflected intensity are also taken with respect to the angle of reflection. We find that different types of sign have different reflection properties. It is concluded that the optimal solution for attenuating the retroreflected intensity is using a linear polarized light source and a linear polarizer with perpendicular orientation (with regard to the source) at the detector. Unfortunately, while this solution performs well for two types of road sign, it is less efficient for the third sign type.
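The crossed-polarizer remedy in this abstract can be checked with Stokes-Mueller arithmetic. The sketch below uses the standard Mueller matrix of an ideal linear polarizer and two idealized reflector models (polarization-preserving and fully depolarizing); these idealizations are illustrative stand-ins, not the measured matrices of the three sign types.

```python
import math

def polarizer(theta_deg):
    """Mueller matrix of an ideal linear polarizer at angle theta."""
    c = math.cos(2 * math.radians(theta_deg))
    s = math.sin(2 * math.radians(theta_deg))
    return [[0.5 * v for v in row] for row in
            [[1, c, s, 0], [c, c * c, c * s, 0], [s, c * s, s * s, 0], [0, 0, 0, 0]]]

# Idealized reflectors (illustrative, not measured sign data):
MIRROR = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]       # preserves linear polarization
DEPOLARIZER = [[1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]    # scrambles polarization

def apply(m, s):
    return [sum(m[i][j] * s[j] for j in range(4)) for i in range(4)]

def detected_intensity(reflector, source=(1.0, 0.0, 0.0, 0.0)):
    """Polarized illumination (polarizer at 0 deg), retroreflection, then a
    crossed analyzer at 90 deg; returns the Stokes intensity at the detector."""
    s = apply(polarizer(0), list(source))
    s = apply(reflector, s)
    return apply(polarizer(90), s)[0]
```

The crossed analyzer extinguishes a polarization-preserving retroreflection but passes a quarter of the input for a depolarizing one, mirroring the abstract's finding that the scheme works well for some sign types and poorly for others.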
Massively Parallel Dantzig-Wolfe Decomposition Applied to Traffic Flow Scheduling
NASA Technical Reports Server (NTRS)
Rios, Joseph Lucio; Ross, Kevin
2009-01-01
Optimal scheduling of air traffic over the entire National Airspace System is a computationally difficult task. To speed computation, Dantzig-Wolfe decomposition is applied to a known linear integer programming approach for assigning delays to flights. The optimization model is proven to have the block-angular structure necessary for Dantzig-Wolfe decomposition. The subproblems for this decomposition are solved in parallel via independent computation threads. Experimental evidence suggests that as the number of subproblems/threads increases (and their respective sizes decrease), the solution quality, convergence, and runtime improve. A demonstration of this is provided by using one flight per subproblem, which is the finest possible decomposition. This results in thousands of subproblems and associated computation threads. This massively parallel approach is compared to one with few threads and to standard (non-decomposed) approaches in terms of solution quality and runtime. Since this method generally provides a non-integral (relaxed) solution to the original optimization problem, two heuristics are developed to generate an integral solution. Dantzig-Wolfe followed by these heuristics can provide a near-optimal (sometimes optimal) solution to the original problem hundreds of times faster than standard (non-decomposed) approaches. In addition, when massive decomposition is employed, the solution is shown to be more likely integral, which obviates the need for an integerization step. These results indicate that nationwide, real-time, high fidelity, optimal traffic flow scheduling is achievable for (at least) 3 hour planning horizons.
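The decomposition idea, with independent per-flight subproblems coordinated by a master, can be sketched with a much simpler price mechanism. The toy below is a Lagrangian-relaxation/price-coordination sketch, NOT the paper's Dantzig-Wolfe column generation: each flight independently picks its cheapest slot given congestion prices, and the master raises prices on overloaded slots. All flight data are invented.

```python
def schedule_by_prices(flights, capacity, n_slots, step=0.5, max_iter=100):
    """Toy price-coordination scheduler in the spirit of decomposition.
    flights: {name: delay_cost_per_slot}; every flight prefers slot 0.
    The per-flight subproblems decouple exactly as in the paper's scheme
    (and could run in parallel threads); the coordination rule here is a
    simple subgradient price update, not column generation."""
    prices = [0.0] * n_slots
    for _ in range(max_iter):
        # Independent subproblems: each flight minimizes delay cost + price.
        choice = {
            name: min(range(n_slots), key=lambda s: cost * s + prices[s])
            for name, cost in flights.items()
        }
        loads = [sum(1 for s in choice.values() if s == slot) for slot in range(n_slots)]
        if all(l <= capacity for l in loads):
            return choice           # feasible: prices have separated the flights
        for slot in range(n_slots): # master step: price up overloaded slots
            if loads[slot] > capacity:
                prices[slot] += step * (loads[slot] - capacity)
    return None
```

With one slot of capacity and two flights, the price on the contested slot rises until the cheaper-to-delay flight moves, so the expensive-to-delay flight keeps its preferred slot, the price-directed analogue of the delay assignment the integer program computes.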
Rangarajan, Srinivas; Maravelias, Christos T.; Mavrikakis, Manos
2017-11-09
Here, we present a general optimization-based framework for (i) ab initio and experimental data driven mechanistic modeling and (ii) optimal catalyst design of heterogeneous catalytic systems. Both cases are formulated as a nonlinear optimization problem that is subject to a mean-field microkinetic model and thermodynamic consistency requirements as constraints, for which we seek sparse solutions through a ridge (L2 regularization) penalty. The solution procedure involves an iterative sequence of forward simulation of the differential algebraic equations pertaining to the microkinetic model using a numerical tool capable of handling stiff systems, sensitivity calculations using linear algebra, and gradient-based nonlinear optimization. A multistart approach is used to explore the solution space, and a hierarchical clustering procedure is implemented for statistically classifying potentially competing solutions. An example of methanol synthesis through hydrogenation of CO and CO2 on a Cu-based catalyst is used to illustrate the framework. The framework is fast, is robust, and can be used to comprehensively explore the model solution and design space of any heterogeneous catalytic system.
Deeper and sparser nets are optimal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beiu, V.; Makaruk, H.E.
1998-03-01
The starting points of this paper are two size-optimal solutions: (1) one for implementing arbitrary Boolean functions (Horne and Hush, 1994); and (2) another one for implementing certain sub-classes of Boolean functions (Red'kin, 1970). Because VLSI implementations do not cope well with highly interconnected nets--the area of a chip grows with the cube of the fan-in (Hammerstrom, 1988)--this paper analyzes the influence of limited fan-in on the size optimality of the two solutions mentioned. First, the authors extend a result from Horne and Hush (1994) valid for fan-in Δ = 2 to arbitrary fan-in. Second, they prove that size-optimal solutions are obtained for small constant fan-in for both constructions, while relative minimum size solutions can be obtained for fan-ins strictly lower than linear. These results are in agreement with similar ones proving that for small constant fan-ins (Δ = 6...9) there exist VLSI-optimal (i.e., minimizing AT²) solutions (Beiu, 1997a), while there are similar small constants relating to the capacity of processing information (Miller, 1956).
Optimization Under Uncertainty for Wake Steering Strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quick, Julian; Annoni, Jennifer; King, Ryan N
Offsetting turbines' yaw orientations from the incoming wind is a powerful tool that may be leveraged to reduce undesirable wake effects on downstream turbines. First, we examine a simple two-turbine case to gain intuition as to how inflow-direction uncertainty affects the optimal solution. The turbines are modeled with unidirectional inflow such that one turbine directly wakes the other, using ten rotor diameters of spacing. We perform optimization under uncertainty (OUU) via a parameter sweep of the front turbine. The OUU solution generally prefers less steering. We then perform this optimization for a 60-turbine wind farm with unidirectional inflow, varying the degree of inflow uncertainty and approaching this OUU problem by nesting a polynomial chaos expansion uncertainty quantification routine within an outer optimization. We examine how different levels of uncertainty in the inflow direction affect the ratio of the expected values of the deterministic and OUU solutions for steering strategies in the large wind farm, assuming the directional uncertainty used to reach the OUU solution (this ratio is defined as the value of the stochastic solution, or VSS).
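The effect of inflow uncertainty can be illustrated with a toy Monte Carlo version of the OUU loop (not the paper's polynomial chaos approach); the wake model and every constant below are invented:

```python
import random, math

def farm_power(yaw, inflow):
    """Toy two-turbine model: front-turbine power falls with yaw
    misalignment; the rear wake loss shrinks as the wake is steered away."""
    eff_yaw = yaw - inflow                        # misalignment seen by the rotor
    front = math.cos(math.radians(eff_yaw)) ** 3
    rear = 1.0 - 0.4 * math.exp(-(eff_yaw / 10.0) ** 2)
    return front + rear

def expected_power(yaw, sigma, n=20000):
    # Re-seeding gives common random numbers across yaw values, which
    # stabilizes the comparison between candidate yaw settings.
    rng = random.Random(0)
    return sum(farm_power(yaw, rng.gauss(0.0, sigma)) for _ in range(n)) / n

yaws = list(range(31))
det = max(yaws, key=lambda y: farm_power(y, 0.0))       # deterministic optimum
ouu = max(yaws, key=lambda y: expected_power(y, 15.0))  # optimum under uncertainty
print(det, ouu)   # with uncertain inflow, the optimum prefers less steering
```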
Point Positioning Service for Natural Hazard Monitoring
NASA Astrophysics Data System (ADS)
Bar-Sever, Y. E.
2014-12-01
In an effort to improve natural hazard monitoring, JPL has invested in updating and enlarging its global real-time GNSS tracking network, and has launched a unique service - real-time precise positioning for natural hazard monitoring, entitled GREAT Alert (GNSS Real-Time Earthquake and Tsunami Alert). GREAT Alert leverages the full technological and operational capability of JPL's Global Differential GPS System [www.gdgps.net] to offer owners of real-time dual-frequency GNSS receivers: sub-5 cm (3D RMS) real-time absolute positioning in ITRF08, regardless of location; under 5 seconds turnaround time; full covariance information; and estimates of ancillary parameters (such as troposphere), optionally provided. This service enables GNSS network operators to have instant access to the most accurate and reliable real-time positioning solutions for their sites, and also to the hundreds of participating sites globally, assuring inter-consistency and uniformity across all solutions. Local authorities with limited technical and financial resources can now access the best technology and share environmental data to the benefit of the entire Pacific region. We will describe the specialized precise point positioning techniques employed by the GREAT Alert service, optimized for natural hazard monitoring and in particular earthquake monitoring. We address three fundamental aspects of these applications: 1) small and infrequent motion, 2) the availability of data at a central location, and 3) the need for refined solutions at several time scales.
NASA Astrophysics Data System (ADS)
Mughal, Maqsood Ali
Clean and environmentally friendly technologies are focusing industry attention on long-term solutions to many large-scale problems such as energy demand, pollution, and environmental safety. Thin film solar cell (TFSC) technology has emerged as an impressive photovoltaic (PV) technology that creates clean energy from fast production lines, with capabilities to reduce the material usage and energy required to manufacture large-area panels, hence lowering costs. Today, cost ($/kWh) and toxicity are the primary challenges for all PV technologies. In that respect, electrodeposited indium sulfide (In2S3) films are proposed as an alternative to hazardous cadmium sulfide (CdS) films, commonly used as buffer layers in solar cells. This dissertation focuses upon the optimization of electrodeposition parameters to synthesize In2S3 films of PV quality. The work described herein has the potential to reduce the hazardous impact of cadmium (Cd) upon the environment, while reducing the manufacturing cost of TFSCs through efficient utilization of materials. Optimization was performed through use of a statistical approach to study the effect of varying electrodeposition parameters upon the properties of the films. A robust design method referred to as the "Taguchi Method" helped in engineering the properties of the films, and improved the PV characteristics including optical bandgap, absorption coefficient, stoichiometry, morphology, crystalline structure, thickness, etc. Current density (also a function of deposition voltage) had the most significant impact upon the stoichiometry and morphology of In2S3 films, whereas deposition temperature and composition of the solution had the least significant impact. The dissertation discusses the film growth mechanism and provides understanding of the regions of low quality (for example, cracks) in films.
In2S3 films were systematically and quantitatively investigated by varying electrodeposition parameters including bath composition, current density, deposition time and temperature, stir rate, and electrode potential. These parameters individually and collectively exhibited significant correlation with the properties of the films. Digital imaging analysis (using fracture and buckling analysis software) of scanning electron microscope (SEM) images helped to quantify the cracks and study the defects in films. In addition, the effects of different annealing treatments (200 °C, 300 °C, and 400 °C in air) and coated-glass substrates (Mo, ITO, FTO) upon the properties of the In2S3 films were analyzed.
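The Taguchi-style screening described above boils down to main-effects analysis on an orthogonal array: average the response at each level of each factor and rank factors by the range of those averages. A minimal sketch with an invented L4 array and made-up responses (the real study used more factors and levels):

```python
# L4 orthogonal array: three two-level factors per run, e.g.
# (current density, deposition temperature, bath composition).
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
response = [62.0, 64.0, 78.0, 80.0]   # illustrative film-quality scores

def effect_range(factor):
    """Range of mean responses across the two levels of one factor."""
    means = []
    for level in (0, 1):
        vals = [r for run, r in zip(L4, response) if run[factor] == level]
        means.append(sum(vals) / len(vals))
    return abs(means[1] - means[0])   # larger range => stronger factor

ranges = [effect_range(f) for f in range(3)]
print(ranges)   # the first factor dominates in this made-up data
```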
Objectively Optimized Observation Direction System Providing Situational Awareness for a Sensor Web
NASA Astrophysics Data System (ADS)
Aulov, O.; Lary, D. J.
2010-12-01
There is great utility in having a flexible and automated objective observation direction system for the decadal survey missions and beyond. Such a system allows us to optimize the observations made by a suite of sensors to address specific goals, from long-term monitoring to rapid response. We have developed such a prototype using a network of communicating software elements to control a heterogeneous network of sensor systems, which can have multiple modes and flexible viewing geometries. Our system makes sensor systems intelligent and situationally aware. Together they form a sensor web of multiple sensors working together and capable of automated target selection, i.e., the sensors “know” where they are, what they are able to observe, and what targets, with what priorities, they should observe. This system is implemented in three components. The first component is a Sensor Web simulator. The Sensor Web simulator describes the capabilities and locations of each sensor as a function of time, whether orbital, sub-orbital, or ground based. The simulator has been implemented using AGI's Satellite Tool Kit (STK). STK makes it easy to analyze and visualize optimal solutions for complex space scenarios, to perform complex analyses of land, sea, air, and space assets, and to share results in one integrated solution. The second component is a target scheduler, implemented with STK Scheduler. STK Scheduler is powered by a scheduling engine that finds better solutions in a shorter amount of time than traditional heuristic algorithms. The global search algorithm within this engine is based on neural network technology and is capable of finding solutions to larger and more complex problems while maximizing the value of limited resources. The third component is a modeling and data assimilation system.
It provides situational awareness by supplying the time evolution of uncertainty and information content metrics that are used to tell us what we need to observe and the priority we should give to the observations. A prototype of this component was implemented with AutoChem. AutoChem is NASA release software constituting an automatic code generation, symbolic differentiator, analysis, documentation, and web site creation tool for atmospheric chemical modeling and data assimilation. Its model is explicit and uses an adaptive time-step, error monitoring time integration scheme for stiff systems of equations. AutoChem was the first model to ever have the facility to perform 4D-Var data assimilation and Kalman filter. The project developed a control system with three main accomplishments. First, fully multivariate observational and theoretical information with associated uncertainties was combined using a full Kalman filter data assimilation system. Second, an optimal distribution of the computations and of data queries was achieved by utilizing high performance computers/load balancing and a set of automatically mirrored databases. Third, inter-instrument bias correction was performed using machine learning. The PI for this project was Dr. David Lary of the UMBC Joint Center for Earth Systems Technology at NASA/Goddard Space Flight Center.
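The scalar form of the Kalman update at the heart of such an assimilation system can be written in a few lines: the gain weights the forecast against the observation by their variances, and the analysis variance shrinks accordingly. The numbers are illustrative only.

```python
def kalman_update(x_f, P_f, y, R):
    """Scalar Kalman analysis step.

    x_f, P_f: forecast state and its variance
    y, R:     observation and its variance
    """
    K = P_f / (P_f + R)            # Kalman gain: trust ratio forecast vs. obs
    x_a = x_f + K * (y - x_f)      # analysis state
    P_a = (1.0 - K) * P_f          # analysis (posterior) variance
    return x_a, P_a, K

# An uncertain forecast (variance 4) meets a precise observation (variance 1):
x_a, P_a, K = kalman_update(x_f=10.0, P_f=4.0, y=12.0, R=1.0)
print(x_a, P_a, K)   # analysis lies nearer the more certain observation
```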
Product Mix Selection Using AN Evolutionary Technique
NASA Astrophysics Data System (ADS)
Tsoulos, Ioannis G.; Vasant, Pandian
2009-08-01
This paper proposes an evolutionary technique for the solution of a real-life industrial problem, in particular the product mix selection problem. The evolutionary technique is a combination of a genetic algorithm, which preserves the feasibility of the trial solutions with penalties, and a local optimization method. The goal of this paper has been achieved in finding the best near-optimal solution for the profit fitness function with respect to the vagueness factor and level of satisfaction. The profit values found will be very useful to decision makers in the industrial engineering sector for implementation purposes. It is possible to improve the solutions obtained in this study by employing other metaheuristic methods such as simulated annealing, tabu search, ant colony optimization, particle swarm optimization, and artificial immune systems.
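A minimal sketch of the penalty-based GA idea (this is not the authors' implementation; all profit, hour, and budget figures are invented): infeasible product mixes are not discarded but penalized in the fitness, so the search can cross infeasible regions.

```python
import random

profit = [5.0, 4.0, 3.0]      # profit per unit of each product (made up)
hours = [2.0, 1.0, 1.5]       # machine hours per unit (made up)
budget = 10.0                 # available machine hours

def fitness(x):
    over = max(0.0, sum(h * q for h, q in zip(hours, x)) - budget)
    return sum(p * q for p, q in zip(profit, x)) - 100.0 * over  # penalty term

rng = random.Random(1)
pop = [[rng.randint(0, 5) for _ in range(3)] for _ in range(30)]
for _ in range(200):
    a, b = rng.sample(pop, 2)
    child = [rng.choice(g) for g in zip(a, b)]           # uniform crossover
    i = rng.randrange(3); child[i] = rng.randint(0, 5)   # mutation
    pop.sort(key=fitness); pop[0] = child                # replace the worst
best = max(pop, key=fitness)
print(best, fitness(best))
```

With the steep penalty, the best individual found is feasible in practice; a local optimization step, as in the paper, could then polish it.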
Voronoi Diagram Based Optimization of Dynamic Reactive Power Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Weihong; Sun, Kai; Qi, Junjian
2015-01-01
Dynamic var sources can effectively mitigate fault-induced delayed voltage recovery (FIDVR) issues or even voltage collapse. This paper proposes a new approach to optimizing the sizes of dynamic var sources at candidate locations using a Voronoi diagram based algorithm. It first disperses sample points of potential solutions in a searching space, evaluates a cost function at each point by barycentric interpolation for the subspaces around the point, and then constructs a Voronoi diagram of cost function values over the entire space. Accordingly, the final optimal solution can be obtained. Case studies on the WSCC 9-bus system and NPCC 140-bus system have validated that the new approach can quickly identify the boundary of feasible solutions in the searching space and converge to the global optimal solution.
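A loose sketch of the sample-then-refine idea (a simplified stand-in, not the paper's barycentric-interpolation algorithm; the two-variable cost surface is invented): scatter candidate sizes, then resample only inside the Voronoi cell of the incumbent best point, i.e., accept a candidate only if the incumbent is its nearest sample.

```python
import random

def cost(x, y):
    """Toy cost surface for two var-source sizes; minimum at (3, 7)."""
    return (x - 3.0) ** 2 + (y - 7.0) ** 2

rng = random.Random(0)
pts = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(50)]
best = min(pts, key=lambda p: cost(*p))

for _ in range(200):
    cand = (best[0] + rng.gauss(0, 0.5), best[1] + rng.gauss(0, 0.5))
    nearest = min(pts, key=lambda p: (p[0] - cand[0]) ** 2 + (p[1] - cand[1]) ** 2)
    # Keep the candidate only if it falls in the incumbent's Voronoi cell
    # and actually improves the cost.
    if nearest == best and cost(*cand) < cost(*best):
        pts.append(cand); best = cand
print(best)   # close to the toy surface's minimum at (3, 7)
```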
Multiobjective hyper heuristic scheme for system design and optimization
NASA Astrophysics Data System (ADS)
Rafique, Amer Farhan
2012-11-01
As system design becomes more multifaceted, integrated, and complex, the traditional single-objective optimization approach to optimal design is becoming less efficient and effective. Single-objective optimization methods present a unique optimal solution, whereas multiobjective methods present a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set, independent of the problem instance, through a multiobjective scheme. A further objective of the intended approach is to improve the worthiness of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to give the system designer the leverage of studying and analyzing a large number of possible solutions in a short time. This article presents a Multiobjective Hyper-Heuristic Optimization Scheme based on low-level meta-heuristics developed for application in engineering system design. Herein, we present a stochastic function that manages the low-level meta-heuristics to improve the chances of reaching a global optimum solution. A Genetic Algorithm, Simulated Annealing and Swarm Intelligence are used as low-level meta-heuristics in this study. Performance of the proposed scheme is investigated through a comprehensive empirical analysis yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require simultaneous optimization of multiple conflicting objectives. Random decision making makes the implementation of this scheme attractive and easy. Injecting feasible solutions significantly alters the search direction and also adds population diversity, resulting in accomplishment of the pre-defined goals set in the proposed scheme.
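Whatever low-level meta-heuristics the hyper-heuristic dispatches, a multiobjective scheme needs Pareto-dominance bookkeeping to maintain its front. A minimal filter for two minimization objectives (the sample points are invented):

```python
def dominates(a, b):
    """True if a is at least as good as b in every objective and strictly
    better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 9), (3, 7), (2, 8), (4, 4), (6, 3), (5, 5), (9, 1)]
print(pareto_front(pts))   # (5, 5) is dominated by (4, 4) and drops out
```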
Computationally efficient control allocation
NASA Technical Reports Server (NTRS)
Durham, Wayne (Inventor)
2001-01-01
A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is that which exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal, and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudoinverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; the number of required floating-point operations increased 5.5 to seven times faster than did that of the minimum-norm solution (the pseudoinverse), and at about the same rate as did that of the cascaded generalized inverse solution. The computational requirements of the method presented were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.
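The minimum-norm (pseudoinverse) allocation that the method is benchmarked against can be computed directly for a small made-up example with two objectives and three redundant effectors: u = Bᵀ(BBᵀ)⁻¹m, written out by hand since the matrices are tiny.

```python
# Effectiveness matrix B (2 objectives x 3 effectors) and desired moments m;
# both are invented for illustration.
B = [[1.0, 0.5, 0.0],
     [0.0, 0.5, 1.0]]
m = [1.0, 0.5]

# G = B B^T is 2x2, so it can be inverted explicitly.
g11 = sum(b * b for b in B[0])
g22 = sum(b * b for b in B[1])
g12 = sum(x * y for x, y in zip(B[0], B[1]))
det = g11 * g22 - g12 * g12
lam = [(g22 * m[0] - g12 * m[1]) / det,     # lambda = G^{-1} m
       (-g12 * m[0] + g11 * m[1]) / det]
u = [B[0][j] * lam[0] + B[1][j] * lam[1] for j in range(3)]   # u = B^T lambda
print(u)   # B u reproduces m with the smallest-norm effector vector
```

The pseudoinverse is cheap but, as the abstract notes, it generally does not reach the effectors' collective maximum capability; that is the gap the patented method closes.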
Research on logistics scheduling based on PSO
NASA Astrophysics Data System (ADS)
Bao, Huifang; Zhou, Linli; Liu, Lei
2017-08-01
With the rapid development of network-based e-commerce, the importance of logistics distribution support for e-commerce is becoming more and more obvious. Optimization of vehicle distribution routing can improve economic benefit and make logistics management more scientific [1]. Therefore, the study of the logistics distribution vehicle routing optimization problem is not only of great theoretical significance, but also of considerable practical value. Particle swarm optimization is a kind of evolutionary algorithm that starts from random solutions, searches for the optimal solution by iteration, and evaluates the quality of solutions through fitness. In order to obtain a more suitable logistics scheduling scheme, this paper proposes a logistics model based on the particle swarm optimization algorithm.
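A minimal PSO loop of the kind the paper builds on, applied to a stand-in one-dimensional cost (a real routing problem would need a permutation encoding; all parameters here are illustrative):

```python
import random

def cost(x):
    """Stand-in for a route-length evaluation; minimum at x = 4.2."""
    return (x - 4.2) ** 2 + 1.0

rng = random.Random(0)
xs = [rng.uniform(-10, 10) for _ in range(10)]   # particle positions
vs = [0.0] * 10                                  # particle velocities
pbest = xs[:]                                    # each particle's best
gbest = min(pbest, key=cost)                     # swarm best

for _ in range(100):
    for i in range(10):
        r1, r2 = rng.random(), rng.random()
        # inertia + pull toward personal best + pull toward swarm best
        vs[i] = 0.7 * vs[i] + 1.5 * r1 * (pbest[i] - xs[i]) + 1.5 * r2 * (gbest - xs[i])
        xs[i] += vs[i]
        if cost(xs[i]) < cost(pbest[i]):
            pbest[i] = xs[i]
    gbest = min(pbest, key=cost)
print(gbest)   # converges near x = 4.2
```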
A Survey of Distributed Optimization and Control Algorithms for Electric Power Systems
Molzahn, Daniel K.; Dorfler, Florian K.; Sandberg, Henrik; ...
2017-07-25
Historically, centrally computed algorithms have been the primary means of power system optimization and control. With increasing penetrations of distributed energy resources requiring optimization and control of power systems with many controllable devices, distributed algorithms have been the subject of significant research interest. Here, this paper surveys the literature of distributed algorithms with applications to optimization and control of power systems. In particular, this paper reviews distributed algorithms for offline solution of optimal power flow (OPF) problems as well as online algorithms for real-time solution of OPF, optimal frequency control, optimal voltage control, and optimal wide-area control problems.
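One of the simplest building blocks behind such distributed algorithms is consensus averaging with Metropolis weights: each bus agent exchanges values only with its neighbors, yet all agents converge to the network-wide mean. The four-bus line network and measurements below are invented.

```python
# Line network 0 - 1 - 2 - 3; each agent knows only its neighbors.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
deg = {i: len(n) for i, n in neighbors.items()}
x = {0: 4.0, 1: 8.0, 2: 0.0, 3: 12.0}   # local measurements (mean = 6.0)

for _ in range(100):
    # Metropolis weights 1/(1 + max degree) keep the update doubly
    # stochastic, so consensus lands on the exact average.
    x = {i: x[i] + sum((x[j] - x[i]) / (1 + max(deg[i], deg[j]))
                       for j in neighbors[i])
         for i in x}
print(x)   # every agent approaches the global mean of 6.0
```

Distributed OPF methods such as ADMM layer optimization on top of exactly this kind of neighbor-only information exchange.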
Piromalis, Dimitrios; Arvanitis, Konstantinos
2016-08-04
Wireless Sensor and Actuator Networks (WSANs) constitute one of the most challenging technologies, with tremendous socio-economic impact, for the next decade. Functionally and energy-optimized hardware systems and development tools may be the most critical facet of this technology for the achievement of such prospects. Especially in the area of agriculture, where the hostile operating environment adds to the general technological and technical issues, reliable and robust WSAN systems are mandatory. This paper focuses on the hardware design architectures of WSANs for real-world agricultural applications. It presents the available alternatives in hardware design and identifies their difficulties and problems for real-life implementations. The paper introduces SensoTube, a new WSAN hardware architecture, which is proposed as a solution to the various existing design constraints of WSANs. The establishment of the proposed architecture is based, firstly, on an abstraction approach in the functional requirements context, and secondly, on the standardization of the subsystems' connectivity, in order to allow for an open, expandable, flexible, reconfigurable, energy-optimized, reliable and robust hardware system. The SensoTube implementation reference model, together with its encapsulation design and installation, are analyzed and presented in detail. Furthermore, as a proof of concept, certain use cases have been studied in order to demonstrate the benefits of migrating existing designs based on the available open-source hardware platforms to the SensoTube architecture.
Ren, Gang; Liu, Rong-hua; Shao, Feng; Huang, Hui-lian; Wen, Li-rong
2010-08-01
To study the technology optimization for extraction and purification of total flavones from the root bark of Artocarpus styracifolius. The optimum extraction conditions were investigated through the contents of the total flavones, using an orthogonal test; static adsorption capacity and desorption rate were employed as evaluation items for the screening of the optimum macroporous resin, and the optimum technology for the purification of total flavones with the selected macroporous resin was also investigated. The optimum extraction conditions were as follows: soak the raw material in seven times its amount of 60% alcohol for 12 hours, then extract once by hot reflux at 50 degrees C. HPD-500 type macroporous resin showed better adsorption and desorption properties. The optimum purification conditions were as follows: the sample solution was prepared at a concentration of 50.0 mg/mL and subjected to HPD-500 type macroporous resin column chromatography with a load ratio of 22.0 mg total flavones per gram of resin. After standing for 1 hour, the column was eluted with 4 BV water before being eluted with 4 BV 80% alcohol. The purity of the product was 86.4%, which enhanced the content of total flavones by 533%. The optimum conditions for extraction and purification of total flavones from the root bark of Artocarpus styracifolius are convenient and practical, and could be used as a reference for industrial production.
Cybersecurity and Optimization in Smart “Autonomous” Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mylrea, Michael E.; Gourisetti, Sri Nikhil Gup
Significant resources have been invested in making buildings “smart” by digitizing, networking and automating key systems and operations. Smart autonomous buildings create new energy efficiency, economic and environmental opportunities. But as buildings become increasingly networked to the Internet, they can also become more vulnerable to various cyber threats. Automated and Internet-connected building systems, equipment, controls, and sensors can significantly increase cyber and physical vulnerabilities that threaten the confidentiality, integrity, and availability of critical systems in organizations. Securing smart autonomous buildings presents a national security and economic challenge to the nation. Ignoring this challenge threatens business continuity and the availability of critical infrastructures that are enabled by smart buildings. In this chapter, the authors address challenges and explore new opportunities in securing smart buildings that are enhanced by machine learning, cognitive sensing, artificial intelligence (AI) and smart-energy technologies. The chapter begins by identifying cyber-threats and challenges to smart autonomous buildings. Then it provides recommendations on how AI enabled solutions can help smart buildings and facilities better protect, detect and respond to cyber-physical threats and vulnerabilities. Next, the chapter provides case studies that examine how combining AI with innovative smart-energy technologies can increase both cybersecurity and energy efficiency savings in buildings. The chapter concludes by proposing recommendations for future cybersecurity and energy optimization research examining AI enabled smart-energy technology.
Non linear predictive control of a LEGO mobile robot
NASA Astrophysics Data System (ADS)
Merabti, H.; Bouchemal, B.; Belarbi, K.; Boucherma, D.; Amouri, A.
2014-10-01
Metaheuristics are general-purpose heuristics which have shown great potential for the solution of difficult optimization problems. In this work, we apply one such metaheuristic, particle swarm optimization (PSO), to the optimization problem arising in nonlinear model predictive control (NLMPC). This algorithm is easy to code and may be considered an alternative to the more classical solution procedures. The PSO-NLMPC approach is applied to control a mobile robot for trajectory tracking and obstacle avoidance. Experimental results show the strength of this approach.
Optimal design of an alignment-free two-DOF rehabilitation robot for the shoulder complex.
Galinski, Daniel; Sapin, Julien; Dehez, Bruno
2013-06-01
This paper presents the optimal design of an alignment-free exoskeleton for the rehabilitation of the shoulder complex. This robot structure is constituted of two actuated joints and is linked to the arm through passive degrees of freedom (DOFs) to drive the flexion-extension and abduction-adduction movements of the upper arm. The optimal design of this structure is performed in two steps. The first step is a multi-objective optimization process aiming to find the best parameters characterizing the robot and its position relative to the patient. The second step is a comparison process aiming to select the best solution from the optimization results on the basis of several criteria related to practical considerations. The optimal design process leads to a solution that outperforms an existing solution in aspects such as kinematics and ergonomics while being simpler.
A genetic algorithm solution to the unit commitment problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kazarlis, S.A.; Bakirtzis, A.G.; Petridis, V.
1996-02-01
This paper presents a Genetic Algorithm (GA) solution to the Unit Commitment problem. GAs are general-purpose optimization techniques based on principles inspired by biological evolution, using metaphors of mechanisms such as natural selection, genetic recombination and survival of the fittest. A simple GA implementation using the standard crossover and mutation operators could locate near-optimal solutions, but in most cases failed to converge to the optimal solution. However, using the Varying Quality Function technique and adding problem-specific operators, satisfactory solutions to the Unit Commitment problem were obtained. Test results for systems of up to 100 units and comparisons with results obtained using Lagrangian Relaxation and Dynamic Programming are also reported.
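The flavor of the GA encoding can be shown on a toy single-period unit commitment instance (all capacities, costs, and the penalty weight are invented, and the real problem is multi-period): bitstrings encode unit on/off decisions, and a penalty term, in the spirit of the quality function above, discourages committing too little capacity for the demand.

```python
import random

cap = [100.0, 60.0, 40.0, 40.0]      # unit capacities, MW (made up)
cost = [500.0, 350.0, 300.0, 280.0]  # cost of committing each unit (made up)
demand = 150.0                       # MW to be covered

def quality(bits):
    """Commitment cost plus a penalty for any capacity shortfall."""
    committed = sum(c for c, b in zip(cap, bits) if b)
    shortfall = max(0.0, demand - committed)
    return sum(k for k, b in zip(cost, bits) if b) + 50.0 * shortfall

rng = random.Random(7)
pop = [[rng.randint(0, 1) for _ in range(4)] for _ in range(20)]
for _ in range(300):
    a, b = rng.sample(pop, 2)
    child = [rng.choice(g) for g in zip(a, b)]   # uniform crossover
    i = rng.randrange(4); child[i] ^= 1          # bit-flip mutation
    pop.sort(key=quality); pop[-1] = child       # replace the worst
best = min(pop, key=quality)
print(best, quality(best))   # units 1 and 2 cover 160 MW at minimum cost
```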
DOE Office of Scientific and Technical Information (OSTI.GOV)
NATHAN HANCOCK
2013-01-13
The purpose of this study is to design (i) a stripper system where heat is used to strip ammonia (NH{sub 3}) and carbon dioxide (CO{sub 2}) from a diluted draw solution; and (ii) a condensation or absorption system where the stripped NH{sub 3} and CO{sub 2} are captured in condensed water to form a re-concentrated draw solution. This study supports the Industrial Technologies Program of the DOE Office of Energy Efficiency and Renewable Energy and their Industrial Energy Efficiency Grand Challenge award solicitation. Results from this study show that simulated Oasys draw solutions composed of a complex electrolyte solution associated with the dissolution of NH{sub 3} and CO{sub 2} gas in water can successfully be stripped and fully condensed under standard atmospheric pressure. Stripper bottoms NH{sub 3} concentration can reliably be reduced to < 1 mg/L, even when starting with liquids that have an NH{sub 3} mass fraction exceeding 6% to simulate diluted draw solution from the forward osmosis membrane component of the process. Concentrated draw solution produced by fully condensing the stripper tops was shown to exceed 6 M-C with nitrogen-to-carbon (N:C) molar ratios on the order of two. Reducing the operating pressure of the stripper column serves to reduce the partial vapor pressure of both NH{sub 3} and CO{sub 2} in solution and enables lower-temperature operation towards integration of industrial low-grade waste heat. Effective stripping of solutes was observed with operating pressures as low as 100 mbar (3-inHg). Systems operating at reduced pressure and temperature require additional design considerations to fully condense and absorb these constituents for reuse within the Oasys EO system context. Comparing empirical data with process simulation models confirmed that several key parameters related to vapor-liquid equilibrium and intrinsic material properties were not accurate.
Additional experiments and refinement of material property databases within the chosen process simulation software were required to improve the reliability of process simulations for engineering design support. Data from the experiments were also employed to calculate critical mass transfer and system design parameters, such as the height equivalent to a theoretical plate (HETP), to aid in process design. When measured in a less than optimal design state for the stripping of NH{sub 3} and CO{sub 2} from a simulated dilute draw solution, the HETP for one type of commercial stripper packing material was 1.88 ft/stage. During this study it was observed that the heat duty required to vaporize the draw solution solutes is substantially affected by the amount of water boilup also produced to achieve a low NH{sub 3} stripper bottoms concentration specification. Additionally, fluid loading of the stripper packing media is a critical performance parameter that affects all facets of optimum stripper column performance. Condensation of the draw solution tops vapor requires additional process considerations if conducted at sub-atmospheric pressure and low temperature. Future work will focus on the commercialization of the Oasys EO technology platform for numerous applications in water and wastewater treatment, as well as harvesting low-enthalpy energy with our proprietary osmotic heat engine. Engineering design related to thermal integration of Oasys EO technology for both low- and high-grade heat applications is underway. Novel thermal recovery processes are also being investigated in addition to the conventional approaches described in this report. Oasys Water plans to deploy commercial scale systems into the energy and zero liquid discharge markets in 2013. Additional process refinement will lead to integration of low-enthalpy renewable heat sources for municipal desalination applications.
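The HETP figure quoted above is the ratio of packed height to the number of theoretical stages inferred from the measured separation. The packed height and stage count below are assumed values chosen only to reproduce the reported 1.88 ft/stage, not data from the study.

```python
# HETP = packed height / number of ideal (theoretical) stages that
# reproduces the measured NH3 separation across the packing.
packed_height_ft = 9.4   # assumed packed-bed height for illustration
n_stages = 5             # assumed stage count for illustration

hetp = packed_height_ft / n_stages
print(hetp)              # 1.88 ft/stage, matching the reported value
```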
2010-06-01
scanners, readers, or imagers. These types of ADCS devices use two slightly different technologies. Laser scanners use a photodiode to measure the...structure of a ship, but the LCS utilizes modular mission packages that can be removed and replaced when the threat, environment, or mission changes...would need to support a wide array of business applications and users (Clarion, 2009). The DoD's solution to this deficiency is called IUID. IUID is a
[From the x-ray department to the institute for imaging diagnosis].
Voegeli, E; Steck, W
1985-02-01
The increasing sophistication of diagnostic radiology has led to rising emphasis on modality-related training and practice in radiological subspecialties. To accomplish both optimal patient management and a rational, cost-effective analysis of imaging procedures, a comprehensive approach to modern radiology is needed rather than a technology-related attitude. The most suitable solution is an imaging department in which the various imaging data are synthesized and correlated by a general practitioner of radiology, as opposed to the subspecialty radiologist. The principles of management and the layout of such a center are described by the authors.
On a computational model of building thermal dynamic response
NASA Astrophysics Data System (ADS)
Jarošová, Petra; Vala, Jiří
2016-07-01
Development and exploitation of advanced materials, structures and technologies in civil engineering, both for buildings with carefully controlled interior temperature and for common residential houses, together with new European and national directives and technical standards, stimulate the development of computational tools that are complex and robust, yet sufficiently simple and inexpensive, to support building design and the optimization of energy consumption. This paper demonstrates how such seemingly contradictory requirements can be reconciled, using a simplified non-stationary thermal model of a building motivated by the analogy with the analysis of electric circuits; certain semi-analytical forms of solutions come from the method of lines.
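As a hedged illustration of the electric-circuit analogy (the paper's actual model is richer), a one-node lumped RC building model has the kind of closed-form exponential solution a semi-analytical method exploits; all parameter values below are assumptions chosen for illustration:

```python
import math

def interior_temperature(t, T0, T_out, R, C, Q=0.0):
    """One-node RC analogy: C * dT/dt = (T_out - T)/R + Q.
    R is the envelope thermal resistance [K/W], C the lumped heat capacity
    [J/K], Q an internal heat gain [W]; the linear ODE has an exponential
    closed-form solution with time constant R*C."""
    T_inf = T_out + Q * R                       # steady-state temperature
    return T_inf + (T0 - T_inf) * math.exp(-t / (R * C))

# Interior starts at 20 C, exterior at 0 C, no heating; temperature after 1 h:
print(round(interior_temperature(3600.0, 20.0, 0.0, R=0.01, C=1e6), 2))
```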
Solutions for medical databases optimal exploitation
Branescu, I; Purcarea, VL; Dobrescu, R
2014-01-01
The paper discusses methods for applying OLAP techniques to multidimensional databases that leverage the existing performance-enhancing technique known as practical pre-aggregation, by making this technique relevant to a much wider range of medical applications, as logistic support to data warehousing techniques. The transformations have practically low computational complexity and may be implemented using standard relational database technology. The paper also describes how to integrate the transformed hierarchies into current OLAP systems, transparently to the user, and proposes a flexible, "multimodel" federated system for extending OLAP querying to external object databases. PMID:24653769
Optimality conditions for the numerical solution of optimization problems with PDE constraints :
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilo Valentin, Miguel Alejandro; Ridzal, Denis
2014-03-01
A theoretical framework for the numerical solution of partial differential equation (PDE) constrained optimization problems is presented in this report. This theoretical framework embodies the fundamental infrastructure required to efficiently implement and solve this class of problems. Detailed derivations of the optimality conditions required to accurately solve several parameter identification and optimal control problems are also provided in this report. This will allow the reader to further understand how the theoretical abstraction presented in this report translates to the application.
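As a hedged sketch of the standard first-order optimality system such a framework rests on (generic notation, not necessarily the report's): with state u, design/control z, objective J, and PDE constraint c(u,z) = 0, the Lagrangian L(u,z,lambda) = J(u,z) + <lambda, c(u,z)> yields

```latex
\begin{aligned}
c(u,z) &= 0 && \text{(state equation)}\\
c_u(u,z)^{*}\,\lambda &= -\,J_u(u,z) && \text{(adjoint equation)}\\
J_z(u,z) + c_z(u,z)^{*}\,\lambda &= 0 && \text{(design equation)}
\end{aligned}
```

Solving the three coupled equations simultaneously (full-space) or eliminating the state and adjoint variables (reduced-space) is the usual implementation choice in such frameworks.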
Optimal and Autonomous Control Using Reinforcement Learning: A Survey.
Kiumarsi, Bahare; Vamvoudakis, Kyriakos G; Modares, Hamidreza; Lewis, Frank L
2018-06-01
This paper reviews the current state of the art in reinforcement learning (RL)-based feedback control solutions to optimal regulation and tracking of single and multiagent systems. Existing RL solutions to both optimal H2 and H-infinity control problems, as well as graphical games, will be reviewed. RL methods learn the solution to optimal control and game problems online, using measured data along the system trajectories. We discuss Q-learning and the integral RL algorithm as core algorithms for discrete-time (DT) and continuous-time (CT) systems, respectively. Moreover, we discuss a new direction of off-policy RL for both CT and DT systems. Finally, we review several applications.
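As a hedged, minimal illustration of the tabular Q-learning update the survey refers to (the toy deterministic chain MDP below is our own construction, not an example from the paper):

```python
import random

# Toy chain MDP: states 0..3; action 1 moves right, action 0 stays put;
# reaching state 3 pays reward 1 and ends the episode.
def step(s, a):
    s2 = min(s + a, 3)
    return s2, (1.0 if s2 == 3 else 0.0), s2 == 3

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    random.seed(seed)
    Q = [[0.0, 0.0] for _ in range(4)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = max((0, 1), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            # Core TD update: Q(s,a) += alpha * (r + gamma*max_a' Q(s',a') - Q(s,a))
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
print([max((0, 1), key=lambda a: Q[s][a]) for s in range(3)])  # greedy policy
```

The learned greedy policy moves right in every non-terminal state, since the bootstrapped value of "stay" can never exceed that of "move right" here.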
NASA Technical Reports Server (NTRS)
Englander, Arnold C.; Englander, Jacob A.
2017-01-01
Interplanetary trajectory optimization problems are highly complex and are characterized by a large number of decision variables, equality and inequality constraints, and many locally optimal solutions. Stochastic global search techniques, coupled with a large-scale NLP solver, have been shown to solve such problems but are insufficiently robust when the problem constraints become very complex. In this work, we present a novel search algorithm that takes advantage of the fact that equality constraints effectively collapse the solution space to lower dimensionality. This new approach walks the "filament" of feasibility to efficiently find the globally optimal solution.
From Panoramic Photos to a Low-Cost Photogrammetric Workflow for Cultural Heritage 3d Documentation
NASA Astrophysics Data System (ADS)
D'Annibale, E.; Tassetti, A. N.; Malinverni, E. S.
2013-07-01
The research aims to optimize a workflow for architecture documentation: starting from panoramic photos, it examines available instruments and technologies to propose an integrated, quick and low-cost solution for Virtual Architecture. The broader research background shows how to use spherical panoramic images for architectural metric survey. The input data (oriented panoramic photos), the level of reliability and Image-based Modeling methods constitute an integrated and flexible 3D reconstruction approach: from the professional survey of cultural heritage to its communication in a virtual museum. The proposed work results from the integration and implementation of different techniques (Multi-Image Spherical Photogrammetry, Structure from Motion, Image-based Modeling) with the aim of achieving high metric accuracy and photorealistic performance. Different documentation options are possible within the proposed workflow: from the virtual navigation of spherical panoramas to complex solutions of simulation and virtual reconstruction. VR tools allow the integration of different technologies and the development of new solutions for virtual navigation. Image-based Modeling techniques allow 3D model reconstruction with photorealistic, high-resolution texture. The high resolution of the panoramic photos and the algorithms for panorama orientation and photogrammetric restitution ensure high accuracy and high-resolution texture. Automated techniques and their subsequent integration are the subject of this research. The data, suitably processed and integrated, provide different levels of analysis and virtual reconstruction, joining photogrammetric accuracy to the photorealistic performance of the shaped surfaces. Lastly, a new solution for virtual navigation is tested: within a single environment, it offers the chance to interact with a high-resolution oriented spherical panorama and the 3D reconstructed model at once.
Noise and LPI radar as part of counter-drone mitigation system measures
NASA Astrophysics Data System (ADS)
Zhang, Yan (Rockee); Huang, Yih-Ru; Thumann, Charles
2017-05-01
With the rapid proliferation of small unmanned aerial systems (UAS) in the national airspace, small operational drones are sometimes considered a security threat to critical infrastructures such as sports stadiums, military facilities, and airports. Many civilian counter-drone solutions and products have been reported, including radar and electromagnetic countermeasures. Current electromagnetic solutions are usually limited to a particular detection and counter-measure scheme that is effective only against specific types of drones. Moreover, the control and communication link technologies used nowadays even in RC drones are more sophisticated, making them more difficult to detect, decode and counter. Facing these challenges, our team proposes a "software-defined" solution based on noise and LPI radar. For detection, wideband-noise radar has the resolution performance to discriminate possible micro-Doppler features of a drone from those of biological scatterers. It also has the benefits of adapting more readily to different types of drones and of detecting covertly in security applications. For countermeasures, random noise can be combined with a "random sweeping" jamming scheme to achieve the optimal balance between the allowed peak power and the effective jamming probability. Some theoretical analysis of the proposed solution is provided in this study, a design case study is developed, and initial laboratory experiments as well as outdoor tests are conducted to validate the basic concepts and theories. The study demonstrates the basic feasibility of the Drone Detection and Mitigation Radar (DDMR) concept, while much work still needs to be done for a complete and field-worthy technology development.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reister, D.B.
This paper uses the Pontryagin maximum principle to find time optimal paths for a constant speed unicycle. The time optimal paths consist of sequences of arcs of circles and straight lines. The maximum principle introduced concepts (dual variables, bang-bang solutions, singular solutions, and transversality conditions) that provide important insight into the nature of the time optimal paths. 10 refs., 6 figs.
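As a hedged sketch of the structure such an analysis yields (generic notation, not necessarily the paper's): for a unit-speed unicycle with state (x, y, theta) and bounded turn rate, the Hamiltonian is linear in the control, so the minimum principle gives a bang-bang/singular law whose arcs are exactly the circles and straight lines noted above:

```latex
\begin{aligned}
\dot{x} &= \cos\theta, \quad \dot{y} = \sin\theta, \quad \dot{\theta} = u, \quad |u| \le u_{\max},\\
H &= 1 + \lambda_x \cos\theta + \lambda_y \sin\theta + \lambda_\theta\, u,\\
u^{*} &= -u_{\max}\,\operatorname{sign}(\lambda_\theta)
  \quad \text{(bang-bang: arcs of circles of radius } 1/u_{\max}\text{)},\\
\lambda_\theta &\equiv 0 \;\Rightarrow\; u^{*} = 0
  \quad \text{(singular: straight-line segments)}.
\end{aligned}
```

The transversality conditions then pin down which sequence of arcs and segments joins the given boundary states in minimum time.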
NASA Astrophysics Data System (ADS)
Liberatore, Raffaele; Ferrara, Mariarosaria; Lanchi, Michela; Turchetti, Luca
2017-06-01
It is widely agreed that hydrogen used as an energy carrier and/or storage medium may significantly contribute to the reduction of emissions, especially if produced from renewable energy sources. The Hybrid Sulfur (HyS) cycle is considered one of the most promising processes for producing hydrogen through water splitting. The FP7 project SOL2HY2 (Solar to Hydrogen Hybrid Cycles) investigates innovative material and process solutions for the use of solar heat and power in the HyS process. A significant part of the SOL2HY2 project is devoted to the analysis and optimization of the integration of the solar and chemical (hydrogen production) plants. In this context, this work investigates the possibility of integrating different solar technologies, namely photovoltaics, solar central receivers and solar troughs, to optimize their use in the HyS cycle for green hydrogen production, in both the open and closed process configurations. The analysis carried out accounts for different combinations of geographical location and plant sizing criteria. The use of a sulfur burner, which can serve both as thermal backup and as an SO2 source for the open cycle, is also considered.
Wang, Kun; Chartrand, Patrice
2018-06-15
This paper presents a quantitative thermodynamic description of water, hydrogen fluoride and hydrogen dissolution in cryolite-based molten salts, which is of technological importance to the Hall-Héroult electrolytic aluminum extraction cell. The Modified Quasichemical Model in the Quadruplet Approximation (MQMQA), as used to treat a large variety of molten salt systems, was adopted to thermodynamically describe the liquid phase; all solid solutions were modeled using the Compound Energy Formalism (CEF); the gas phase was treated as an ideal mixture of all possible species. The model parameters were mainly obtained by critical evaluation and optimization of thermodynamic and phase equilibrium data available from relevant experimental measurements and theoretical predictions (first-principles calculations and empirical estimations) for the lower-order subsystems. These optimized model parameters were then merged within the Kohler/Toop interpolation scheme, facilitating the prediction of gas solubility (H2O, HF and H2) in multicomponent cryolite-based molten salts using the FactSage thermochemical software. Several informative diagrams were finally obtained to provide useful information for industrial partners dedicated to Hall-Héroult electrolytic aluminum production or other molten-salt technologies (the purification process and electroslag refining).
A Solar Chimney for renewable energy production: thermo-fluid dynamic optimization by CFD analyses
NASA Astrophysics Data System (ADS)
Montelpare, S.; D'Alessandro, V.; Zoppi, A.; Costanzo, E.
2017-11-01
This paper analyzes the performance of a solar tower designed for renewable energy production. The Solar Chimney Power Plant (SCPP) converts solar energy by means of three basic components: a large circular solar collector, a high tower at the center of the collector, and a turbine generator inside the chimney. SCPPs are characterized by a long operational life, low maintenance costs, zero fuel use, no water use and no greenhouse gas emissions. The main problem with this technology is the low global energy conversion coefficient, due to the chain of four conversions: solar radiation > thermal energy > kinetic energy > mechanical energy > electric energy. This paper takes as its starting point the well-known Manzanares power plant in order to calibrate a numerical model based on finite volumes. A solar tower with reduced dimensions was then designed, and an analysis of various geometric parameters was conducted: the inlet section, the collector slope, and the fillet radius between the SCPP sections. Once the optimal solution was identified, curved deflectors able to induce a flow swirl along the vertical tower axis were designed.
Exact solution of large asymmetric traveling salesman problems.
Miller, D L; Pekny, J F
1991-02-15
The traveling salesman problem is one of a class of difficult problems in combinatorial optimization that is representative of a large number of important scientific and engineering problems. A survey is given of recent applications and methods for solving large problems. In addition, an algorithm for the exact solution of the asymmetric traveling salesman problem is presented along with computational results for several classes of problems. The results show that the algorithm performs remarkably well for some classes of problems, determining an optimal solution even for problems with large numbers of cities, yet for other classes, even small problems thwart determination of a provably optimal solution.
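As a hedged illustration of what "exact solution of the asymmetric TSP" means, here is a sketch using Held-Karp dynamic programming, a different exact method than the branch-and-bound approach such algorithms typically use, and practical only for small instances; the distance matrix is an invented example:

```python
from itertools import combinations

def held_karp(dist):
    """Exact asymmetric TSP via Held-Karp dynamic programming, O(n^2 * 2^n).
    C[(S, j)] = cheapest cost of a path that starts at city 0, visits
    exactly the cities in bitmask S, and ends at city j."""
    n = len(dist)
    C = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            bits = sum(1 << j for j in S)
            for j in S:
                prev = bits & ~(1 << j)
                C[(bits, j)] = min(C[(prev, k)] + dist[k][j]
                                   for k in S if k != j)
    full = (1 << n) - 2            # every city except the start
    return min(C[(full, j)] + dist[j][0] for j in range(1, n))

# Small asymmetric instance (invented for illustration):
D = [[0, 2, 9, 10],
     [1, 0, 6, 4],
     [15, 7, 0, 8],
     [6, 3, 12, 0]]
print(held_karp(D))  # -> 21 (tour 0-2-3-1-0)
```

Exponential memory and time are exactly why branch-and-bound methods with strong relaxations are needed for instances with large numbers of cities.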
Multiobjective optimization of urban water resources: Moving toward more practical solutions
NASA Astrophysics Data System (ADS)
Mortazavi, Mohammad; Kuczera, George; Cui, Lijie
2012-03-01
The issue of drought security is of paramount importance for cities located in regions subject to severe prolonged droughts. The prospect of "running out of water" for an extended period would threaten the very existence of the city. Managing drought security for an urban water supply is a complex task involving trade-offs between conflicting objectives. In this paper a multiobjective optimization approach for urban water resource planning and operation is developed to overcome practically significant shortcomings identified in previous work. A case study based on the headworks system for Sydney (Australia) demonstrates the approach and highlights the potentially serious shortcomings of Pareto optimal solutions conditioned on short climate records, incomplete decision spaces, and constraints to which system response is sensitive. Where high levels of drought security are required, optimal solutions conditioned on short climate records are flawed. Our approach addresses drought security explicitly by identifying approximate optimal solutions in which the system does not "run dry" in severe droughts with expected return periods up to a nominated (typically large) value. In addition, it is shown that failure to optimize the full mix of interacting operational and infrastructure decisions and to explore the trade-offs associated with sensitive constraints can lead to significantly more costly solutions.
Large-angle slewing maneuvers for flexible spacecraft
NASA Technical Reports Server (NTRS)
Chun, Hon M.; Turner, James D.
1988-01-01
A new class of closed-form solutions for finite-time linear-quadratic optimal control problems is presented. The solutions involve Potter's solution for the differential matrix Riccati equation, which assumes the form of a steady-state plus transient term. Illustrative examples are presented which show that the new solutions are more computationally efficient than alternative solutions based on the state transition matrix. As an application of the closed-form solutions, the neighboring extremal path problem is presented for a spacecraft retargeting maneuver where a perturbed plant with off-nominal boundary conditions now follows a neighboring optimal trajectory. The perturbation feedback approach is further applied to three-dimensional slewing maneuvers of large flexible spacecraft. For this problem, the nominal solution is the optimal three-dimensional rigid body slew. The perturbation feedback then limits the deviations from this nominal solution due to the flexible body effects. The use of frequency shaping in both the nominal and perturbation feedback formulations reduces the excitation of high-frequency unmodeled modes. A modified Kalman filter is presented for estimating the plant states.
A mixed analog/digital chaotic neuro-computer system for quadratic assignment problems.
Horio, Yoshihiko; Ikeguchi, Tohru; Aihara, Kazuyuki
2005-01-01
We construct a mixed analog/digital chaotic neuro-computer prototype system for quadratic assignment problems (QAPs). The QAP is one of the difficult NP-hard problems and includes several real-world applications. Chaotic neural networks have been used to solve combinatorial optimization problems through chaotic search dynamics, which efficiently search for optimal or near-optimal solutions. However, preliminary experiments showed that, although it obtained good feasible solutions, the Hopfield-type chaotic neuro-computer hardware system could not obtain the optimal solution of the QAP. Therefore, in the present study, we improve the system performance by adopting a solution construction method, which constructs a feasible solution using the analog internal state values of the chaotic neurons at each iteration. In order to include the construction method in our hardware, we install a multi-channel analog-to-digital conversion system to observe the internal states of the chaotic neurons. We show experimentally that a great improvement in system performance over the original Hopfield-type chaotic neuro-computer is obtained: we obtain the optimal solution for the size-10 QAP in fewer than 1000 iterations. In addition, we propose a guideline for parameter tuning of the chaotic neuro-computer system based on observation of the internal states of several chaotic neurons in the network.
Absolute Points for Multiple Assignment Problems
ERIC Educational Resources Information Center
Adlakha, V.; Kowalski, K.
2006-01-01
An algorithm is presented to solve multiple assignment problems in which a cost is incurred only when an assignment is made at a given cell. The proposed method recursively searches for single/group absolute points to identify cells that must be loaded in any optimal solution. Unlike other methods, the first solution is the optimal solution. The…
Particle Swarm Optimization Toolbox
NASA Technical Reports Server (NTRS)
Grant, Michael J.
2010-01-01
The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on both the SOPSO and MOPSO; a GA was included mainly for comparison purposes, and the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns to search the trade space for the optimal solution or optimal trade in competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwinian evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space. These offspring contain traits from both parents; the algorithm relies on this combination of traits to produce solutions that improve on either parent. As the algorithm progresses, individuals that hold these optimal traits emerge as the optimal solutions. Due to the generic design of all optimization algorithms, each algorithm interfaces with a user-supplied objective function. This function serves as a "black box" to the optimizers; its only purpose is to evaluate solutions provided by the optimizers. Hence, the user-supplied function can be a numerical simulation, an analytical function, etc., since its specific detail is of no concern to the optimizer.
These algorithms were originally developed to support entry trajectory and guidance design for the Mars Science Laboratory mission but may be applied to any optimization problem.
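As a hedged sketch of the single-objective PSO scheme described above (the textbook velocity/position update, not the toolbox's actual MATLAB implementation; all parameter values are conventional defaults):

```python
import random

def pso_minimize(f, lo, hi, dim=2, n_particles=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal single-objective PSO: particles track personal bests and a
    global best; velocities blend inertia, cognitive and social pulls.
    The objective f is treated as a black box, as in the toolbox design."""
    random.seed(seed)
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]               # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d] + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return G, gbest

best, val = pso_minimize(lambda x: sum(xi * xi for xi in x), -5.0, 5.0)
print(val)  # near 0 for the sphere function
```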
A random optimization approach for inherent optic properties of nearshore waters
NASA Astrophysics Data System (ADS)
Zhou, Aijun; Hao, Yongshuai; Xu, Kuo; Zhou, Heng
2016-10-01
Traditional methods of water quality sampling are time-consuming and costly, and cannot meet the needs of social development. Hyperspectral remote sensing technology offers good temporal resolution, wide spatial coverage and rich spectral information, giving it great potential for water quality supervision. Via a semi-analytical method, remote sensing information can be related to water quality: the inherent optical properties are used to quantify water quality, and an optical model of the water body is established to analyze its features. Using the stochastic optimization algorithm Threshold Accepting, a global optimization of the unknown model parameters can be performed to obtain the distribution of chlorophyll, dissolved organic matter and suspended particles in the water. By improving the search step of the optimization algorithm, the processing time is markedly reduced, creating room to increase the number of parameters. A refined definition of the optimization steps and criteria makes the whole inversion process more targeted, thus improving the accuracy of inversion. Based on application results for simulated data provided by IOCCG and field data provided by NASA, the model was continuously improved and enhanced. The outcome is a low-cost, effective model for retrieving water quality from hyperspectral remote sensing.
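As a hedged sketch of the Threshold Accepting scheme named above (Dueck and Scheuer's variant of simulated annealing with a deterministic acceptance rule; the threshold schedule, step size, and toy objective below are illustrative assumptions, not the paper's retrieval model):

```python
import random

def threshold_accepting(f, x0, step=0.5, thresholds=None,
                        iters_per_t=200, seed=0):
    """Threshold Accepting: accept any candidate that worsens the objective
    by less than the current threshold T; the thresholds shrink to zero,
    so the final phase is a plain greedy descent."""
    random.seed(seed)
    if thresholds is None:
        thresholds = [1.0, 0.5, 0.2, 0.1, 0.05, 0.0]
    x, fx = list(x0), f(x0)
    best, fbest = x[:], fx
    for T in thresholds:
        for _ in range(iters_per_t):
            cand = [xi + random.uniform(-step, step) for xi in x]
            fc = f(cand)
            if fc - fx < T:        # deterministic rule (no Metropolis coin flip)
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = x[:], fx
    return best, fbest

best, val = threshold_accepting(lambda v: sum(xi * xi for xi in v), [3.0, -4.0])
print(val)  # close to 0 for this convex toy objective
```

The deterministic acceptance rule avoids the exponential evaluations of simulated annealing, which is one reason the method suits expensive inversion objectives.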
Creating value: unifying silos into public health business intelligence.
Davidson, Arthur J
2014-01-01
Through September 2014, federal investments in health information technology have been unprecedented, with more than 25 billion dollars in incentive funds distributed to eligible hospitals and providers. Over 85 percent of eligible United States hospitals and 60 percent of eligible providers have used certified electronic health record (EHR) technology and received Meaningful Use incentive funds (HITECH Act). Certified EHR technology could create new public health (PH) value through novel and rapidly evolving data-use opportunities, never before experienced by PH. The long-standing "silo" approach to funding has fragmented PH programs and departments, but the components for integrated business intelligence (i.e., tools and applications to help users make informed decisions) and maximally reuse data are available now. Challenges faced by PH agencies on the road to integration are plentiful, but an emphasis on PH systems and services research (PHSSR) may identify gaps and solutions for the PH community to address. Technology and system approaches to leverage this information explosion to support a transformed health care system and population health are proposed. By optimizing this information opportunity, PH can play a greater role in the learning health system.
NASA Astrophysics Data System (ADS)
Sahraei, S.; Asadzadeh, M.
2017-12-01
Any modern multi-objective global optimization algorithm should be able to archive a well-distributed set of solutions. While solution diversity in the objective space has been explored extensively in the literature, little attention has been given to solution diversity in the decision space. Selection metrics such as the hypervolume contribution and the crowding distance calculated in the objective space guide the search toward solutions that are well distributed across the objective space. In this study, the diversity of solutions in the decision space is used as the main selection criterion, beside the dominance check, in multi-objective optimization. To this end, currently archived solutions are clustered in the decision space and those in less crowded clusters are given more chance to be selected for generating new solutions. The proposed approach is first tested on benchmark mathematical test problems. Second, it is applied to a hydrologic model calibration problem with more than three objective functions. Results show that the chance of finding a sparser set of high-quality solutions increases, so the analyst receives a well-diversified set of options with the maximum amount of information. Pareto Archived-Dynamically Dimensioned Search, an efficient and parsimonious multi-objective optimization algorithm for model calibration, is utilized in this study.
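As a hedged sketch of decision-space diversity selection (the abstract does not specify the clustering method, so the one-dimensional binning below is a deliberately crude stand-in; the archive values are invented):

```python
import random

def pick_parent(archive, n_bins=4, lo=0.0, hi=1.0, seed=None):
    """Bin archived decision vectors on their first decision variable, then
    draw the next parent from the least crowded non-empty bin, so sparse
    regions of the decision space get more chance to generate offspring."""
    rng = random.Random(seed)
    bins = [[] for _ in range(n_bins)]
    for x in archive:
        i = min(int((x[0] - lo) / (hi - lo) * n_bins), n_bins - 1)
        bins[i].append(x)
    sparsest = min((b for b in bins if b), key=len)
    return rng.choice(sparsest)

# Three solutions crowd the low end of the first variable; one sits alone:
archive = [[0.1, 0.2], [0.12, 0.4], [0.15, 0.9], [0.9, 0.5]]
print(pick_parent(archive, seed=1))  # -> [0.9, 0.5], the isolated solution
```

A real implementation would cluster on the full decision vector (e.g. k-means) rather than a single variable, but the selection pressure toward sparse clusters is the same.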
Successful Design of Learning Solutions Being Situation Aware
ERIC Educational Resources Information Center
Niemelä, Pia; Isomöttönen, Ville; Lipponen, Lasse
2016-01-01
Education is increasingly enhanced by technology, and at the same time, the rapid pace of technology innovation and growing demand of consumers introduces challenges for providers of technological learning solutions. This paper investigates Finnish small and medium size companies who either develop or deliver technological solutions for education.…
Interactive multiobjective optimization for anatomy-based three-dimensional HDR brachytherapy
NASA Astrophysics Data System (ADS)
Ruotsalainen, Henri; Miettinen, Kaisa; Palmgren, Jan-Erik; Lahtinen, Tapani
2010-08-01
In this paper, we present an anatomy-based three-dimensional dose optimization approach for HDR brachytherapy using interactive multiobjective optimization (IMOO). In brachytherapy, the goals are to irradiate a tumor without causing damage to healthy tissue. These goals are often conflicting, i.e. when one target is optimized the other will suffer, and the solution is a compromise between them. IMOO is capable of handling multiple and strongly conflicting objectives in a convenient way. With the IMOO approach, a treatment planner's knowledge is used to direct the optimization process. Thus, the weaknesses of widely used optimization techniques (e.g. defining weights, computational burden and trial-and-error planning) can be avoided, planning times can be shortened and the number of solutions to be calculated is small. Further, plan quality can be improved by finding advantageous trade-offs between the solutions. In addition, our approach offers an easy way to navigate among the obtained Pareto optimal solutions (i.e. different treatment plans). When considering a simulation model of clinical 3D HDR brachytherapy, the number of variables is significantly smaller compared to IMRT, for example. Thus, when solving the model, the CPU time is relatively short. This makes it possible to exploit IMOO to solve a 3D HDR brachytherapy optimization problem. To demonstrate the advantages of IMOO, two clinical examples of optimizing a gynecologic cervix cancer treatment plan are presented.
S.A. Bowe; R.L. Smith; D. Earl Kline; Philip A. Araman
2002-01-01
A nationwide survey of advanced scanning and optimizing technology in the hardwood sawmill industry was conducted in the fall of 1999. Three specific hardwood sawmill technologies were examined that included current edger-optimizer systems, future edger-optimizer systems, and future automated grading systems. The objectives of the research were to determine differences...
NASA Astrophysics Data System (ADS)
Zhu, Zhengfan; Gan, Qingbo; Yang, Xin; Gao, Yang
2017-08-01
We have developed a novel continuation technique to solve optimal bang-bang control for low-thrust orbital transfers, considering the first-order necessary optimality conditions derived from Lawden's primer vector theory. Continuation on the thrust amplitude is mainly described in this paper. Firstly, a finite-thrust transfer with an "On-Off-On" thrusting sequence is modeled using a two-impulse transfer as the initial solution, and the thrust amplitude is then decreased gradually to find an optimal solution with minimum thrust. Secondly, the thrust amplitude is continued from its minimum value toward positive infinity to find the optimal bang-bang control, and a thrust switching principle is employed to determine the control structure by monitoring the variation of the switching function. In the continuation process, a bifurcation of bang-bang control is revealed, and the concept of critical thrust is proposed to illustrate this phenomenon. The same thrust switching principle is also applicable to continuation on other parameters, such as transfer time, orbital phase angle, etc. By this continuation technique, fuel-optimal orbital transfers with variable mission parameters can be found via an automated algorithm, and there is no need to provide an initial guess for the costate variables. Moreover, continuation is implemented in the solution space of bang-bang controls that are either optimal or non-optimal, which shows that a desired solution of bang-bang control can be obtained via continuation on a single parameter starting from an existing bang-bang solution. Finally, numerical examples are presented to demonstrate the effectiveness of the proposed continuation technique.
Specifically, this continuation technique provides an approach to find multiple solutions satisfying the first-order necessary optimality conditions to the same orbital transfer problem, and a continuation strategy is presented as a preliminary approach for solving the bang-bang control of many-revolution orbital transfers.
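As a hedged sketch of the switching structure from primer vector theory that the monitoring above relies on (generic minimum-fuel formulation, not necessarily the paper's exact notation): with primer vector p defined from the velocity costate, the sign of the switching function selects thrust and coast arcs:

```latex
\begin{aligned}
\mathbf{p}(t) &= -\boldsymbol{\lambda}_v(t), \qquad S(t) = \|\mathbf{p}(t)\| - 1,\\
T(t) &=
\begin{cases}
T_{\max}, & S(t) > 0 \quad \text{(thrust arc)},\\
0, & S(t) < 0 \quad \text{(coast arc)}.
\end{cases}
\end{aligned}
```

Tracking where S(t) crosses zero as the thrust amplitude is continued is what lets the algorithm detect changes in the number of thrust arcs, including the bifurcation described above.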
Maestro: an orchestration framework for large-scale WSN simulations.
Riliskis, Laurynas; Osipov, Evgeny
2014-03-18
Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation.
Maestro: An Orchestration Framework for Large-Scale WSN Simulations
Riliskis, Laurynas; Osipov, Evgeny
2014-01-01
Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123
An image-based approach to understanding the physics of MR artifacts.
Morelli, John N; Runge, Val M; Ai, Fei; Attenberger, Ulrike; Vu, Lan; Schmeets, Stuart H; Nitz, Wolfgang R; Kirsch, John E
2011-01-01
As clinical magnetic resonance (MR) imaging becomes more versatile and more complex, it is increasingly difficult to develop and maintain a thorough understanding of the physical principles that govern the changing technology. This is particularly true for practicing radiologists, whose primary obligation is to interpret clinical images and not necessarily to understand complex equations describing the underlying physics. Nevertheless, the physics of MR imaging plays an important role in clinical practice because it determines image quality, and suboptimal image quality may hinder accurate diagnosis. This article provides an image-based explanation of the physics underlying common MR imaging artifacts, offering simple solutions for remedying each type of artifact. Solutions that have emerged from recent technologic advances with which radiologists may not yet be familiar are described in detail. Types of artifacts discussed include those resulting from voluntary and involuntary patient motion, magnetic susceptibility, magnetic field inhomogeneities, gradient nonlinearity, standing waves, aliasing, chemical shift, and signal truncation. With an improved awareness and understanding of these artifacts, radiologists will be better able to modify MR imaging protocols so as to optimize clinical image quality, allowing greater confidence in diagnosis. Copyright © RSNA, 2011.
NASA Astrophysics Data System (ADS)
Brudny, J. F.; Pusca, R.; Roisse, H.
2008-08-01
A considerable number of communities throughout the world, most of them isolated, need hybrid energy solutions for rural electrification or for reducing diesel use. Despite several research projects and demonstrations conducted in recent years, wind-diesel technology remains complex and much too costly. Induction generators are the most robust and common choice for wind energy systems, but this option poses a serious challenge for electrical regulation. When a wind turbine is used in an off-grid configuration, either continuously or intermittently, precise and robust regulation is difficult to attain. The voltage parameter regulation option, as experienced at several remote sites (on islands and in the arctic, for example), is a safe, reliable and relatively simple technology, but it does not optimize the wave quality and creates instabilities. These difficulties arise because no theory is available to describe the system, owing to the inverse nature of the problem. To address and solve the unstable operation of this wind turbine generator, an innovative approach is described, based on a different induction generator single-phase equivalent circuit.
NASA Technical Reports Server (NTRS)
Vajingortin, L. D.; Roisman, W. P.
1991-01-01
The problem of ensuring the required quality of products and/or technological processes is often made more difficult by the absence of a general theory for determining the optimal sets of values of the primary factors, i.e., of the output parameters of the parts and units comprising an object, that ensure the correspondence of the object's parameters to the quality requirements. This is the main reason for the long time taken to finish complex, vital articles. To create such a theory, a number of difficulties must be overcome and the following tasks solved: the creation of reliable and stable mathematical models showing the influence of the primary factors on the output parameters; the development of a new technique for assigning tolerances for primary factors with regard to economic, technological, and other criteria, based on the solution of the main problem; and the well-reasoned assignment of nominal values for primary factors, which serve as the basis for creating tolerances. Each of these tasks is of independent importance. An attempt is made to give solutions to this problem. In its mathematically formalized aspect, this quality-assurance problem is called the multiple inverse problem.
A Synergy-Based Optimally Designed Sensing Glove for Functional Grasp Recognition
Ciotti, Simone; Battaglia, Edoardo; Carbonaro, Nicola; Bicchi, Antonio; Tognetti, Alessandro; Bianchi, Matteo
2016-01-01
Achieving accurate and reliable kinematic hand pose reconstructions represents a challenging task. The main reason for this is the complexity of hand biomechanics, where several degrees of freedom are distributed along a continuous deformable structure. Wearable sensing can represent a viable solution to tackle this issue, since it enables a more natural kinematic monitoring. However, the intrinsic accuracy (as well as the number of sensing elements) of wearable hand pose reconstruction (HPR) systems can be severely limited by ergonomics and cost considerations. In this paper, we combined the theoretical foundations of the optimal design of HPR devices based on hand synergy information, i.e., the inter-joint covariation patterns, with textile goniometers based on knitted piezoresistive fabrics (KPF) technology, to develop, for the first time, an optimally-designed under-sensed glove for measuring hand kinematics. We used only five sensors optimally placed on the hand and completed hand pose reconstruction (described according to a kinematic model with 19 degrees of freedom) leveraging upon synergistic information. The reconstructions we obtained from five different subjects were used to implement an unsupervised method for the recognition of eight functional grasps, showing a high degree of accuracy and robustness. PMID:27271621
NASA Technical Reports Server (NTRS)
Murthy, T. Sreekanta; Kvaternik, Raymond G.
1991-01-01
A NASA/industry rotorcraft structural dynamics program known as Design Analysis Methods for VIBrationS (DAMVIBS) was initiated at Langley Research Center in 1984 with the objective of establishing the technology base needed by the industry for developing an advanced finite-element-based vibrations design analysis capability for airframe structures. As a part of the in-house activities contributing to that program, a study was undertaken to investigate the use of formal, nonlinear programming-based, numerical optimization techniques for airframe vibrations design work. Considerable progress has been made in connection with that study since its inception in 1985. This paper presents a unified summary of the experiences and results of that study. The formulation and solution of airframe optimization problems are discussed. Particular attention is given to describing the implementation of a new computational procedure based on MSC/NASTRAN and CONstrained function MINimization (CONMIN) in a computer program system called DYNOPT for the optimization of airframes subject to strength, frequency, dynamic response, and fatigue constraints. The results from the application of the DYNOPT program to the Bell AH-1G helicopter are presented and discussed.
A Synergy-Based Optimally Designed Sensing Glove for Functional Grasp Recognition.
Ciotti, Simone; Battaglia, Edoardo; Carbonaro, Nicola; Bicchi, Antonio; Tognetti, Alessandro; Bianchi, Matteo
2016-06-02
Achieving accurate and reliable kinematic hand pose reconstructions represents a challenging task. The main reason for this is the complexity of hand biomechanics, where several degrees of freedom are distributed along a continuous deformable structure. Wearable sensing can represent a viable solution to tackle this issue, since it enables a more natural kinematic monitoring. However, the intrinsic accuracy (as well as the number of sensing elements) of wearable hand pose reconstruction (HPR) systems can be severely limited by ergonomics and cost considerations. In this paper, we combined the theoretical foundations of the optimal design of HPR devices based on hand synergy information, i.e., the inter-joint covariation patterns, with textile goniometers based on knitted piezoresistive fabrics (KPF) technology, to develop, for the first time, an optimally-designed under-sensed glove for measuring hand kinematics. We used only five sensors optimally placed on the hand and completed hand pose reconstruction (described according to a kinematic model with 19 degrees of freedom) leveraging upon synergistic information. The reconstructions we obtained from five different subjects were used to implement an unsupervised method for the recognition of eight functional grasps, showing a high degree of accuracy and robustness.
Flexible Fusion Structure-Based Performance Optimization Learning for Multisensor Target Tracking
Ge, Quanbo; Wei, Zhongliang; Cheng, Tianfa; Chen, Shaodong; Wang, Xiangfeng
2017-01-01
Compared with the fixed fusion structure, the flexible fusion structure with mixed fusion methods has better adjustment performance for the complex air task network systems, and it can effectively help the system to achieve the goal under the given constraints. Because of the time-varying situation of the task network system induced by moving nodes and non-cooperative target, and limitations such as communication bandwidth and measurement distance, it is necessary to dynamically adjust the system fusion structure including sensors and fusion methods in a given adjustment period. Aiming at this, this paper studies the design of a flexible fusion algorithm by using an optimization learning technology. The purpose is to dynamically determine the sensors’ numbers and the associated sensors to take part in the centralized and distributed fusion processes, respectively, herein termed sensor subsets selection. Firstly, two system performance indexes are introduced. Especially, the survivability index is presented and defined. Secondly, based on the two indexes and considering other conditions such as communication bandwidth and measurement distance, optimization models for both single target tracking and multi-target tracking are established. Correspondingly, solution steps are given for the two optimization models in detail. Simulation examples are demonstrated to validate the proposed algorithms. PMID:28481243
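The "sensor subsets selection" step described above can be illustrated with a toy combinatorial search: enumerate candidate subsets, discard those exceeding a bandwidth budget, and keep the subset with the best tracking index. The fusion-gain formula and the sensor data below are our illustrative assumptions, not the paper's actual survivability and performance indexes.

```python
from itertools import combinations

def select_sensors(sensors, bandwidth_limit):
    """Enumerate sensor subsets and keep the one with the best tracking
    index that fits the communication-bandwidth budget (toy model)."""
    best_subset, best_score = (), float("-inf")
    names = list(sensors)
    for r in range(1, len(names) + 1):
        for subset in combinations(names, r):
            bw = sum(sensors[s]["bw"] for s in subset)
            if bw > bandwidth_limit:
                continue
            # diminishing-returns fusion gain: detection probabilities
            # combine as 1 - prod(1 - a_i) -- an assumed surrogate index
            miss = 1.0
            for s in subset:
                miss *= 1.0 - sensors[s]["acc"]
            score = 1.0 - miss
            if score > best_score:
                best_subset, best_score = subset, score
    return best_subset, best_score

sensors = {"radar": {"acc": 0.9, "bw": 5},
           "ir":    {"acc": 0.7, "bw": 2},
           "lidar": {"acc": 0.8, "bw": 4}}
subset, score = select_sensors(sensors, bandwidth_limit=7)
```

Exhaustive enumeration is only viable for small sensor counts; the paper's optimization-learning formulation is what makes the selection tractable for realistic task networks.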
A novel method for energy harvesting simulation based on scenario generation
NASA Astrophysics Data System (ADS)
Wang, Zhe; Li, Taoshen; Xiao, Nan; Ye, Jin; Wu, Min
2018-06-01
Energy harvesting networks (EHNs) are a new form of computer network: they convert ambient energy into usable electric energy and supply it as a primary or secondary power source to communication devices. However, most EHN studies describe the energy harvesting process with an analytical probability distribution function, which cannot accurately capture the actual situation. We propose an EHN simulation method based on scenario generation. Firstly, instead of fixing a probability distribution in advance, it uses optimal scenario reduction technology to generate representative single-period scenarios from historical data of the harvested energy. Secondly, it uses a homogeneous simulated annealing algorithm to generate optimal daily energy harvesting scenario sequences, yielding a more accurate simulation of the random characteristics of the energy harvesting network. Then, taking actual wind power data as an example, the accuracy and stability of the method are verified by comparison with the real data. Finally, we present an instance of optimizing network throughput, whose optimal solution and data analysis indicate the feasibility and effectiveness of the proposed method for energy harvesting simulation.
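Scenario reduction, the first step above, is commonly done by backward reduction: repeatedly delete the scenario whose removal is cheapest (its probability times the distance to its nearest neighbour) and transfer its probability to that neighbour. The sketch below applies this standard technique to made-up scalar harvest values; it is not the authors' implementation.

```python
def reduce_scenarios(values, probs, keep):
    """Backward scenario reduction: repeatedly drop the scenario whose
    removal costs least (prob * distance to its nearest neighbour) and
    transfer its probability to that neighbour."""
    vals, p = list(values), list(probs)
    while len(vals) > keep:
        # cost of deleting each scenario
        costs = []
        for i in range(len(vals)):
            d = min(abs(vals[i] - vals[j]) for j in range(len(vals)) if j != i)
            costs.append(p[i] * d)
        i = costs.index(min(costs))
        # nearest surviving neighbour absorbs the probability
        j = min((j for j in range(len(vals)) if j != i),
                key=lambda j: abs(vals[i] - vals[j]))
        p[j] += p[i]
        del vals[i], p[i]
    return vals, p

# four equally likely harvest levels (invented), reduced to two
vals, p = reduce_scenarios([0.0, 1.0, 5.0, 6.0], [0.25] * 4, keep=2)
```

The two clusters (low harvest around 0-1, high harvest around 5-6) each end up represented by one scenario carrying the merged probability mass.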
Optimally Stopped Optimization
NASA Astrophysics Data System (ADS)
Vinci, Walter; Lidar, Daniel
We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known, and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time, optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark the performance of a D-Wave 2X quantum annealer and the HFS solver, a specialized classical heuristic algorithm designed for low tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is one to two orders of magnitude faster than the HFS solver.
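The "objective plus cost-per-call" idea can be sketched numerically: for a probabilistic solver characterized by an empirical sample of outcomes, estimate E[best-of-R objective] + c·R and stop at the R that minimizes it. The toy solver below (hits the optimum 25% of the time) and the cost value are our assumptions, not the paper's benchmark setup.

```python
import random

def expected_total_cost(samples, cost_per_call, repeats, trials=4000, seed=0):
    """Monte-Carlo estimate of E[best-of-R objective] + c*R, using an
    empirical sample of solver outcomes as a stand-in for a real solver."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += min(rng.choice(samples) for _ in range(repeats))
    return total / trials + cost_per_call * repeats

def best_repeats(samples, cost_per_call, max_repeats=15):
    """Pick the number of calls R that minimizes the expected total cost."""
    return min(range(1, max_repeats + 1),
               key=lambda r: expected_total_cost(samples, cost_per_call, r))

# toy solver: returns objective 0 (the optimum) on 25% of calls, else 1
samples = [0.0, 1.0, 1.0, 1.0]
r_opt = best_repeats(samples, cost_per_call=0.05)
```

Analytically, the expected cost here is 0.75^R + 0.05·R, minimized near R = 6; the Monte-Carlo estimate lands in that neighbourhood.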
OPTIMIZING THROUGH CO-EVOLUTIONARY AVALANCHES
DOE Office of Scientific and Technical Information (OSTI.GOV)
S. BOETTCHER; A. PERCUS
2000-08-01
We explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by "self-organized criticality," a concept introduced to describe emergent complexity in many physical systems. In contrast to genetic algorithms, which operate on an entire "gene pool" of possible solutions, extremal optimization successively replaces extremely undesirable elements of a sub-optimal solution with new, random ones. Large fluctuations, called "avalanches," ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements approximation methods inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Such phase transitions are found in the parameter space of most optimization problems and have recently been conjectured to be the origin of some of the hardest instances in computational complexity. We demonstrate how extremal optimization can be implemented for a variety of combinatorial optimization problems. We believe that extremal optimization will be a useful tool in the investigation of phase transitions in combinatorial optimization problems, and hence valuable in elucidating the origin of computational complexity.
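The "replace the extremely undesirable element" loop is simple enough to show directly. Below is a minimal extremal-optimization sketch for graph coloring: the most conflicted node is given a new random color each step, with no temperature or population. The full method ranks elements by local fitness and picks via a power law with exponent tau (its one parameter); this sketch greedily takes the single worst element.

```python
import random

def extremal_optimization(edges, n, colors=3, steps=2000, seed=1):
    """Basic extremal optimization: repeatedly find the most conflicted
    node and give it a new random color, keeping the best configuration
    seen along the resulting 'avalanches' of changes."""
    rng = random.Random(seed)
    state = [rng.randrange(colors) for _ in range(n)]

    def conflicts(node):
        return sum(state[a] == state[b] for a, b in edges if node in (a, b))

    def total():
        return sum(state[a] == state[b] for a, b in edges)

    best, best_e = state[:], total()
    for _ in range(steps):
        worst = max(range(n), key=conflicts)   # extremely undesirable element
        state[worst] = rng.randrange(colors)   # replace with a random value
        e = total()
        if e < best_e:
            best, best_e = state[:], e
    return best, best_e

edges = [(0, 1), (1, 2), (0, 2)]   # a triangle is properly 3-colorable
best, best_e = extremal_optimization(edges, n=3)
```

Unlike simulated annealing, the current state is allowed to get worse at any time; only the record of the best configuration seen is monotone.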
2013-01-01
Background: The main aim of China's health care system reform was to help the decision maker find the optimal solution to China's institutional problem of health care provider selection. A pilot health care provider research system was recently organized in China's health care system that could efficiently collect, from various experts, the data needed to determine that optimal solution. The purpose of this study was therefore to apply an optimal implementation methodology to help the decision maker effectively distill various experts' views into optimal solutions to this problem with the support of this pilot system. Methods: After the general framework of China's institutional problem of health care provider selection was established, this study collaborated with the National Bureau of Statistics of China to commission a large-scale 2009-2010 national expert survey (n = 3,914) through the organization of a pilot health care provider research system, the first of its kind in China, and the analytic network process (ANP) implementation methodology was adopted to analyze the survey dataset. Results: The market-oriented health care provider approach was the optimal solution from the doctors' point of view; the traditional government regulation-oriented approach was the optimal solution from the points of view of pharmacists, hospital administrators, and health officials in health administration departments; and the public-private partnership (PPP) approach was the optimal solution from the points of view of nurses, officials in medical insurance agencies, and health care researchers.
Conclusions: The data collected through a pilot health care provider research system in the 2009-2010 national expert survey could help the decision maker effectively distill various experts' views into optimal solutions to China's institutional problem of health care provider selection. PMID:23557082
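The ANP methodology used above derives priority weights from pairwise-comparison judgements via the principal eigenvector of the comparison matrix. The sketch below shows that core priority-weighting step with power iteration on an invented two-alternative judgement; the full ANP additionally builds a supermatrix of interdependent clusters, which is not reproduced here.

```python
def ahp_priorities(M, iters=100):
    """Power iteration on a pairwise-comparison matrix to get its
    principal eigenvector -- the basic priority-weighting step that
    AHP/ANP build on (the full ANP supermatrix is much richer)."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [v / s for v in w]
    return w

# "alternative A is judged twice as preferable as B" (made-up judgement)
weights = ahp_priorities([[1.0, 2.0], [0.5, 1.0]])
```

For a perfectly consistent matrix like this one, the normalized weights (2/3, 1/3) are recovered exactly; inconsistent expert judgements converge to the best-compromise eigenvector instead.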
Glick, Meir; Rayan, Anwar; Goldblum, Amiram
2002-01-01
The problem of global optimization is pivotal in a variety of scientific fields. Here, we present a robust stochastic search method that is able to find the global minimum for a given cost function, as well as, in most cases, any number of best solutions for very large combinatorial “explosive” systems. The algorithm iteratively eliminates variable values that contribute consistently to the highest end of a cost function's spectrum of values for the full system. Values that have not been eliminated are retained for a full, exhaustive search, allowing the creation of an ordered population of best solutions, which includes the global minimum. We demonstrate the ability of the algorithm to explore the conformational space of side chains in eight proteins, with 54 to 263 residues, to reproduce a population of their low energy conformations. The 1,000 lowest energy solutions are identical in the stochastic (with two different seed numbers) and full, exhaustive searches for six of eight proteins. The others retain the lowest 141 and 213 (of 1,000) conformations, depending on the seed number, and the maximal difference between stochastic and exhaustive is only about 0.15 kcal/mol. The energy gap between the lowest and highest of the 1,000 low-energy conformers in eight proteins is between 0.55 and 3.64 kcal/mol. This algorithm offers real opportunities for solving problems of high complexity in structural biology and in other fields of science and technology. PMID:11792838
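The elimination idea can be sketched on a small separable problem: sample random assignments, measure how over-represented each (variable, value) pair is in the worst quantile of the cost spectrum, drop strongly over-represented values, and search the pruned space exhaustively. The threshold, sample sizes, and cost function below are our illustrative assumptions, not the published protocol.

```python
import random
from itertools import product

def stochastic_elimination(domains, cost, samples=3000, worst_frac=0.25,
                           elim_ratio=2.0, seed=0):
    """Sketch of iterative value elimination: values whose frequency in
    the worst quantile exceeds elim_ratio times their overall frequency
    are discarded; the remainder is searched exhaustively."""
    rng = random.Random(seed)
    n = len(domains)
    pool = [[rng.choice(domains[i]) for i in range(n)] for _ in range(samples)]
    pool.sort(key=cost)
    worst = pool[int(samples * (1 - worst_frac)):]
    keep = []
    for i in range(n):
        kept = [v for v in domains[i]
                if (sum(a[i] == v for a in worst) / len(worst))
                   / max(sum(a[i] == v for a in pool) / samples, 1e-12)
                   < elim_ratio]
        keep.append(kept or list(domains[i]))    # never empty a domain
    return min(product(*keep), key=lambda a: cost(a))

target = (1, 2, 4)
cost = lambda a: sum(abs(x - t) for x, t in zip(a, target))
best = stochastic_elimination([range(5)] * 3, cost)
```

Values at the optimum appear in the worst quantile less often than average (ratio below 1), so they survive the cut and the final exhaustive search recovers the global minimum over a much smaller space.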
Shan, Yi-chu; Zhang, Yu-kui; Zhao, Rui-huan
2002-07-01
In high performance liquid chromatography, multi-composition gradient elution is necessary for the separation of complex samples such as environmental and biological samples. Multivariate stepwise gradient elution is one of the most efficient elution modes, because it combines the high selectivity of a multi-composition mobile phase with the shorter analysis time of gradient elution. In practical separations, the separation selectivity can be effectively adjusted by using a ternary mobile phase. To optimize these parameters, the retention equation of the samples must first be obtained. Traditionally, several isocratic experiments are used to obtain the retention equation of each solute; however, this is time-consuming, especially for complex samples with a wide range of polarity. A new method for the fast optimization of ternary stepwise gradient elution is proposed based on the migration rule of the solute in the column. First, the coefficients of the retention equation of each solute are obtained by running several linear gradient experiments; then the optimal separation conditions are searched according to a hierarchical chromatography response function that acts as the optimization criterion. For each organic modifier, two initial linear gradient experiments are used to obtain the primary coefficients of the retention equation of each solute, so for a ternary mobile phase only four linear gradient runs are needed. The retention times of the solutes under an arbitrary mobile phase composition can then be predicted. The initial optimal mobile phase composition is obtained by resolution mapping over all of the solutes, and the hierarchical chromatography response function is used to evaluate separation efficiency and search for the optimal elution conditions.
In subsequent optimization, the migration distance of each solute in the column is considered in deciding the mobile phase composition and the duration of the later steps, until all the solutes are eluted; thus the first stepwise gradient elution conditions are predicted. If the resolution of the samples under the predicted optimal separation conditions is satisfactory, the optimization procedure stops; otherwise, the coefficients of the retention equation are adjusted according to the experimental results under the previously predicted elution conditions, and new stepwise gradient elution conditions are predicted repeatedly until satisfactory resolution is obtained. Normally, satisfactory separation conditions can be found after only six experiments using the proposed method, so compared with the traditional optimization approach the time needed to complete the optimization is greatly reduced. The method has been validated by application to the separation of several samples, such as amino acid derivatives and aromatic amines, in which satisfactory separations were obtained with the predicted resolution.
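A retention equation of the kind fitted above is commonly the linear solvent strength (LSS) model, log10 k = log10 kw − S·φ, where φ is the organic-modifier fraction. The sketch below fits that model from two measured retention factors and predicts retention time at a new composition; it illustrates the modeling idea only, not the paper's gradient-based coefficient estimation, and the numbers are invented.

```python
import math

def fit_lss(phi1, k1, phi2, k2):
    """Fit log10 k = log10 kw - S*phi from two measured retention factors."""
    S = (math.log10(k1) - math.log10(k2)) / (phi2 - phi1)
    log_kw = math.log10(k1) + S * phi1
    return log_kw, S

def retention_time(log_kw, S, phi, t0):
    """Predict isocratic retention time at organic fraction phi,
    using t_R = t0 * (1 + k) with k from the fitted LSS model."""
    k = 10 ** (log_kw - S * phi)
    return t0 * (1 + k)

# invented measurements: k = 10 at phi = 0.25, k = 1 at phi = 0.50
log_kw, S = fit_lss(0.25, 10.0, 0.5, 1.0)
t_pred = retention_time(log_kw, S, phi=0.375, t0=1.0)
```

Once log kw and S are known for every solute, retention under any candidate composition can be predicted without further experiments, which is what makes resolution mapping over the whole composition space feasible.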
The research of a solution on locating optimally a station for seismic disasters rescue in a city
NASA Astrophysics Data System (ADS)
Yao, Qing-Lin
1995-02-01
When stations for seismic disaster rescue (or similar facilities) are to be sited on a communication network, the general absolute center of a graph must be found in order to reduce the required number of stations and running parameters, and to establish a station that is optimal with respect to the distribution of rescue arrival times. The existing solution to this problem was proposed by Edward (1978); however, it contains a serious deviation. In this article, the work of Edward (1978) is developed in both formula and figure, and a more correct solution is proposed and proved. The result from the new solution is then contrasted with that from the old one in an instance of optimally locating a station for seismic disaster rescue.
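The absolute center differs from the vertex center in that the station may sit anywhere along an edge, not only at a node. A minimal numerical sketch (not the article's closed-form method): compute all-pairs shortest paths, then scan candidate points along each edge and minimize the worst-case travel time.

```python
def floyd_warshall(n, edges):
    """All-pairs shortest paths on an undirected weighted graph."""
    INF = float("inf")
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def absolute_center(n, edges, steps=100):
    """Approximate the absolute center: try candidate points along every
    edge and minimize the eccentricity (worst-case travel time)."""
    d = floyd_warshall(n, edges)
    best = (float("inf"), None)
    for u, v, w in edges:
        for s in range(steps + 1):
            x = w * s / steps          # distance from u along edge (u, v)
            ecc = max(min(d[u][t] + x, d[v][t] + (w - x)) for t in range(n))
            best = min(best, (ecc, (u, v, x)))
    return best

edges = [(0, 1, 1.0), (1, 2, 1.0)]   # path graph 0 -- 1 -- 2
ecc, (u, v, x) = absolute_center(3, edges)
```

On this path graph the optimum sits at the middle vertex with worst-case travel time 1; on graphs with odd cycles the discretized scan will instead land at an interior point of an edge, which is exactly why the absolute center, rather than the vertex center, is the right notion for siting a rescue station.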
Sensitivity-Based Guided Model Calibration
NASA Astrophysics Data System (ADS)
Semnani, M.; Asadzadeh, M.
2017-12-01
A common practice in automatic calibration of hydrologic models is to apply sensitivity analysis prior to global optimization in order to reduce the number of decision variables (DVs) by identifying the most sensitive ones. This two-stage process aims to improve optimization efficiency. However, parameter sensitivity information can also be used to enhance the ability of optimization algorithms to find good-quality solutions in fewer solution evaluations. This improvement can be achieved by focusing the optimization on sampling the most sensitive parameters in each iteration. In this study, the selection process of the dynamically dimensioned search (DDS) optimization algorithm is enhanced by utilizing a sensitivity analysis method to put more emphasis on the most sensitive decision variables for perturbation. The performance of DDS with sensitivity information is compared to the original version of DDS for different mathematical test functions and a model calibration case study. Overall, the results show that DDS with sensitivity information finds nearly the same solutions as original DDS, but in significantly fewer solution evaluations.
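In standard DDS, each decision variable joins the perturbed set with probability 1 − ln(i)/ln(max_iter), which shrinks the search neighbourhood over time. One plausible way to bias selection toward sensitive variables, sketched below, is to scale that probability by a normalized sensitivity weight; this scaling is our illustrative assumption, not necessarily the exact scheme in the abstract.

```python
import math
import random

def dds_candidate(x, bounds, i, max_iter, sens, r=0.2, rng=None):
    """One sensitivity-guided DDS perturbation step. Each decision
    variable joins the perturbed set with the standard DDS probability
    1 - ln(i)/ln(max_iter), scaled here by its normalized sensitivity
    weight (an assumed variant for illustration)."""
    rng = rng or random.Random()
    p = 1.0 - math.log(i) / math.log(max_iter)
    w = [s / max(sens) for s in sens]
    chosen = [j for j in range(len(x)) if rng.random() < p * w[j]]
    if not chosen:                        # DDS always perturbs at least one DV
        chosen = [rng.randrange(len(x))]
    cand = list(x)
    for j in chosen:
        lo, hi = bounds[j]
        cand[j] += rng.gauss(0.0, r * (hi - lo))
        if cand[j] < lo:                  # reflect at the bounds, then clamp
            cand[j] = lo + (lo - cand[j])
        if cand[j] > hi:
            cand[j] = hi - (cand[j] - hi)
        cand[j] = min(max(cand[j], lo), hi)
    return cand

cand = dds_candidate([0.5, 0.5], [(0.0, 1.0)] * 2, i=2, max_iter=100,
                     sens=[1.0, 0.1], rng=random.Random(0))
```

A greedy outer loop (keep the candidate only if it improves the objective) completes the algorithm; here the low-sensitivity second variable is perturbed roughly ten times less often than the first.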
Self-adaptive multi-objective harmony search for optimal design of water distribution networks
NASA Astrophysics Data System (ADS)
Choi, Young Hwan; Lee, Ho Min; Yoo, Do Guen; Kim, Joong Hoon
2017-11-01
In multi-objective optimization computing, it is important to assign suitable parameters to each optimization problem to obtain better solutions. In this study, a self-adaptive multi-objective harmony search (SaMOHS) algorithm is developed to apply the parameter-setting-free technique, which is an example of a self-adaptive methodology. The SaMOHS algorithm attempts to remove some of the inconvenience from parameter setting and selects the most adaptive parameters during the iterative solution search process. To verify the proposed algorithm, an optimal least cost water distribution network design problem is applied to three different target networks. The results are compared with other well-known algorithms such as multi-objective harmony search and the non-dominated sorting genetic algorithm-II. The efficiency of the proposed algorithm is quantified by suitable performance indices. The results indicate that SaMOHS can be efficiently applied to the search for Pareto-optimal solutions in a multi-objective solution space.
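The parameter-setting-free idea, re-estimating HMCR and PAR from the search itself rather than fixing them, can be shown on a single-objective toy problem: track the provenance (random / memory / pitch) of values in harmonies good enough to be accepted, and rebuild the rates from those counts. This is a simplified single-objective sketch under our own assumptions, not the SaMOHS algorithm.

```python
import random

def harmony_search_psf(obj, bounds, hms=10, iters=500, seed=3):
    """Minimal harmony search with a parameter-setting-free flavour:
    HMCR and PAR are re-estimated from the provenance of values in
    harmonies that were good enough to enter the memory."""
    rng = random.Random(seed)
    dim = len(bounds)
    mem = [[rng.uniform(*bounds[d]) for d in range(dim)] for _ in range(hms)]
    counts = {"random": 1.0, "memory": 9.0, "pitch": 3.0}  # prior ~ defaults
    for _ in range(iters):
        hmcr = (counts["memory"] + counts["pitch"]) / sum(counts.values())
        par = counts["pitch"] / (counts["memory"] + counts["pitch"])
        new, new_ops = [], []
        for lo, hi in bounds:
            d = len(new)
            if rng.random() < hmcr:
                v, op = mem[rng.randrange(hms)][d], "memory"
                if rng.random() < par:              # pitch adjustment
                    v += rng.uniform(-0.05, 0.05) * (hi - lo)
                    op = "pitch"
            else:
                v, op = rng.uniform(lo, hi), "random"
            new.append(min(max(v, lo), hi))
            new_ops.append(op)
        worst = max(range(hms), key=lambda k: obj(mem[k]))
        if obj(new) < obj(mem[worst]):
            mem[worst] = new
            for op in new_ops:                      # adapt rates from survivors
                counts[op] += 1.0
    return min(mem, key=obj)

best = harmony_search_psf(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 2)
```

As the memory fills with good harmonies, operations that keep producing accepted values dominate the counts, so the rates drift toward values suited to the problem instead of being hand-tuned.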
NASA Astrophysics Data System (ADS)
Mabood, Fazal; Hussain, Z.; Haq, H.; Arian, M. B.; Boqué, R.; Khan, K. M.; Hussain, K.; Jabeen, F.; Hussain, J.; Ahmed, M.; Alharasi, A.; Naureen, Z.; Hussain, H.; Khan, A.; Perveen, S.
2016-01-01
A new UV-Visible spectroscopic method assisted with microwave for the determination of glucose in pharmaceutical formulations was developed. In this study glucose solutions were oxidized by ammonium molybdate in the presence of microwave energy and reacted with aniline to produce a colored solution. Optimum conditions of the reaction including wavelength, temperature, and pH of the medium and relative concentration ratio of the reactants were investigated. It was found that the optimal wavelength for the reaction is 610 nm, the optimal reaction time is 80 s, the optimal reaction temperature is 160 °C, the optimal reaction pH is 4, and the optimal concentration ratio aniline/ammonium molybdate solution was found to be 1:1. The limits of detection and quantification of the method are 0.82 and 2.75 ppm for glucose solution, respectively. The use of microwaves improved the speed of the method while the use of aniline improved the sensitivity of the method by shifting the wavelength.
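Detection and quantification limits of the kind reported above are commonly estimated from the residual standard deviation of a least-squares calibration line, as LOD = 3.3·s/slope and LOQ = 10·s/slope. The sketch below applies those standard formulas to invented absorbance data; the reported 0.82 and 2.75 ppm values come from the authors' own calibration, not from this example.

```python
def linfit(x, y):
    """Ordinary least-squares slope/intercept for a calibration line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def lod_loq(x, y):
    """LOD/LOQ from the residual standard deviation of the calibration
    line (3.3*s/slope and 10*s/slope, the usual ICH-style estimates)."""
    slope, icept = linfit(x, y)
    resid = [b - (slope * a + icept) for a, b in zip(x, y)]
    s = (sum(r * r for r in resid) / (len(x) - 2)) ** 0.5
    return 3.3 * s / slope, 10 * s / slope

x = [1.0, 2.0, 3.0, 4.0, 5.0]       # concentrations (invented)
y = [2.1, 3.9, 6.0, 8.1, 9.9]       # measured absorbances (invented)
lod, loq = lod_loq(x, y)
```

The same calibration line also converts the measured absorbance of an unknown sample back to a concentration, which is how the assay is applied in practice.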
NASA Astrophysics Data System (ADS)
Chai, Runqi; Savvaris, Al; Tsourdos, Antonios
2016-06-01
In this paper, a fuzzy physical programming (FPP) method is introduced for solving the multi-objective Space Manoeuvre Vehicle (SMV) skip trajectory optimization problem based on hp-adaptive pseudospectral methods. The dynamic model of the SMV is elaborated and, by employing hp-adaptive pseudospectral methods, the problem is transcribed into a nonlinear programming (NLP) problem. According to the mission requirements, solutions are calculated for each single-objective scenario. To obtain a compromised solution across targets, the FPP model is proposed: the preference function is established with consideration of the fuzzy factors of the system, such that a properly compromised trajectory can be acquired. In addition, NSGA-II is used to obtain the Pareto-optimal solution set and verify the Pareto optimality of the FPP solution. Simulation results indicate that the proposed method is effective and feasible for the multi-objective skip trajectory optimization of the SMV.
Rapid Prototyping Technology for Manufacturing GTE Turbine Blades
NASA Astrophysics Data System (ADS)
Balyakin, A. V.; Dobryshkina, E. M.; Vdovin, R. A.; Alekseev, V. P.
2018-03-01
The conventional approach to manufacturing turbine blades by investment casting is expensive and time-consuming, as it takes a lot of time to make geometrically precise and complex wax patterns. Turbine blade manufacturing in pilot production can be sped up by accelerating the casting process while keeping the geometric precision of the final product. This paper compares the rapid prototyping method (casting the wax pattern composition into elastic silicone molds) to the conventional technology. Analysis of the size precision of blade casts shows that silicon-mold casting features sufficient geometric precision. Thus, this method for making wax patterns can be a cost-efficient solution for small-batch or pilot production of turbine blades for gas-turbine units (GTU) and gas-turbine engines (GTE). The paper demonstrates how additive technology and thermographic analysis can speed up the cooling of wax patterns in silicone molds. This is possible at an optimal temperature and solidification time, which make the process more cost-efficient while keeping the geometric quality of the final product.
Drug-eluting stents. Insights from invasive imaging technologies.
Honda, Yasuhiro
2009-08-01
Drug-eluting stents (DES) represent a revolutionary technology in their unique ability to provide both mechanical and biological solutions simultaneously to the target lesion. As a result of biological effects from the pharmacological agents and interaction of DES components with the arterial wall, considerable differences exist between DES and conventional bare metal stents (BMS), yet some of the old lessons learned in the BMS era remain clinically significant. In this context, contrast angiography provides very little information about in vivo device properties and their biomechanical effects on the arterial wall. In contrast, current catheter-based imaging tools, such as intravascular ultrasound, optical coherence tomography, and intracoronary angioscopy can offer unique insights into DES through direct assessment of the device and treated vessel in the clinical setting. This article reviews these insights from current DES with particular focus on performance and safety characteristics as well as discussing an optimal deployment technique, based upon findings obtained through the use of the invasive imaging technologies.
Methods for optimizing solutions when considering group arguments by team of experts
NASA Astrophysics Data System (ADS)
Chernyi, Sergei; Budnik, Vlad
2017-11-01
The article is devoted to methods of expert evaluation, presented from the standpoint of precedent structures. An aspect of the mathematical basis for constructing a decision-analysis component is considered. Conventional approaches leave out any identification of operators' knowledge and skill in simulating organizational and manufacturing situations and in taking efficient managerial decisions; they do not enable the identification and assessment of that knowledge on the basis of information-rich, low-loss methods and information technologies. Hence the problem is to research and develop a methodology for the systemic identification of professional, problem-focused knowledge acquired by employees operating adaptive automated systems of training management (AASTM operators), which should also further the theory and practice of the intelligence-related aspects thereof.