Sample records for optimizing production inputs

  1. Reliability of system for precise cold forging

    NASA Astrophysics Data System (ADS)

    Krušič, Vid; Rodič, Tomaž

    2017-07-01

    The influence of the scatter of the principal input parameters of the forging system on the dimensional accuracy of the product and on the tool life in a closed-die forging process is presented in this paper. The scatter of the essential input parameters for the closed-die upsetting process was adjusted to the maximal values that still enabled reliable production of a dimensionally accurate product at optimal tool life. An operating window was created that contains the maximal scatter of the principal input parameters for the closed-die upsetting process which still ensures the desired dimensional accuracy of the product and the optimal tool life. Application of this adjustment of the process input parameters is shown on the example of a mass-produced inner race of a homokinetic joint. High productivity in the manufacture of elements by cold massive extrusion is often achieved by multiple forming operations performed simultaneously on the same press. By redesigning the time sequences of the forming operations in the multistage forming of a starter barrel during the working stroke, the course of the resultant force is optimized.

  2. Cocoa based agroforestry: An economic perspective in resource scarcity conflict era

    NASA Astrophysics Data System (ADS)

    Jumiyati, S.; Arsyad, M.; Rajindra; Pulubuhu, D. A. T.; Hadid, A.

    2018-05-01

    Agricultural development aimed at food self-sufficiency through increased production alone has caused environmental disasters, the impact of the exploitation of natural resources, resulting in resource scarcity. This paper describes the optimization of land area, revenue, cost (production inputs), income, and use of production inputs based on economic and ecological aspects. Sustainable farming that integrates environmental and economic considerations can be achieved through farmers' decision making aimed at optimizing revenue on the basis of cost optimization, using a cocoa-based agroforestry model as a path to resource conflict resolution.

  3. Irrigation timing and volume affect growth of container-grown maples

    USDA-ARS's Scientific Manuscript database

    Container nursery production requires large inputs of water and nutrients but frequently irrigation inputs exceed plant demand and lack application precision or are not applied at optimal times for plant production. The results from this research can assist producers in developing irrigation manage...

  4. Estimating Most Productive Scale Size in Data Envelopment Analysis with Integer Value Data

    NASA Astrophysics Data System (ADS)

    Dwi Sari, Yunita; Angria S, Layla; Efendi, Syahril; Zarlis, Muhammad

    2018-01-01

    The most productive scale size (MPSS) is a measure of how resources should be organized and utilized to achieve optimal results, and it can serve as a benchmark for the success of an industry or company in producing goods or services. To estimate MPSS, each decision making unit (DMU) must consider its level of input-output efficiency; with the data envelopment analysis (DEA) method, a DMU can identify reference units that help locate the causes of, and remedies for, inefficiency and thereby optimize productivity, the main advantage in managerial applications. DEA is therefore chosen for estimating MPSS, focusing on integer-valued input data with the CCR and BCC models. The purpose of this research is to find the best solution for estimating MPSS with integer-valued input data in the DEA method.
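    The efficiency score underlying such a DEA estimate can be sketched as the input-oriented CCR linear program, solved here with SciPy on a small hypothetical integer data set (not the paper's data):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical integer-valued data: one input and one output per DMU
# (illustrative only, not the paper's data set).
X = np.array([[2.0], [4.0], [8.0], [6.0]])   # inputs, one row per DMU
Y = np.array([[2.0], [5.0], [8.0], [3.0]])   # outputs, one row per DMU

def ccr_efficiency(k):
    """Input-oriented CCR score of DMU k:
    min theta  s.t.  X^T lam <= theta * x_k,  Y^T lam >= y_k,  lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n)
    c[0] = 1.0                                    # minimize theta
    A_in = np.hstack([-X[k].reshape(m, 1), X.T])  # X^T lam - theta*x_k <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])   # -Y^T lam <= -y_k
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[k]]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

scores = [round(ccr_efficiency(k), 3) for k in range(len(X))]
print(scores)  # DMU 1 (best output/input ratio 5/4) scores 1.0
```

    The BCC model adds the convexity constraint on the intensity variables, and comparing CCR and BCC scores indicates scale efficiency; a DMU at MPSS is one that is efficient under the constant-returns CCR model.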

  5. Maximize, minimize or target - optimization for a fitted response from a designed experiment

    DOE PAGES

    Anderson-Cook, Christine Michaela; Cao, Yongtao; Lu, Lu

    2016-04-01

    One of the common goals of running and analyzing a designed experiment is to find a location in the design space that optimizes the response of interest. Depending on the goal of the experiment, we may seek to maximize or minimize the response, or set the process to hit a particular target value. After the designed experiment, a response model is fitted and the optimal settings of the input factors are obtained based on the estimated response model. Furthermore, the suggested optimal settings of the input factors are then used in the production environment.
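    A minimal sketch of this workflow, assuming a single factor and a quadratic response model (all data below are invented for illustration):

```python
import numpy as np

# Hypothetical single-factor designed experiment: coded settings x and
# the measured response y.
x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
y = np.array([1.0, 2.2, 3.0, 2.9, 1.8])

# Fit a second-order response model y = b2*x^2 + b1*x + b0.
b2, b1, b0 = np.polyfit(x, y, 2)

# For a maximization goal, the stationary point of the fitted quadratic,
# x* = -b1 / (2*b2), is the suggested optimal setting (valid when b2 < 0,
# i.e., the fit is concave).
x_star = -b1 / (2.0 * b2)
print(round(x_star, 3))
```

    For a target-value goal, one would instead solve the fitted model for the setting whose predicted response equals the target.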

  6. Optimizing model: insemination, replacement, seasonal production, and cash flow.

    PubMed

    DeLorenzo, M A; Spreen, T H; Bryan, G R; Beede, D K; Van Arendonk, J A

    1992-03-01

    Dynamic programming to solve the Markov decision process problem of optimal insemination and replacement decisions was adapted to address large dairy herd management decision problems in the US. Expected net present values of cow states (151,200) were used to determine the optimal policy. States were specified by class of parity (n = 12), production level (n = 15), month of calving (n = 12), month of lactation (n = 16), and days open (n = 7). Methodology optimized decisions based on net present value of an individual cow and all replacements over a 20-yr decision horizon. Length of decision horizon was chosen to ensure that optimal policies were determined for an infinite planning horizon. Optimization took 286 s of central processing unit time. The final probability transition matrix was determined, in part, by the optimal policy. It was estimated iteratively to determine post-optimization steady state herd structure, milk production, replacement, feed inputs and costs, and resulting cash flow on a calendar month and annual basis if optimal policies were implemented. Implementation of the model included seasonal effects on lactation curve shapes, estrus detection rates, pregnancy rates, milk prices, replacement costs, cull prices, and genetic progress. Other inputs included calf values, values of dietary TDN and CP per kilogram, and discount rate. Stochastic elements included conception (and, thus, subsequent freshening), cow milk production level within herd, and survival. Validation of optimized solutions was by separate simulation model, which implemented policies on a simulated herd and also described herd dynamics during transition to optimized structure.

  7. Optimizing anaerobic soil disinfestation for fresh market tomato production: Nematode and weed control, yield, and fruit quality

    USDA-ARS's Scientific Manuscript database

    Anaerobic soil disinfestation (ASD) has potential as an alternative to chemical fumigation for controlling soilborne pathogens and pests. Previously, control of nutsedge was sub-optimal, and the quantity of inputs required for commercial production was an impediment to adoption. Field studies were conducted i...

  8. First parity evaluation of peak milk yield for range cows developed in the same ecophysiological system but receiving different concentrations of harvested feed inputs

    USDA-ARS's Scientific Manuscript database

    Reduction of harvested feed inputs during heifer development could optimize range livestock production and improve economic feasibility. The objective for this two year study was to measure milk production (kg/d) and milk constituent concentrations (g/d) for 16 primiparous beef cows each year that w...

  9. A multiple objective optimization approach to quality control

    NASA Technical Reports Server (NTRS)

    Seaman, Christopher Michael

    1991-01-01

    The use of product quality as the performance criterion for manufacturing system control is explored. The goal in manufacturing, for economic reasons, is to optimize product quality. The problem is that since quality is a rather nebulous product characteristic, there is seldom an analytic function that can be used as a measure, so standard control approaches, such as optimal control, cannot readily be applied. A second problem with optimizing product quality is that it is typically measured along many dimensions: there are many aspects of quality which must be optimized simultaneously. Very often these different aspects are incommensurate and competing. The concept of optimality must then include accepting tradeoffs among the different quality characteristics. These problems are addressed using multiple objective optimization. It is shown that the quality control problem can be defined as a multiple objective optimization problem, and a controller structure is defined on this basis. Then, an algorithm is presented which can be used by an operator to interactively find the best operating point. Essentially, the algorithm uses process data to provide the operator with two pieces of information: (1) if it is possible to simultaneously improve all quality criteria, determine what changes to the process inputs or controller parameters should be made to do this; and (2) if it is not possible to improve all criteria, and the current operating point is not a desirable one, select a criterion in which a tradeoff should be made, and make input changes that improve all other criteria. If no tradeoff has to be made to move to a new operating point, the process was not operating at an optimal point in any sense. This algorithm ensures that operating points are optimal in some sense and provides the operator with information about tradeoffs when seeking the best operating point.
The multiobjective algorithm was implemented in two different injection molding scenarios: tuning of process controllers to meet specified performance objectives and tuning of process inputs to meet specified quality objectives. Five case studies are presented.
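    The tradeoff logic in steps (1) and (2) rests on Pareto dominance, which can be sketched directly (the quality measurements below are hypothetical):

```python
def dominates(a, b):
    """True if point a is at least as good as b in every quality
    criterion (here: larger is better) and strictly better in one."""
    return (all(x >= y for x, y in zip(a, b)) and
            any(x > y for x, y in zip(a, b)))

# Hypothetical quality measurements (criterion_1, criterion_2) at
# candidate operating points of a process.
points = [(0.9, 0.2), (0.7, 0.7), (0.2, 0.9), (0.5, 0.5)]

# Pareto-optimal points: those not dominated by any other candidate.
# Moving away from any of them requires a tradeoff between criteria;
# a dominated point can still be improved in all criteria at once.
pareto = [p for p in points if not any(dominates(q, p) for q in points)]
print(pareto)
```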

  10. Analysis and selection of optimal function implementations in massively parallel computer

    DOEpatents

    Archer, Charles Jens [Rochester, MN; Peters, Amanda [Rochester, MN; Ratterman, Joseph D [Rochester, MN

    2011-05-31

    An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
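    A minimal sketch of the idea, with hypothetical implementations and thresholds: the collected performance data reduces to a table mapping input regimes to the best implementation, and the generated selection code dispatches through that table.

```python
# Two hypothetical implementations of the same function; in practice
# these would differ (e.g., serial vs. parallel strategies) and the
# regime table would come from benchmarking under varying input
# dimensions.
def small_impl(xs):
    return sorted(xs)

def large_impl(xs):
    return sorted(xs)

# Performance table derived from collected timing data:
# (upper bound on input size, fastest implementation in that regime).
perf_table = [(64, small_impl), (float("inf"), large_impl)]

def dispatch(xs):
    """Selection code: call the implementation recorded as fastest for
    the regime that covers this input."""
    for bound, impl in perf_table:
        if len(xs) <= bound:
            return impl(xs)

print(dispatch([3, 1, 2]))
```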

  11. PM(10) emission forecasting using artificial neural networks and genetic algorithm input variable optimization.

    PubMed

    Antanasijević, Davor Z; Pocajt, Viktor V; Povrenović, Dragan S; Ristić, Mirjana Đ; Perić-Grujić, Aleksandra A

    2013-01-15

    This paper describes the development of an artificial neural network (ANN) model for the forecasting of annual PM(10) emissions at the national level, using widely available sustainability and economical/industrial parameters as inputs. The inputs for the model were selected and optimized using a genetic algorithm and the ANN was trained using the following variables: gross domestic product, gross inland energy consumption, incineration of wood, motorization rate, production of paper and paperboard, sawn wood production, production of refined copper, production of aluminum, production of pig iron and production of crude steel. The wide availability of the input parameters used in this model can overcome a lack of data and basic environmental indicators in many countries, which can prevent or seriously impede PM emission forecasting. The model was trained and validated with the data for 26 EU countries for the period from 1999 to 2006. PM(10) emission data, collected through the Convention on Long-range Transboundary Air Pollution - CLRTAP and the EMEP Programme or as emission estimations by the Regional Air Pollution Information and Simulation (RAINS) model, were obtained from Eurostat. The ANN model has shown very good performance and demonstrated that the forecast of PM(10) emission up to two years can be made successfully and accurately. The mean absolute error for two-year PM(10) emission prediction was only 10%, which is more than three times better than the predictions obtained from the conventional multi-linear regression and principal component regression models that were trained and tested using the same datasets and input variables. Copyright © 2012 Elsevier B.V. All rights reserved.
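    The genetic-algorithm input selection can be sketched on synthetic data; everything below (the candidate variables, the fitness penalty, and the GA settings) is an assumption for illustration, not the paper's configuration:

```python
import random
import numpy as np

rng = np.random.default_rng(0)
random.seed(0)

# Synthetic stand-in for the candidate inputs: 8 variables, of which
# only variables 0 and 3 actually drive the target.
X = rng.normal(size=(120, 8))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.05 * rng.normal(size=120)

def fitness(mask):
    """Negative residual error of a least-squares fit on the selected
    inputs, with a small penalty per input to favour parsimony."""
    cols = [i for i in range(8) if mask[i]]
    if not cols:
        return -1e9
    A = X[:, cols]
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return -np.mean(resid ** 2) - 0.01 * len(cols)

# Minimal GA over bit-string encodings of input subsets: tournament
# selection, uniform crossover, bit-flip mutation.
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(30)]
for _ in range(40):
    new = []
    for _ in range(30):
        a = max(random.sample(pop, 3), key=fitness)
        b = max(random.sample(pop, 3), key=fitness)
        child = [random.choice(g) for g in zip(a, b)]
        if random.random() < 0.2:
            i = random.randrange(8)
            child[i] = 1 - child[i]
        new.append(child)
    pop = new

best = max(pop, key=fitness)
print(best)  # the informative variables 0 and 3 should be selected
```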

  12. Attributing Crop Production in the United States Using Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Ma, Y.; Zhang, Z.; Pan, B.

    2017-12-01

    Crop production plays a key role in supporting life and the economy and in shaping the environment. It is influenced on one hand by natural factors, including precipitation, temperature, and energy, and shaped on the other hand by the investment of fertilizers, pesticides, and human labor. Successfully attributing crop production to different factors can help optimize resources and improve productivity. Based on the meteorological records from the National Center for Environmental Prediction and state-wise crop production data provided by the United States Department of Agriculture Economic Research Service, an artificial neural network was constructed to connect crop production with precipitation and temperature anomalies, capital input, labor input, energy input, pesticide consumption, and fertilizer consumption. Sensitivity analyses were carried out to attribute their specific influence on crop production for each grid. Results confirmed that the listed factors can generally determine crop production. Different states respond differently to the perturbation of the predictors. Their spatial distribution is visualized and discussed.

  13. An integrated prediction and optimization model of biogas production system at a wastewater treatment facility.

    PubMed

    Akbaş, Halil; Bilgen, Bilge; Turhan, Aykut Melih

    2015-11-01

    This study proposes an integrated prediction and optimization model using multi-layer perceptron neural network and particle swarm optimization techniques. Three different objective functions are formulated. The first is the maximization of methane percentage with a single output; the second is the maximization of biogas production with a single output; the last is the maximization of biogas quality and biogas production with two outputs. Methane percentage, carbon dioxide percentage, and other contents' percentages are used as the biogas quality criteria. Based on the formulated models and data from a wastewater treatment facility, optimal values of the input variables and their corresponding maximum output values are found for each model. It is expected that application of the integrated prediction and optimization models will increase biogas production and biogas quality, and contribute to the quantity of electricity production at the wastewater treatment facility. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1997-01-01

    A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.

  15. The Bi-Directional Prediction of Carbon Fiber Production Using a Combination of Improved Particle Swarm Optimization and Support Vector Machine.

    PubMed

    Xiao, Chuncai; Hao, Kuangrong; Ding, Yongsheng

    2014-12-30

    This paper creates a bi-directional prediction model, based on a support vector machine (SVM) and an improved particle swarm optimization (IPSO) algorithm (SVM-IPSO), to predict both the performance of carbon fiber and the productive parameters. The predictive accuracy of an SVM is mainly dependent on its parameters, so IPSO is exploited to seek the optimal parameters for the SVM in order to improve its prediction capability. Inspired by a cell communication mechanism, we propose IPSO by incorporating information from the global best solution into the search strategy to improve exploitation, and we employ IPSO to establish the bi-directional prediction model: in the forward direction, we take productive parameters as input and property indexes as output; in the backward direction, we take property indexes as input and productive parameters as output, in which case the model becomes a scheme design tool for novel-style carbon fibers. The results on a set of experimental data show that the proposed model can outperform the radial basis function neural network (RNN), basic particle swarm optimization (PSO), and the hybrid genetic algorithm and improved particle swarm optimization (GA-IPSO) method in most of the experiments. In other words, simulation results demonstrate the effectiveness and advantages of the SVM-IPSO model in dealing with this forecasting problem.
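    The parameter-search loop can be sketched with a basic PSO; as a stand-in for training an SVM and measuring validation error, a simple quadratic objective is used here, and all constants are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the SVM validation error as a function of two
# hyperparameters (e.g., log C, log gamma); a real run would train an
# SVM at each point instead of evaluating this quadratic.
def error(p):
    return (p[0] - 1.0) ** 2 + (p[1] + 0.5) ** 2

n, dim, iters = 20, 2, 60
w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive, social weights
x = rng.uniform(-3, 3, (n, dim))   # particle positions
v = np.zeros((n, dim))             # particle velocities
pbest = x.copy()
pbest_f = np.array([error(p) for p in x])
g = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = x + v
    f = np.array([error(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    g = pbest[np.argmin(pbest_f)].copy()

print(np.round(g, 2))  # converges toward the minimizer (1.0, -0.5)
```

    In the paper's IPSO, the velocity update is additionally modified with global-best information inspired by a cell communication mechanism; the sketch keeps only the standard inertia, cognitive, and social terms.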

  16. Optimization benefits analysis in production process of fabrication components

    NASA Astrophysics Data System (ADS)

    Prasetyani, R.; Rafsanjani, A. Y.; Rimantho, D.

    2017-12-01

    The determination of an optimal number of product combinations is important. The main problem at the part and service department of PT. United Tractors Pandu Engineering (PT. UTPE) is the optimization of the combination of fabrication component products (liner plates), which influences the profit that the company will obtain. A liner plate is a fabrication component that serves as a protector of the core structure of heavy duty attachments, such as the HD Vessel, HD Bucket, HD Shovel, and HD Blade. The graph of liner plate sales from January to December 2016 fluctuated, and there was no direct conclusion about the optimal production of such fabrication components. The optimal product combination can be achieved by calculating and plotting the amounts of production output and input appropriately. The method used in this study is linear programming with primal, dual, and sensitivity analysis, using QM software for Windows to obtain the optimal combination of fabrication components. At the optimal combination, PT. UTPE gains a profit increase of Rp. 105,285,000.00, for a total of Rp. 3,046,525,000.00 per month, with a combined production of 71 units per product variant per month.
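    The primal product-mix model can be sketched with SciPy's linprog; the two product variants, coefficients, and resource limits below are invented for illustration and are not PT. UTPE's data:

```python
from scipy.optimize import linprog

# Hypothetical product-mix LP for two liner-plate variants:
# maximize profit 40*x1 + 30*x2 subject to machine-hour and
# plate-material constraints.
c = [-40, -30]       # negated: linprog minimizes
A = [[2, 1],         # machine hours used per unit of each variant
     [1, 2]]         # plate material used per unit of each variant
b = [100, 80]        # available machine hours, available material

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
x1, x2 = res.x
print(round(x1), round(x2), round(-res.fun))  # optimal mix and profit
```

    Recent SciPy versions expose the constraint duals via `res.ineqlin.marginals`, which correspond to the shadow prices examined in the dual and sensitivity analyses.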

  17. First Parity Evaluation of Body Condition, Weight, and Blood Beta-Hydroxybutyrate During Lactation of Range Cows Developed in the Same Ecophysiological System but Receiving Different Harvested Feed Inputs

    USDA-ARS's Scientific Manuscript database

    Reduction of harvested feed inputs during heifer development could optimize range livestock production and improve economic feasibility for producers. The objective of this study was to measure body condition and weight as well as blood beta-hydroxybutyrate (BHB) concentrations for primiparous beef ...

  19. QTL examination of a bi-parental mapping population segregating for “short-stature” in hop (Humulus lupulus L.)

    USDA-ARS's Scientific Manuscript database

    Increasing labor costs and reduced labor pools for hop production have resulted in the necessity to develop strategies to improve efficiency and automate hop production and harvest. One solution for reducing labor inputs is the use and production of “low-trellis” hop varieties optimized for mechani...

  20. Microbial inoculants for optimized plant nutrients use in integrated pest and input management systems

    USDA-ARS's Scientific Manuscript database

    The use of fertilizers and pesticides has greatly increased agricultural productivity over the past few decades. However, there is still an ongoing search for additional or alternative tools that can proffer agricultural sustainability and meet the needs of profitability and greater food production f...

  1. Robust input design for nonlinear dynamic modeling of AUV.

    PubMed

    Nouri, Nowrouz Mohammad; Valadi, Mehrdad

    2017-09-01

    Input design has a dominant role in developing dynamic models of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to build a good quality dynamic model of an AUV. In an optimal input design problem, the desired input signal depends on the unknown system which is to be identified. In this paper, an input design approach which is robust to uncertainties in the model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used to design the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, the proposed input design can satisfy both robustness of constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  2. African crop yield reductions due to increasingly unbalanced Nitrogen and Phosphorus consumption

    NASA Astrophysics Data System (ADS)

    van der Velde, Marijn; Folberth, Christian; Balkovič, Juraj; Ciais, Philippe; Fritz, Steffen; Janssens, Ivan A.; Obersteiner, Michael; See, Linda; Skalský, Rastislav; Xiong, Wei; Peñuelas, Josep

    2014-05-01

    The impact of soil nutrient depletion on crop production has been known for decades, but robust assessments of the impact of increasingly unbalanced nitrogen (N) and phosphorus (P) application rates on crop production are lacking. Here, we use crop response functions based on 741 FAO maize crop trials and EPIC crop modeling across Africa to examine maize yield deficits resulting from unbalanced N:P applications under low, medium, and high input scenarios, for past (1975), current, and future N:P mass ratios of, respectively, 1:0.29, 1:0.15, and 1:0.05. At low N inputs (10 kg/ha), current yield deficits amount to 10% but will increase up to 27% under the assumed future N:P ratio, while at medium N inputs (50 kg N/ha), future yield losses could amount to over 40%. The EPIC crop model was then used to simulate maize yields across Africa. The model results showed relative median future yield reductions of 40% at low N inputs, and 50% at medium and high inputs, albeit with large spatial variability. Dominant low-quality soils such as Ferralsols, which strongly adsorb P, and Arenosols, with a low nutrient retention capacity, are associated with a strong yield decline, although Arenosols show very variable crop yield losses at low inputs. Optimal N:P ratios, i.e. those where the lowest amount of applied P produces the highest yield (given the N input), were calculated with EPIC to be as low as 1:0.5. Finally, we estimated the additional P required given current N inputs, and given N inputs that would allow Africa to close yield gaps (ca. 70%). At current N inputs, P consumption would have to increase 2.3-fold to be optimal, and 11.7-fold to close yield gaps. The P demand to overcome these yield deficits would place significant additional pressure on current global extraction of P resources.

  3. Ring rolling process simulation for geometry optimization

    NASA Astrophysics Data System (ADS)

    Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio

    2017-10-01

    Ring rolling is a complex hot forming process in which different rolls are involved in the production of seamless rings. Since each roll must be independently controlled, different speed laws must be set; usually, in the industrial environment, a milling curve is introduced to monitor the shape of the workpiece during deformation in order to ensure correct ring production. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components for aerospace applications. In particular, the influence of process input parameters (feed rate of the mandrel and angular speed of the main roll) on the geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR (Hot Ring Rolling) has been implemented in SFTC DEFORM V11. The FEM model has been used to formulate a proper optimization problem, and the optimization procedure has been implemented in the commercial software DS ISight in order to find the combination of process parameters that minimizes the percentage error of each obtained dimension with respect to its nominal value. The software finds the relationship between input and output parameters by applying Response Surface Methodology (RSM), using the exact values of the output parameters at the control points of the design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. After the calculation of the response surfaces for the selected output parameters, an optimization procedure based on genetic algorithms has been applied, minimizing the error between each obtained dimension and its nominal value. The constraints imposed were the maximum values of the standard deviations of the dimensions obtained for the final ring.

  4. Effect of alkaline microwaving pretreatment on anaerobic digestion and biogas production of swine manure.

    PubMed

    Yu, Tao; Deng, Yihuan; Liu, Hongyu; Yang, Chunping; Wu, Bingwen; Zeng, Guangming; Lu, Li; Nishimura, Fumitake

    2017-05-10

    Microwave pretreatment assisted with alkaline conditions (MW-A) was applied to swine manure, and the effect of the pretreatment on anaerobic treatment and biogas production was evaluated in this study. The two main microwaving (MW) parameters, microwave power and reaction time, were optimized for the pretreatment. Response surface methodology (RSM) was used to investigate the effect of the alkaline microwaving process for manure pretreatment at various values of pH and energy input. Results showed that the manure disintegration degree reached a maximum of 63.91% at an energy input of 54 J/g and pH of 12.0, and variance analysis indicated that the pH value played a more important role in the pretreatment than the energy input. Anaerobic digestion results demonstrated that MW-A pretreatment not only significantly increased cumulative biogas production, but also shortened the time to a stable biogas production rate. Therefore, alkaline microwaving pretreatment could become an alternative process for effective treatment of swine manure.

  5. Application of experimental design for the optimization of artificial neural network-based water quality model: a case study of dissolved oxygen prediction.

    PubMed

    Šiljić Tomić, Aleksandra; Antanasijević, Davor; Ristić, Mirjana; Perić-Grujić, Aleksandra; Pocajt, Viktor

    2018-04-01

    This paper presents an application of experimental design for the optimization of an artificial neural network (ANN) for the prediction of dissolved oxygen (DO) content in the Danube River. The aim of this research was to obtain a more reliable ANN model that uses fewer monitoring records, by simultaneous optimization of the following model parameters: number of monitoring sites, number of historical monitoring data (expressed in years), and number of input water quality parameters used. A Box-Behnken three-factor, three-level experimental design was applied for simultaneous spatial, temporal, and input-variable optimization of the ANN model. The prediction of DO was performed using a feed-forward back-propagation neural network (BPNN), while the selection of the most important inputs was done off-model using a multi-filter approach that combines a chi-square ranking in the first step with a correlation-based elimination in the second step. The contour plots of the absolute and relative error response surfaces were utilized to determine the optimal values of the design factors. From the contour plots, two BPNN models that cover the entire Danube flow through Serbia are proposed: an upstream model (BPNN-UP) that covers 8 monitoring sites prior to Belgrade and uses 12 inputs measured over a 7-year period, and a downstream model (BPNN-DOWN) that covers 9 monitoring sites and uses 11 input parameters measured over a 6-year period. The main difference between the two models is that BPNN-UP utilizes inputs such as BOD, P, and PO₄³⁻, which is in accordance with the fact that this model covers the northern part of Serbia (Vojvodina Autonomous Province), well known for agricultural production and extensive use of fertilizers. Both models have shown very good agreement between measured and predicted DO (with R² ≥ 0.86) and demonstrated that they can effectively forecast DO content in the Danube River.
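    The Box-Behnken design itself can be generated mechanically; a sketch for three factors at coded levels -1/0/+1 (the number of center replicates is an assumption):

```python
from itertools import combinations, product

# Box-Behnken design: for each pair of factors, take all (+/-1, +/-1)
# combinations with every remaining factor held at its mid level (0),
# then append replicated center points.
def box_behnken(k, centers=3):
    runs = []
    for i, j in combinations(range(k), 2):
        for a, b in product((-1, 1), repeat=2):
            run = [0] * k
            run[i], run[j] = a, b
            runs.append(run)
    runs += [[0] * k for _ in range(centers)]
    return runs

design = box_behnken(3)
print(len(design))  # 12 edge runs + 3 center runs = 15
```

    For three factors this gives the 15-run design on which the absolute and relative error response surfaces can then be fitted.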

  6. Evolutionary Bi-objective Optimization for Bulldozer and Its Blade in Soil Cutting

    NASA Astrophysics Data System (ADS)

    Sharma, Deepak; Barakat, Nada

    2018-02-01

    An evolutionary optimization approach is adopted in this paper for simultaneously achieving economic and productive soil cutting. The economic aspect is addressed by minimizing the power requirement of the bulldozer, and the soil cutting is made productive by minimizing the cutting time. For determining the power requirement, two force models are adopted from the literature to quantify the cutting force on the blade. Three domain-specific constraints are also proposed: limiting the power drawn from the bulldozer, limiting the maximum force on the bulldozer blade, and achieving the desired production rate. The bi-objective optimization problem is solved using five benchmark multi-objective evolutionary algorithms and one classical optimization technique using the ɛ-constraint method. The Pareto-optimal solutions are obtained, together with the knee region. Further, post-optimal analysis is performed on the obtained solutions to decipher relationships among the objectives and decision variables. Such relationships are later used to formulate guidelines for selecting the optimal set of input parameters. The obtained results are compared with experimental results from the literature, which show close agreement.
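    The ɛ-constraint technique can be sketched on a toy bi-objective stand-in for (power requirement, cutting time); the two functions below are assumptions, not the paper's force models:

```python
import numpy as np

# Toy bi-objective problem: minimize f1 subject to f2 <= eps, sweeping
# eps to trace the Pareto front (grid search over a scalar decision
# variable keeps the sketch self-contained).
def f1(x):  # stand-in for power requirement
    return x ** 2

def f2(x):  # stand-in for cutting time
    return (x - 2.0) ** 2

xs = np.linspace(0.0, 2.0, 2001)
front = []
for eps in np.linspace(0.2, 4.0, 8):
    feasible = xs[f2(xs) <= eps]           # epsilon-constraint on f2
    x_star = feasible[np.argmin(f1(feasible))]
    front.append((round(f1(x_star), 3), round(f2(x_star), 3)))
print(front)
```

    Sweeping ɛ over the attainable range of the constrained objective traces out the Pareto front, from which knee-region solutions can then be selected.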

  7. The Environment and Directed Technical Change†

    PubMed Central

    Acemoglu, Daron; Aghion, Philippe; Bursztyn, Leonardo

    2015-01-01

    This paper introduces endogenous and directed technical change in a growth model with environmental constraints. The final good is produced from “dirty” and “clean” inputs. We show that: (i) when inputs are sufficiently substitutable, sustainable growth can be achieved with temporary taxes/subsidies that redirect innovation toward clean inputs; (ii) optimal policy involves both “carbon taxes” and research subsidies, avoiding excessive use of carbon taxes; (iii) delay in intervention is costly, as it later necessitates a longer transition phase with slow growth; and (iv) use of an exhaustible resource in dirty input production helps the switch to clean innovation under laissez-faire. (JEL O33, O44, Q30, Q54, Q56, Q58) PMID:26719595

  8. The Environment and Directed Technical Change.

    PubMed

    Acemoglu, Daron; Aghion, Philippe; Bursztyn, Leonardo; Hemous, David

    2012-02-01

    This paper introduces endogenous and directed technical change in a growth model with environmental constraints. The final good is produced from "dirty" and "clean" inputs. We show that: (i) when inputs are sufficiently substitutable, sustainable growth can be achieved with temporary taxes/subsidies that redirect innovation toward clean inputs; (ii) optimal policy involves both "carbon taxes" and research subsidies, avoiding excessive use of carbon taxes; (iii) delay in intervention is costly, as it later necessitates a longer transition phase with slow growth; and (iv) use of an exhaustible resource in dirty input production helps the switch to clean innovation under laissez-faire. (JEL O33, O44, Q30, Q54, Q56, Q58).

  9. Integrated controls design optimization

    DOEpatents

    Lou, Xinsheng; Neuschaefer, Carl H.

    2015-09-01

    A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant; others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.

  10. Reproducibility, Controllability, and Optimization of LENR Experiments

    NASA Astrophysics Data System (ADS)

    Nagel, David J.

    2006-02-01

    Low-energy nuclear reaction (LENR) measurements are significantly and increasingly reproducible. Practical control of the production of energy or materials by LENR has yet to be demonstrated. Minimization of costly inputs and maximization of desired outputs of LENR remain for future developments.

  11. Nitrogen balance dynamics during 2000-2010 in the Yangtze River Basin croplands, with special reference to the relative contributions of cropland area and synthetic fertilizer N application rate changes

    PubMed Central

    Wang, Lijuan; Zhao, He; Robinson, Brian E.

    2017-01-01

    With the increases of cropland area and fertilizer nitrogen (N) application rate, general N balance characteristics in regional agroecosystems have been widely documented. However, few studies have quantitatively analyzed the drivers of spatial changes in the N budget. We constructed a mass balance model of the N budget at the soil surface using a database of county-level agricultural statistics to analyze N input, output, and proportional contribution of various factors to the overall N input changes in croplands during 2000–2010 in the Yangtze River Basin, the largest basin and the main agricultural production region in China. Over the period investigated, N input increased by 9%. Of this 87% was from fertilizer N input. In the upper and middle reaches of the basin, the increased synthetic fertilizer N application rate accounted for 84% and 76% of the N input increase, respectively, mainly due to increased N input in the cropland that previously had low synthetic fertilizer N application rate. In lower reaches of the basin, mainly due to urbanization, the decrease in cropland area and synthetic fertilizer N application rate nearly equally contributed to decreases in N input. Quantifying spatial N inputs can provide critical managerial information needed to optimize synthetic fertilizer N application rate and monitor the impacts of urbanization on agricultural production, helping to decrease agricultural environment risk and maintain sustainable agricultural production in different areas. PMID:28678841

  12. Nitrogen balance dynamics during 2000-2010 in the Yangtze River Basin croplands, with special reference to the relative contributions of cropland area and synthetic fertilizer N application rate changes.

    PubMed

    Wang, Lijuan; Zheng, Hua; Zhao, He; Robinson, Brian E

    2017-01-01

    With the increases of cropland area and fertilizer nitrogen (N) application rate, general N balance characteristics in regional agroecosystems have been widely documented. However, few studies have quantitatively analyzed the drivers of spatial changes in the N budget. We constructed a mass balance model of the N budget at the soil surface using a database of county-level agricultural statistics to analyze N input, output, and proportional contribution of various factors to the overall N input changes in croplands during 2000-2010 in the Yangtze River Basin, the largest basin and the main agricultural production region in China. Over the period investigated, N input increased by 9%. Of this 87% was from fertilizer N input. In the upper and middle reaches of the basin, the increased synthetic fertilizer N application rate accounted for 84% and 76% of the N input increase, respectively, mainly due to increased N input in the cropland that previously had low synthetic fertilizer N application rate. In lower reaches of the basin, mainly due to urbanization, the decrease in cropland area and synthetic fertilizer N application rate nearly equally contributed to decreases in N input. Quantifying spatial N inputs can provide critical managerial information needed to optimize synthetic fertilizer N application rate and monitor the impacts of urbanization on agricultural production, helping to decrease agricultural environment risk and maintain sustainable agricultural production in different areas.
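
    The soil-surface mass balance underlying both records above amounts to summing N inputs and subtracting N outputs for each accounting unit. A minimal sketch; the item names and numbers are illustrative, not the paper's county data:

```python
def soil_surface_n_budget(inputs_kg_ha, outputs_kg_ha):
    """Soil-surface N balance: total inputs minus total outputs (kg N/ha)."""
    total_in = sum(inputs_kg_ha.values())
    total_out = sum(outputs_kg_ha.values())
    return {"input": total_in, "output": total_out,
            "surplus": total_in - total_out}

# Illustrative per-hectare numbers (assumed, not from the study)
budget = soil_surface_n_budget(
    {"synthetic_fertilizer": 220, "manure": 40, "deposition": 25,
     "biological_fixation": 15},
    {"crop_harvest": 180},
)
# budget["surplus"] -> 120
```

Multiplying such per-hectare budgets by cropland area, county by county, is what lets the area change and the application-rate change be separated as drivers.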

  13. Control and optimization system

    DOEpatents

    Xinsheng, Lou

    2013-02-12

    A system for optimizing a power plant includes a chemical loop having an input for receiving an input parameter (270) and an output for outputting an output parameter (280), a control system operably connected to the chemical loop and having a multiple controller part (230) comprising a model-free controller. The control system receives the output parameter (280), optimizes the input parameter (270) based on the received output parameter (280), and outputs an optimized input parameter (270) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.

  14. Assessing FPAR Source and Parameter Optimization Scheme in Application of a Diagnostic Carbon Flux Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, D P; Ritts, W D; Wharton, S

    2009-02-26

    The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional to global scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models; however, little is known about the relative effectiveness of FPAR products from different satellite sensors or about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors. FPAR products from the MODIS and SeaWiFS sensors, and the effects of single-site vs. cross-site parameter optimization, were tested with the CFLUX model. The SeaWiFS FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization compared to parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for parameterization of diagnostic carbon flux models.

  15. Least-cost input mixtures of water and nitrogen for photosynthesis.

    PubMed

    Wright, Ian J; Reich, Peter B; Westoby, Mark

    2003-01-01

    In microeconomics, a standard framework is used for determining the optimal input mix for a two-input production process. Here we adapt this framework for understanding the way plants use water and nitrogen (N) in photosynthesis. The least-cost input mixture for generating a given output depends on the relative cost of procuring and using nitrogen versus water. This way of considering the issue integrates concepts such as water-use efficiency and photosynthetic nitrogen-use efficiency into the more inclusive objective of optimizing the input mix for a given situation. We explore the implications of deploying alternative combinations of leaf nitrogen concentration and stomatal conductance to water, focusing on comparing hypothetical species occurring in low- versus high-humidity habitats. We then present data from sites in both the United States and Australia and show that low-rainfall species operate with substantially higher leaf N concentration per unit leaf area. The extra protein reflected in higher leaf N concentration is associated with a greater drawdown of internal CO2, such that low-rainfall species achieve higher photosynthetic rates at a given stomatal conductance. This restraint of transpirational water use apparently counterbalances the multiple costs of deploying high-nitrogen leaves.
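
    The microeconomic framework above has a closed form once a production function is assumed: at the least-cost mix, the ratio of marginal products equals the ratio of input prices. A hedged sketch with a Cobb-Douglas stand-in for photosynthetic output (the functional form, exponents, and prices are illustrative assumptions, not the paper's model):

```python
def least_cost_mix(Q, A, a, b, p_w, p_n):
    """Least-cost water/N mix producing output Q under an illustrative
    Cobb-Douglas production function Q = A * W**a * N**b.

    At the optimum the marginal-product ratio equals the price ratio:
    (a*Q/W) / (b*Q/N) = p_w / p_n   =>   N = W * (b*p_w) / (a*p_n)
    """
    ratio = b * p_w / (a * p_n)               # optimal N per unit W
    W = (Q / (A * ratio ** b)) ** (1.0 / (a + b))
    return W, W * ratio

# With equal exponents and N four times as costly as water, the optimum
# uses four times as much water as nitrogen.
W, N = least_cost_mix(Q=10.0, A=1.0, a=0.5, b=0.5, p_w=1.0, p_n=4.0)
```

Raising the relative cost of water (as in a low-rainfall habitat) shifts the optimum toward more N and less W, matching the leaf-N pattern the paper reports.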

  16. Optimization of process parameters in drilling of fibre hybrid composite using Taguchi and grey relational analysis

    NASA Astrophysics Data System (ADS)

    Vijaya Ramnath, B.; Sharavanan, S.; Jeykrishnan, J.

    2017-03-01

    Nowadays, quality plays a vital role in all products. Hence, development in manufacturing processes focuses on fabricating composites with high dimensional accuracy at low manufacturing cost. In this work, an investigation of machining parameters has been performed on a jute-flax hybrid composite. Two important response characteristics, surface roughness and material removal rate, are optimized by varying three machining input parameters: drill bit diameter, spindle speed, and feed rate. Machining is done on a CNC vertical drilling machine at different levels of the drilling parameters. Taguchi's L16 orthogonal array is used for optimizing the individual tool parameters, and analysis of variance (ANOVA) is used to find the significance of individual parameters. Simultaneous optimization of the process parameters is done by grey relational analysis. The results of this investigation show that spindle speed and drill bit diameter have the greatest effect on material removal rate and surface roughness, followed by feed rate.
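
    Grey relational analysis as used above reduces the two responses to a single grade per run: normalize each response (larger-the-better for removal rate, smaller-the-better for roughness), then average the grey relational coefficients. A generic sketch; the example run data are hypothetical:

```python
def grey_relational_grades(responses, larger_better, zeta=0.5):
    """Grey relational grade per experimental run.

    responses: one list of response values per run
    larger_better: per-response flag (True for MRR, False for roughness)
    """
    n_resp = len(responses[0])
    cols = list(zip(*responses))
    norm = []
    for j in range(n_resp):
        lo, hi = min(cols[j]), max(cols[j])
        span = (hi - lo) or 1.0
        if larger_better[j]:
            norm.append([(v - lo) / span for v in cols[j]])
        else:
            norm.append([(hi - v) / span for v in cols[j]])
    grades = []
    for i in range(len(responses)):
        # deviation from ideal is 1 - normalized value; with delta_min = 0
        # and delta_max = 1 the coefficient is zeta / (deviation + zeta)
        coeffs = [zeta / ((1.0 - norm[j][i]) + zeta) for j in range(n_resp)]
        grades.append(sum(coeffs) / n_resp)
    return grades

# Hypothetical runs: [material removal rate, surface roughness]
runs = [[100.0, 2.0], [150.0, 3.0], [120.0, 2.5]]
grades = grey_relational_grades(runs, larger_better=[True, False])
```

The run with the highest grade is taken as the best compromise setting; ANOVA on the grades then ranks the input parameters.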

  17. Optimal inverse functions created via population-based optimization.

    PubMed

    Jennings, Alan L; Ordóñez, Raúl

    2014-06-01

    Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher-level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps that lower the cost while maintaining its current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces, allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.
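
    The core idea, tabulating a locally optimal input at each sampled output and interpolating between the tabulated points, can be sketched with a toy two-input system. The system, cost function, and closed-form optimum below are illustrative assumptions, and simple piecewise-linear interpolation stands in for the paper's splines:

```python
import bisect

def build_inverse(y_samples, optimal_input):
    """Tabulate locally optimal inputs at sampled outputs, then return a
    piecewise-linear inverse function: desired output -> optimal input."""
    table = [(y, optimal_input(y)) for y in sorted(y_samples)]
    ys = [y for y, _ in table]
    xs = [x for _, x in table]

    def inverse(y):
        i = bisect.bisect_left(ys, y)
        i = max(1, min(i, len(ys) - 1))           # clamp to table range
        t = (y - ys[i - 1]) / (ys[i] - ys[i - 1])
        return [a + t * (b - a) for a, b in zip(xs[i - 1], xs[i])]

    return inverse

# Toy system (assumed): output y = x1 + x2, cost x1^2 + 2*x2^2.
# Lagrange conditions give the optimal input x* = (2y/3, y/3).
inv = build_inverse([0.0, 1.0, 2.0, 3.0], lambda y: [2 * y / 3, y / 3])
x = inv(1.5)
```

The operator then steers only the scalar set point y, and the stored inverse supplies the full input vector.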

  18. Nitrogen and harvest impact on warm-season grasses biomass yield

    USDA-ARS?s Scientific Manuscript database

    Perennial warm-season grasses have drawn interest as bioenergy feedstocks due to their high productivity with minimal amounts of inputs while producing multiple environmental benefits. Nitrogen (N) fertility and harvest timing are critical management practices when optimizing biomass yield of these ...

  19. On the Design of a Fuzzy Logic-Based Control System for Freeze-Drying Processes.

    PubMed

    Fissore, Davide

    2016-12-01

    This article is focused on the design of a fuzzy logic-based control system to optimize a drug freeze-drying process. The goal of the system is to keep product temperature as close as possible to the threshold value of the formulation being processed, without exceeding it, so that product quality is not jeopardized and the sublimation flux is maximized. The method involves the measurement of product temperature and a set of rules that have been obtained through process simulation, with the goal of obtaining a unique set of rules for products with very different characteristics. Input variables are the difference between the temperature of the product and the threshold value, the difference between the temperature of the heating fluid and that of the product, and the rate of change of product temperature. The output variables are the variation of the temperature of the heating fluid and the pressure in the drying chamber. The effect of the starting values of the input variables and of the control interval has been investigated, resulting in the optimal configuration of the control system. Experiments carried out in a pilot-scale freeze-dryer validate the proposed system. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
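
    A single inference step of such a rule base can be sketched with triangular membership functions and weighted-average defuzzification. The membership shapes, rules, and output magnitudes below are invented for illustration; they are not the article's tuned rule set:

```python
def tri(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fluid_temp_step(margin_K):
    """One fuzzy inference step: map the margin (threshold minus product
    temperature, in K) to a heating-fluid temperature change in K.
    Shapes and rule outputs are assumptions for illustration."""
    mu_small = tri(margin_K, -1.0, 0.0, 2.0)   # product close to the limit
    mu_large = tri(margin_K, 1.0, 5.0, 9.0)    # plenty of margin left
    # Rule 1: small margin -> cool the heating fluid (-2 K)
    # Rule 2: large margin -> heat the heating fluid (+3 K)
    num = mu_small * (-2.0) + mu_large * 3.0
    den = mu_small + mu_large
    return num / den if den else 0.0

step = fluid_temp_step(0.5)   # near the limit -> cooling, -2.0 K
```

In a full controller the other inputs (fluid-product temperature difference, rate of change) would add rules, and a second output would adjust chamber pressure the same way.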

  20. Precision oncology using a limited number of cells: optimization of whole genome amplification products for sequencing applications.

    PubMed

    Sho, Shonan; Court, Colin M; Winograd, Paul; Lee, Sangjun; Hou, Shuang; Graeber, Thomas G; Tseng, Hsian-Rong; Tomlinson, James S

    2017-07-01

    Sequencing analysis of circulating tumor cells (CTCs) enables "liquid biopsy" to guide precision oncology strategies. However, this requires low-template whole genome amplification (WGA), which is prone to errors and biases from uneven amplification. Currently, quality control (QC) methods for WGA products, as well as the number of CTCs needed for reliable downstream sequencing, remain poorly defined. We sought to define strategies for selecting and generating optimal WGA products from low-template input as it relates to their potential applications in precision oncology strategies. Single pancreatic cancer cells (HPAF-II) were isolated using laser microdissection. WGA was performed using multiple displacement amplification (MDA), multiple annealing and looping based amplification (MALBAC), and PicoPLEX. Quality of the amplified DNA products was assessed using a multiplex RT-qPCR-based method that evaluates 8 cancer-related genes, and QC scores were assigned. We utilized this scoring system to assess the impact of de novo modifications to the WGA protocol. WGA products were subjected to Sanger sequencing, array comparative genomic hybridization (aCGH), and next-generation sequencing (NGS) to evaluate their performance in the respective downstream analyses, providing validation of the QC score. Single-cell WGA products exhibited significant sample-to-sample variability in amplified DNA quality as assessed by our 8-gene QC assay. Single-cell WGA products that passed the pre-analysis QC had lower amplification bias and improved aCGH/NGS performance metrics compared to single-cell WGA products that failed the QC. Increasing the cellular input improved QC scores overall, but a WGA product that consistently passed the QC step required a starting input of at least 20 cells. Our modified WGA protocol effectively reduced this number, achieving reproducible high-quality WGA products from ≥5 cells as starting template.
A starting cellular input of 5 to 10 cells amplified using the modified WGA achieved aCGH and NGS results that closely matched those of unamplified, batch genomic DNA. The modified WGA protocol coupled with the 8-gene QC serves as an effective strategy to enhance the quality of low-template WGA reactions. Furthermore, a threshold of 5-10 cells is likely needed for a reliable WGA reaction and a product with high fidelity to the original starting template.

  1. Numerical simulation of multi-rifled tube drawing - finding proper feedstock dimensions and tool geometry

    NASA Astrophysics Data System (ADS)

    Bella, P.; Buček, P.; Ridzoň, M.; Mojžiš, M.; Parilák, Ľ.

    2017-02-01

    Production of multi-rifled seamless steel tubes is quite a new technology at Železiarne Podbrezová, and many technological questions therefore emerge (process technology, input feedstock dimensions, material flow during drawing, etc.). Pilot experiments to fine-tune the process cost considerable time and energy; numerical simulation offers an alternative route to optimal production parameters, reducing the number of experiments needed and lowering the overall cost of development. However, for the numerical results to be considered relevant, they must be verified against actual plant trials. The main topic of this paper is the search for an optimal input feedstock dimension for drawing a multi-rifled tube with dimensions Ø28.6 mm × 6.3 mm. As a secondary task, the effective position of the plug-die couple was determined via numerical simulation. Comparison of the calculated results with actual numbers from plant trials showed good agreement.

  2. Balancing the health workforce: breaking down overall technical change into factor technical change for labour-an empirical application to the Dutch hospital industry.

    PubMed

    Blank, Jos L T; van Hulst, Bart L

    2017-02-17

    Well-trained, well-distributed and productive health workers are crucial for access to high-quality, cost-effective healthcare. Because neither a shortage nor a surplus of health workers is wanted, policymakers use workforce planning models to get information on future labour markets and adjust policies accordingly. A neglected topic of workforce planning models is productivity growth, which has an effect on future demand for labour. However, calculating productivity growth for specific types of input is not as straightforward as it seems. This study shows how to calculate factor technical change (FTC) for specific types of input. The paper first theoretically derives FTCs from technical change in a consistent manner. FTC differs from a ratio of output and input, in that it deals with the multi-input, multi-output character of the production process in the health sector. Furthermore, it takes into account substitution effects between different inputs. An application of the calculation of FTCs is given for the Dutch hospital industry for the period 2003-2011. A translog cost function is estimated and used to calculate technical change and FTC for individual inputs, especially specific labour inputs. The results show that technical change increased by 2.8% per year in Dutch hospitals during 2003-2011. FTC differs amongst the various inputs. The FTC of nursing personnel increased by 3.2% per year, implying that fewer nurses were needed to let demand meet supply on the labour market. Sensitivity analyses show consistent results for the FTC of nurses. Productivity growth, especially of individual outputs, is a neglected topic in workforce planning models. FTC is a productivity measure that is consistent with technical change and accounts for substitution effects. An application to the Dutch hospital industry shows that the FTC of nursing personnel outpaced technical change during 2003-2011. 
The optimal input mix changed, resulting in fewer nurses being needed to let demand meet supply on the labour market. Policymakers should consider using more detailed and specific data on the nature of technical change when forecasting the future demand for health workers.
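
    The practical meaning of a 3.2% yearly FTC for nursing personnel can be illustrated by compounding the reduction in input requirement at constant output. This is a deliberately simplified reading of FTC, not the paper's translog cost-function calculation:

```python
def input_requirement(base_input, ftc_rate, years):
    """Input needed for a constant output level when factor technical
    change lowers the requirement by ftc_rate per year (compounded)."""
    return base_input * (1.0 - ftc_rate) ** years

# 100 nurses in 2003, FTC of 3.2%/year over the 8 years to 2011
nurses_2011 = input_requirement(100.0, 0.032, 8)   # about 77 nurses
```

A workforce planning model that ignored this productivity term would overstate 2011 nursing demand by roughly this 23% gap.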

  3. Understanding and engineering beneficial plant–microbe interactions: plant growth promotion in energy crops

    PubMed Central

    Farrar, Kerrie; Bryant, David; Cope-Selby, Naomi

    2014-01-01

    Plant production systems globally must be optimized to produce stable high yields from limited land under changing and variable climates. Demands for food, animal feed, and feedstocks for bioenergy and biorefining applications are increasing with population growth, urbanization and affluence. Low-input, sustainable alternatives to petrochemical-derived fertilizers and pesticides are required to reduce input costs and maintain or increase yields, with potential biological solutions having an important role to play. In contrast to crops that have been bred for food, many bioenergy crops are largely undomesticated, and so there is an opportunity to harness beneficial plant–microbe relationships which may have been inadvertently lost through intensive crop breeding. Plant–microbe interactions span a wide range of relationships in which one or both of the organisms may have a beneficial, neutral or negative effect on the other partner. A relatively small number of beneficial plant–microbe interactions are well understood and already exploited; however, others remain understudied and represent an untapped reservoir for optimizing plant production. There may be near-term applications for bacterial strains as microbial biopesticides and biofertilizers to increase biomass yield from energy crops grown on land unsuitable for food production. Longer-term aims involve the design of synthetic genetic circuits within and between the host and microbes to optimize plant production. A highly exciting prospect is that endosymbionts comprise a unique resource of reduced-complexity microbial genomes with adaptive traits of great interest for a wide variety of applications. PMID:25431199

  4. Fuzzy logic control and optimization system

    DOEpatents

    Lou, Xinsheng [West Hartford, CT]

    2012-04-17

    A control system (300) for optimizing a power plant includes a chemical loop having an input for receiving an input signal (369) and an output for outputting an output signal (367), and a hierarchical fuzzy control system (400) operably connected to the chemical loop. The hierarchical fuzzy control system (400) includes a plurality of fuzzy controllers (330). The hierarchical fuzzy control system (400) receives the output signal (367), optimizes the input signal (369) based on the received output signal (367), and outputs an optimized input signal (369) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.

  5. Energy Productivity of the High Velocity Algae Raceway Integrated Design (ARID-HV)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Attalah, Said; Waller, Peter M.; Khawam, George

    The original Algae Raceway Integrated Design (ARID) raceway was an effective method to increase algae culture temperature in open raceways. However, the energy input was high and flow mixing was poor. Thus, the High Velocity Algae Raceway Integrated Design (ARID-HV) raceway was developed to reduce energy input requirements and improve flow mixing in a serpentine flow path. A prototype ARID-HV system was installed in Tucson, Arizona. Based on algae growth simulation and hydraulic analysis, an optimal ARID-HV raceway was designed, and the electrical energy input requirement (kWh ha⁻¹ d⁻¹) was calculated. An algae growth model was used to compare the productivity of ARID-HV and conventional raceways. The model uses a pond surface energy balance to calculate water temperature as a function of environmental parameters. Algae growth and biomass loss are calculated based on rate constants during day and night, respectively. A 10-year simulation of DOE strain 1412 (Chlorella sorokiniana) showed that the ARID-HV raceway had significantly higher production than a conventional raceway for all months of the year in Tucson, Arizona. It should be noted that this difference is species- and climate-specific and is not observed in other climates and with other algae species. The algae growth model results and electrical energy input evaluation were used to compare the energy productivity (algae production rate/energy input) of the ARID-HV and conventional raceways for Chlorella sorokiniana in Tucson, Arizona. The energy productivity of the ARID-HV raceway was significantly greater than that of a conventional raceway for all months of the year.

  6. Quality by design approach: application of artificial intelligence techniques of tablets manufactured by direct compression.

    PubMed

    Aksu, Buket; Paradkar, Anant; de Matas, Marcel; Ozer, Ozgen; Güneri, Tamer; York, Peter

    2012-12-01

    The publication of the International Conference of Harmonization (ICH) Q8, Q9, and Q10 guidelines paved the way for the standardization of quality after the Food and Drug Administration issued current Good Manufacturing Practices guidelines in 2003. "Quality by Design", mentioned in the ICH Q8 guideline, offers a better scientific understanding of critical process and product qualities using knowledge obtained during the life cycle of a product. In this scope, the "knowledge space" is a summary of all process knowledge obtained during product development, and the "design space" is the area in which a product can be manufactured within acceptable limits. To create the spaces, artificial neural networks (ANNs) can be used to emphasize the multidimensional interactions of input variables and to closely bind these variables to a design space. This helps guide the experimental design process to include interactions among the input variables, along with modeling and optimization of pharmaceutical formulations. The objective of this study was to develop an integrated multivariate approach to obtain a quality product based on an understanding of the cause-effect relationships between formulation ingredients and product properties with ANNs and genetic programming on the ramipril tablets prepared by the direct compression method. In this study, the data are generated through the systematic application of the design of experiments (DoE) principles and optimization studies using artificial neural networks and neurofuzzy logic programs.

  7. Multiple response optimization for higher dimensions in factors and responses

    DOE PAGES

    Lu, Lu; Chapman, Jessica L.; Anderson-Cook, Christine M.

    2016-07-19

    When optimizing a product or process with multiple responses, a two-stage Pareto front approach is a useful strategy to evaluate and balance trade-offs between different estimated responses to seek optimum input locations for achieving the best outcomes. After objectively eliminating non-contenders in the first stage by looking for a Pareto front of superior solutions, graphical tools can be used to identify a final solution in the second, subjective stage by comparing options and matching them with user priorities. Until now, there have been limitations on the number of response variables and input factors that could be effectively visualized with existing graphical summaries. We present novel graphical tools that scale more easily to higher dimensions, in both the input and response spaces, to facilitate informed decision making when simultaneously optimizing multiple responses. A key aspect of these graphics is that the potential solutions can be flexibly sorted to investigate specific queries, and that multiple aspects of the solutions can be considered simultaneously. As a result, recommendations are made about how to evaluate the impact of the uncertainty associated with the estimated response surfaces on decision making in higher dimensions.
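
    The first, objective stage described above is a non-dominated filter over the candidate solutions. A minimal sketch, with all objectives minimized and hypothetical candidate points:

```python
def pareto_front(points):
    """First-stage objective screening: keep only non-dominated points
    (all objectives are minimized)."""
    front = []
    for p in points:
        dominated = any(
            q != p and all(qi <= pi for qi, pi in zip(q, p))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (response1, response2) estimates for candidate inputs
candidates = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
front = pareto_front(candidates)   # (3.0, 3.0) is dominated by (2.0, 2.0)
```

Only the surviving front is passed to the second, subjective stage, where the graphical tools compare the remaining trade-offs.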

  8. Validation of optimization strategies using the linear structured production chains

    NASA Astrophysics Data System (ADS)

    Kusiak, Jan; Morkisz, Paweł; Oprocha, Piotr; Pietrucha, Wojciech; Sztangret, Łukasz

    2017-06-01

    Different optimization strategies applied to a sequence of several stages of a production chain were validated in this paper. Two benchmark problems described by ordinary differential equations (ODEs) were considered: a water tank and a passive CR-RC filter were used as exemplary objects described by first- and second-order differential equations, respectively. The optimization problems considered in this work serve as validators of the strategies elaborated by the authors. However, the main goal of the research is the selection of the best strategy for optimization of two real metallurgical processes, which will be investigated in on-going projects. The first problem will be the oxidizing roasting process of zinc sulphide concentrate, where the sulphur from the input concentrate should be eliminated and a minimal concentration of sulphide sulphur in the roasted products has to be achieved. The second problem will be the lead refining process, consisting of three stages: roasting to the oxide, reduction of the oxide to metal, and oxidizing refining. The strategies that prove most effective in the considered benchmark problems will be candidates for optimization of the above-mentioned industrial processes.
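
    The first-order water-tank benchmark can be reproduced with a forward-Euler integration of a standard tank-level ODE. The equation and parameters below are generic assumptions, not necessarily the authors' exact model:

```python
import math

def simulate_tank(h0, q_in, k=1.0, area=1.0, dt=0.01, t_end=5.0):
    """Forward-Euler integration of a water-tank level ODE,
        area * dh/dt = q_in - k * sqrt(h),
    a generic first-order benchmark (parameters are assumptions)."""
    h, t = h0, 0.0
    while t < t_end:
        h += dt * (q_in - k * math.sqrt(max(h, 0.0))) / area
        t += dt
    return h

# the level approaches the steady state where q_in = k * sqrt(h), i.e. h = 1
level = simulate_tank(h0=0.0, q_in=1.0)
```

An optimization strategy is then validated by how well it drives such a simulated plant to a target level at minimal cost.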

  9. A Systems Model Comparing Australian and Chinese HRM Education

    ERIC Educational Resources Information Center

    Davidson, Paul; Tsakissiris, Jane; Guo, Yuanyuan

    2017-01-01

    This paper explores the implications for learning design in HRM education in the 21st century. An open systems perspective is used to argue the importance of establishing productive relationships between academia, professional associations, regulators and industry (resource inputs) to support the creation of optimal learning environments (the…

  10. Wastewater polishing by a channelized macrophyte-dominated wetland and anaerobic digestion of the harvested phytomass

    USDA-ARS?s Scientific Manuscript database

    : Constructed wetlands (CW) offer a mechanism to meet regulatory standards for wastewater treatment while minimizing energy inputs. To optimize CW wastewater polishing activities and investigate integration of CW with energy production from anaerobic digestion we constructed a pair of three-tier ch...

  11. Infrared Retrievals of Ice Cloud Properties and Uncertainties with an Optimal Estimation Retrieval Method

    NASA Astrophysics Data System (ADS)

    Wang, C.; Platnick, S. E.; Meyer, K.; Zhang, Z.

    2014-12-01

    We developed an optimal estimation (OE)-based method using infrared (IR) observations to retrieve ice cloud optical thickness (COT), cloud effective radius (CER), and cloud top height (CTH) simultaneously. The OE-based retrieval is coupled with a fast IR radiative transfer model (RTM) that simulates observations of different sensors, and corresponding Jacobians in cloudy atmospheres. Ice cloud optical properties are calculated using the MODIS Collection 6 (C6) ice crystal habit (severely roughened hexagonal column aggregates). The OE-based method can be applied to various IR space-borne and airborne sensors, such as the Moderate Resolution Imaging Spectroradiometer (MODIS) and the enhanced MODIS Airborne Simulator (eMAS), by optimally selecting IR bands with high information content. Four major error sources (i.e., the measurement error, fast RTM error, model input error, and pre-assumed ice crystal habit error) are taken into account in our OE retrieval method. We show that measurement error and fast RTM error have little impact on cloud retrievals, whereas errors from the model input and pre-assumed ice crystal habit significantly increase retrieval uncertainties when the cloud is optically thin. Comparisons between the OE-retrieved ice cloud properties and other operational cloud products (e.g., the MODIS C6 and CALIOP cloud products) are shown.
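    The core of an optimal-estimation retrieval is the maximum a posteriori update combining the measurement with the prior. A minimal linear sketch follows, with a made-up 3-channel Jacobian standing in for the IR forward model and a 2-element state standing in for the retrieved cloud properties:

```python
import numpy as np

def oe_update(y, K, x_a, S_a, S_e):
    """Return the MAP state estimate and its posterior covariance for y = K x."""
    S_e_inv = np.linalg.inv(S_e)
    S_hat = np.linalg.inv(K.T @ S_e_inv @ K + np.linalg.inv(S_a))
    x_hat = x_a + S_hat @ K.T @ S_e_inv @ (y - K @ x_a)
    return x_hat, S_hat

K = np.array([[1.0, 0.2], [0.1, 1.0], [0.5, 0.5]])  # 3 channels, 2 state vars
x_true = np.array([2.0, 1.0])
y = K @ x_true                                      # noise-free synthetic obs
x_a = np.array([0.0, 0.0])                          # prior mean
S_a = np.eye(2) * 100.0                             # weak prior covariance
S_e = np.eye(3) * 0.01                              # measurement covariance
x_hat, S_hat = oe_update(y, K, x_a, S_a, S_e)
```

    In the retrieval described above, the error sources enumerated (measurement, RTM, model input, habit assumption) would all enter through S_e and the posterior S_hat quantifies the resulting uncertainty.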

  12. Cleanser, Detergent, Personal Care Product, and Pretreatment Evaluation

    NASA Technical Reports Server (NTRS)

    Adam, Niklas; Carrier, Chris; Vega, Leticia; Casteel, Michael; Verostko, Chuck; Pickering, Karen

    2011-01-01

    The purpose of the Cleanser, Detergent, Personal Care Product, and Pretreatment Evaluation & Selection task is to identify the optimal combination of personal hygiene products, crew activities, and pretreatment strategies to provide the crew with sustainable life support practices and a comfortable habitat. Minimal energy, mass, and crew time inputs are desired to recycle wastewater during long duration missions. This document will provide a brief background on the work this past year supporting the ELS Distillation Comparison Test, issues regarding use of the hygiene products originally chosen for the test, methods and results used to select alternative products, and lessons learned from testing.

  13. Cleanser, Detergent, Personal Care Product Pretreatment Evaluation

    NASA Technical Reports Server (NTRS)

    Adam, Niklas

    2010-01-01

    The purpose of the Cleanser, Detergent, Personal Care Product, and Pretreatment Evaluation & Selection task is to identify the optimal combination of personal hygiene products, crew activities, and pretreatment strategies to provide the crew with sustainable life support practices and a comfortable habitat. Minimal energy, mass, and crew time inputs are desired to recycle wastewater during long duration missions. This document will provide a brief background on the work this past year supporting the ELS Distillation Comparison Test, issues regarding use of the hygiene products originally chosen for the test, methods and results used to select alternative products, and lessons learned from testing.

  14. Economic growth and carbon emission control

    NASA Astrophysics Data System (ADS)

    Zhang, Zhenyu

    The question of whether environmental improvement is compatible with continued economic growth remains unclear and requires further study in specific contexts. This study provides insight on the potential for carbon emissions control in the absence of an international agreement, and connects the empirical analysis with a theoretical framework. The Chinese electricity generation sector is used as a case study to demonstrate the problem. Both the social planner's and the private problem are examined to derive the conditions that define the optimal level of production and pollution. The private problem is examined under emission regulation using an emission tax, an input tax and an abatement subsidy, respectively. The socially optimal emission flow is imposed on the private problem. To provide tractable analytical results, a Cobb-Douglas type production function is used to describe the joint production of the desired and undesired outputs (i.e., electricity and emissions). A modified Hamiltonian approach is employed to solve the system, and the steady-state solutions are examined for policy implications. The theoretical analysis suggests that the ratio of emissions to desired output (the 'emission factor') is a function of productive capital and other parameters. The finding of a non-constant emission factor shows that reducing emissions without further cutting back the production of desired outputs is feasible under some circumstances. Rather than an ad hoc specification, the optimal conditions derived from the theoretical framework are used to examine the relationship between desired output and emission level. Data come from the China Statistical Yearbook and the China Electric Power Yearbook; provincial electricity generation data for the years 1993-2003 are used to estimate the Cobb-Douglas joint production function by the full information maximum likelihood (FIML) method. 
The empirical analysis sheds light on the optimal policies of emissions control required for achieving the social goal in a private context. The results suggest that the efficiency of abatement technology is crucial for the timing of introducing the emission tax, and that an emission tax is preferred to an input tax as long as the detection of emissions is not costly and the abatement technology is efficient. Keywords: Economic growth, Carbon emission, Power generation, Joint production, China
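    The Cobb-Douglas form used above is log-linear, which is what makes estimation tractable. As a simplified sketch, ordinary least squares on synthetic noise-free data recovers the parameters exactly (the study itself uses FIML on yearbook data, not OLS):

```python
import numpy as np

# Fit Q = A * L^a * K^b by least squares in logs. All data are synthetic.
rng = np.random.default_rng(0)
L = rng.uniform(1.0, 10.0, 200)          # labor-type input
K = rng.uniform(1.0, 10.0, 200)          # capital-type input
Q = 2.0 * L**0.6 * K**0.3                # true A=2.0, a=0.6, b=0.3

# log Q = log A + a*log L + b*log K is linear in the unknowns.
X = np.column_stack([np.ones_like(L), np.log(L), np.log(K)])
coef, *_ = np.linalg.lstsq(X, np.log(Q), rcond=None)
logA, a, b = coef
```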

  15. Optimal control of LQR for discrete time-varying systems with input delays

    NASA Astrophysics Data System (ADS)

    Yin, Yue-Zhu; Yang, Zhong-Lian; Yin, Zhi-Xiang; Xu, Feng

    2018-04-01

    In this work, we consider the optimal control problem of linear quadratic regulation for discrete time-varying systems with a single input and multiple input delays. An innovative and simple method to derive the optimal controller is given. The studied problem is first converted into an equivalent problem subject to a constraint condition; then, using the established duality, it is transformed into a static mathematical optimisation problem without input delays. The optimal control input minimising the performance index function is derived by solving this optimisation problem with two methods. A numerical simulation example shows that both approaches are feasible and effective.
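    The delay-free building block of such LQR problems is the backward Riccati recursion for finite-horizon feedback gains. A sketch on a hypothetical two-state, single-input system follows; the delay reduction described in the abstract is beyond this fragment:

```python
import numpy as np

def dlqr_gains(A, B, Q, R, Qf, N):
    """Backward Riccati recursion: gains K_0..K_{N-1} for x_{k+1} = A x_k + B u_k."""
    P = Qf
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    return gains[::-1]            # reorder so gains[0] applies at step 0

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discretized double integrator
B = np.array([[0.0], [0.1]])
Q, R, Qf = np.eye(2), np.array([[1.0]]), np.eye(2)
gains = dlqr_gains(A, B, Q, R, Qf, N=50)

# Closed-loop simulation with u_k = -K_k x_k from an initial offset.
x = np.array([[1.0], [0.0]])
for K in gains:
    x = (A - B @ K) @ x
```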

  16. 'Optimal' vortex rings and aquatic propulsion mechanisms.

    PubMed Central

    Linden, P. F.; Turner, J. S.

    2004-01-01

    Fishes swim by flapping their tail and other fins. Other sea creatures, such as squid and salps, eject fluid intermittently as a jet. We discuss the fluid mechanics behind these propulsion mechanisms and show that these animals produce optimal vortex rings, which give the maximum thrust for a given energy input. We show that fishes optimize both their steady swimming efficiency and their ability to accelerate and turn by producing an individual optimal ring with each flap of the tail or fin. Salps produce vortex rings directly by ejecting a volume of fluid through a rear orifice, and these are also optimal. An important implication of this paper is that the repetition of vortex production is not necessary for an individual vortex to have the 'optimal' characteristics. PMID:15156924

  17. Rocket ascent G-limited moment-balanced optimization program (RAGMOP)

    NASA Technical Reports Server (NTRS)

    Lyons, J. T.; Woltosz, W. S.; Abercrombie, G. E.; Gottlieb, R. G.

    1972-01-01

    This document describes the RAGMOP (Rocket Ascent G-limited Moment-balanced Optimization Program) computer program for parametric ascent trajectory optimization. RAGMOP computes optimum polynomial-form attitude control histories, launch azimuth, engine burn-time, and gross liftoff weight for space shuttle type vehicles using a search-accelerated, gradient projection parameter optimization technique. The trajectory model available in RAGMOP includes a rotating oblate earth model, the option of input wind tables, discrete and/or continuous throttling for the purposes of limiting the thrust acceleration and/or the maximum dynamic pressure, limitation of the structural load indicators (the product of dynamic pressure with angle-of-attack and sideslip angle), and a wide selection of intermediate and terminal equality constraints.

  18. Organic supplemental nitrogen sources for field corn production after a hairy vetch cover crop

    USDA-ARS?s Scientific Manuscript database

    The combined use of legume cover crops and animal byproduct organic amendments could provide agronomic and environmental benefits to organic farmers by increasing corn grain yield while optimizing N and P inputs. To test this hypothesis we conducted a two-year field study and a laboratory soil incu...

  19. Nitrogen input inventory in the Nooksack-Abbotsford-Sumas Transboundary Region: Key component of an international nitrogen management study

    EPA Science Inventory

    Nitrogen (N) is an essential biological element, so optimizing N use for food production while minimizing the release of N and co-pollutants to the environment is an important challenge. The Nooksack-Abbotsford-Sumas Transboundary (NAS) Region, spanning a portion of the western...

  20. Nitrogen input inventory in the Nooksack-Abbotsford-Sumas Transboundary Region: Key component of an international nitrogen management study.

    EPA Science Inventory

    Background/Question/Methods: Nitrogen (N) is an essential biological element, so optimizing N use for food production while minimizing the release of N and co-pollutants to the environment is an important challenge. The Nooksack-lower Fraser Valley, spanning a portion of the w...

  1. Inverse optimal design of input-to-state stabilisation for affine nonlinear systems with input delays

    NASA Astrophysics Data System (ADS)

    Cai, Xiushan; Meng, Lingxin; Zhang, Wei; Liu, Leipo

    2018-03-01

    We establish robustness of the predictor feedback control law to perturbations appearing at the system input for affine nonlinear systems with time-varying input delay and additive disturbances. Furthermore, it is shown that it is inverse optimal with respect to a differential game problem. All of the stability and inverse optimality proofs are based on the infinite-dimensional backstepping transformation and an appropriate Lyapunov functional. A single-link manipulator subject to input delays and disturbances is given to illustrate the validity of the proposed method.

  2. A Computational approach in optimizing process parameters of GTAW for SA 106 Grade B steel pipes using Response surface methodology

    NASA Astrophysics Data System (ADS)

    Sumesh, A.; Sai Ramnadh, L. V.; Manish, P.; Harnath, V.; Lakshman, V.

    2016-09-01

    Welding has been one of the most common metal joining techniques used in industry for decades. In the global manufacturing scenario, products must be cost effective, so selecting the right process with optimal parameters helps industry minimize production cost. SA 106 Grade B steel is widely used in automobile chassis structures, boiler tubes and pressure vessels. Employing a central composite design, the process parameters for Gas Tungsten Arc Welding were optimized. The input parameters chosen were weld current, peak current and frequency; joint tensile strength was the response considered in this study. Analysis of variance was performed to determine the statistical significance of the parameters, and a regression analysis was performed to determine the effect of the input parameters on the response. The maximum tensile strength obtained in the experiments was 95 kN, for a weld current of 95 A, a frequency of 50 Hz and a peak current of 100 A. Aiming to maximize joint strength, a target value of 100 kN was set in the response optimizer and the regression models were optimized; this output is achievable with a weld current of 62.6148 A, a frequency of 23.1821 Hz and a peak current of 65.9104 A. Using a dye penetrant test, the weld joints were also classified into two categories, good welds and welds with defects. This will also help in obtaining a defect-free joint when welding is performed using the GTAW process.
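    The response-surface step itself is compact: fit a second-order polynomial to the design points and solve for the stationary point of the fitted model. A sketch on synthetic two-factor data follows; the weld-parameter figures above come from the authors' regression, not from this code:

```python
import numpy as np

def quad_features(x1, x2):
    """Design matrix for y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

# Synthetic "experiments" with a known optimum at (x1, x2) = (1.0, -0.5).
x1, x2 = np.meshgrid(np.linspace(-2, 2, 5), np.linspace(-2, 2, 5))
x1, x2 = x1.ravel(), x2.ravel()
y = 10.0 - (x1 - 1.0) ** 2 - (x2 + 0.5) ** 2

b, *_ = np.linalg.lstsq(quad_features(x1, x2), y, rcond=None)

# Stationary point: set the gradient of the fitted quadratic to zero.
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
opt = np.linalg.solve(H, -b[1:3])
```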

  3. Evaluation of input output efficiency of oil field considering undesirable output —A case study of sandstone reservoir in Xinjiang oilfield

    NASA Astrophysics Data System (ADS)

    Zhang, Shuying; Wu, Xuquan; Li, Deshan; Xu, Yadong; Song, Shulin

    2017-06-01

    Based on the input and output data of a sandstone reservoir in the Xinjiang oilfield, the SBM-Undesirable model is used to study the technical efficiency of each block. The results show that evaluating efficiency with the SBM-Undesirable model avoids the defects caused by the radial and angular assumptions of the traditional DEA model and improves the accuracy of the efficiency evaluation. By analyzing the projections of the oil blocks, we find that each block suffers the negative external effects of input redundancy, output deficiency and undesirable output, and that production efficiency differs considerably across blocks. The way to improve the input-output efficiency of the oilfield is to optimize the allocation of resources, reduce the undesirable output and increase the expected output.

  4. Optimization Under Uncertainty for Electronics Cooling Design

    NASA Astrophysics Data System (ADS)

    Bodla, Karthik K.; Murthy, Jayathi Y.; Garimella, Suresh V.

    Optimization under uncertainty is a powerful methodology used in design and optimization to produce robust, reliable designs. Such an optimization methodology, employed when the input quantities of interest are uncertain, produces output uncertainties, helping the designer choose input parameters that would result in satisfactory thermal solutions. Apart from providing basic statistical information such as mean and standard deviation in the output quantities, auxiliary data from an uncertainty based optimization, such as local and global sensitivities, help the designer decide the input parameter(s) to which the output quantity of interest is most sensitive. This helps the design of experiments based on the most sensitive input parameter(s). A further crucial output of such a methodology is the solution to the inverse problem - finding the allowable uncertainty range in the input parameter(s), given an acceptable uncertainty range in the output quantity of interest...
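    The forward half of optimization under uncertainty, propagating input scatter to output statistics, can be sketched with plain Monte Carlo sampling. The thermal model below (T = T_amb + q * R_th) and its parameter distributions are hypothetical stand-ins for an electronics-cooling simulation:

```python
import numpy as np

def junction_temp(q, r_th, t_amb=25.0):
    """Toy thermal model: junction temperature for heat load q and resistance r_th."""
    return t_amb + q * r_th

rng = np.random.default_rng(42)
q = rng.normal(10.0, 0.5, 10_000)      # heat load [W], uncertain input
r_th = rng.normal(2.0, 0.1, 10_000)    # thermal resistance [K/W], uncertain input
T = junction_temp(q, r_th)             # propagated output samples

mean_T, std_T = T.mean(), T.std()      # output statistics for the designer
```

    Local sensitivities and the inverse problem mentioned above would be built on top of exactly this kind of sampled input-output map.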

  5. Productivity growth, case mix and optimal size of hospitals. A 16-year study of the Norwegian hospital sector.

    PubMed

    Anthun, Kjartan Sarheim; Kittelsen, Sverre Andreas Campbell; Magnussen, Jon

    2017-04-01

    This paper analyses productivity growth in the Norwegian hospital sector over a period of 16 years, 1999-2014. This period was characterized by a large ownership reform with subsequent hospital reorganizations and mergers. We describe how technological change, technical productivity, scale efficiency and the estimated optimal size of hospitals have evolved during this period. Hospital admissions were grouped into diagnosis-related groups using a fixed-grouper logic. Four composite outputs were defined and inputs were measured as operating costs. Productivity and efficiency were estimated with bootstrapped data envelopment analyses. Mean productivity increased by 24.6 percentage points from 1999 to 2014, an average annual change of 1.5%. There was a substantial growth in productivity and hospital size following the ownership reform. After the reform (2003-2014), average annual growth was <0.5%. There was no evidence of technical change. Estimated optimal size was smaller than the actual size of most hospitals, yet scale efficiency was high even after hospital mergers. However, the later hospital mergers have not been followed by productivity growth similar to that around the time of the reform. This study addresses the issues of both cross-sectional and longitudinal comparability of case mix between hospitals, and thus provides a framework for future studies. The study adds to the discussion on optimal hospital size. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Effective algorithm for solving complex problems of production control and of material flows control of industrial enterprise

    NASA Astrophysics Data System (ADS)

    Mezentsev, Yu A.; Baranova, N. V.

    2018-05-01

    A universal economic and mathematical model for determining optimal management strategies for the production and logistics subsystems (and their components) of enterprises is considered. The declared universality allows the model to take into account, at the system level, production components, including limitations on the ways raw materials and components are converted into sold goods, as well as resource and logical restrictions on input and output material flows. The presented model and the generated control problems are developed within a unified framework that allows logical conditions of any complexity to be implemented and the corresponding formal optimization tasks to be defined. The conceptual meaning of the criteria and constraints used is explained. The generated mixed-programming tasks are shown to belong to the class NP. An approximate polynomial algorithm is proposed for solving the posed mixed-programming optimization tasks of realistic dimension and high computational complexity. Results of testing the algorithm on tasks across a wide range of dimensions are presented.

  7. Prospects from agroecology and industrial ecology for animal production in the 21st century.

    PubMed

    Dumont, B; Fortun-Lamothe, L; Jouven, M; Thomas, M; Tichit, M

    2013-06-01

    Agroecology and industrial ecology can be viewed as complementary means for reducing the environmental footprint of animal farming systems: agroecology mainly by stimulating natural processes to reduce inputs, and industrial ecology by closing system loops, thereby reducing demand for raw materials, lowering pollution and saving on waste treatment. Surprisingly, animal farming systems have so far been ignored in most agroecological thinking. On the basis of a study by Altieri, who identified the key ecological processes to be optimized, we propose five principles for the design of sustainable animal production systems: (i) adopting management practices aiming to improve animal health, (ii) decreasing the inputs needed for production, (iii) decreasing pollution by optimizing the metabolic functioning of farming systems, (iv) enhancing diversity within animal production systems to strengthen their resilience and (v) preserving biological diversity in agroecosystems by adapting management practices. We then discuss how these different principles combine to generate environmental, social and economic performance in six animal production systems (ruminants, pigs, rabbits and aquaculture) covering a long gradient of intensification. The two principles concerning economy of inputs and reduction of pollution emerged in nearly all the case studies, a finding that can be explained by the economic and regulatory constraints affecting animal production. Integrated management of animal health was seldom mobilized, as alternatives to chemical drugs have only recently been investigated, and the results are not yet transferable to farming practices. A number of ecological functions and ecosystem services (recycling of nutrients, forage yield, pollination, resistance to weed invasion, etc.) are closely linked to biodiversity, and their persistence depends largely on maintaining biological diversity in agroecosystems. 
We conclude that the development of such ecology-based alternatives for animal production implies changes in the positions adopted by technicians and extension services, researchers and policymakers. Animal production systems should not only be considered holistically, but also in the diversity of their local and regional conditions. The ability of farmers to make their own decisions on the basis of the close monitoring of system performance is most important to ensure system sustainability.

  8. Delineation of site-specific management units in a saline region at the Venice Lagoon margin, Italy, using soil reflectance and apparent electrical conductivity

    USDA-ARS?s Scientific Manuscript database

    Site-specific crop management utilizes site-specific management units (SSMUs) to apply inputs when, where, and in the amount needed to increase food productivity, optimize resource utilization, increase profitability, and reduce detrimental environmental impacts. It is the objective of this study to...

  9. Analysis of oil-pipeline distribution of multiple products subject to delivery time-windows

    NASA Astrophysics Data System (ADS)

    Jittamai, Phongchai

    This dissertation defines the operational problems of, and develops solution methodologies for, a distribution of multiple products into oil pipeline subject to delivery time-windows constraints. A multiple-product oil pipeline is a pipeline system composing of pipes, pumps, valves and storage facilities used to transport different types of liquids. Typically, products delivered by pipelines are petroleum of different grades moving either from production facilities to refineries or from refineries to distributors. Time-windows, which are generally used in logistics and scheduling areas, are incorporated in this study. The distribution of multiple products into oil pipeline subject to delivery time-windows is modeled as multicommodity network flow structure and mathematically formulated. The main focus of this dissertation is the investigation of operating issues and problem complexity of single-source pipeline problems and also providing solution methodology to compute input schedule that yields minimum total time violation from due delivery time-windows. The problem is proved to be NP-complete. The heuristic approach, a reversed-flow algorithm, is developed based on pipeline flow reversibility to compute input schedule for the pipeline problem. This algorithm is implemented in no longer than O(T·E) time. This dissertation also extends the study to examine some operating attributes and problem complexity of multiple-source pipelines. The multiple-source pipeline problem is also NP-complete. A heuristic algorithm modified from the one used in single-source pipeline problems is introduced. This algorithm can also be implemented in no longer than O(T·E) time. Computational results are presented for both methodologies on randomly generated problem sets. The computational experience indicates that reversed-flow algorithms provide good solutions in comparison with the optimal solutions. 
Only 25% of the problems tested were more than 30% greater than optimal values and approximately 40% of the tested problems were solved optimally by the algorithms.

  10. A System-Oriented Approach for the Optimal Control of Process Chains under Stochastic Influences

    NASA Astrophysics Data System (ADS)

    Senn, Melanie; Schäfer, Julian; Pollak, Jürgen; Link, Norbert

    2011-09-01

    Process chains in manufacturing consist of multiple connected processes in terms of dynamic systems. The properties of a product passing through such a process chain are influenced by the transformation of each single process. There exist various methods for the control of individual processes, such as classical state controllers from cybernetics or function mapping approaches realized by statistical learning. These controllers ensure that a desired state is obtained at process end despite of variations in the input and disturbances. The interactions between the single processes are thereby neglected, but play an important role in the optimization of the entire process chain. We divide the overall optimization into two phases: (1) the solution of the optimization problem by Dynamic Programming to find the optimal control variable values for each process for any encountered end state of its predecessor and (2) the application of the optimal control variables at runtime for the detected initial process state. The optimization problem is solved by selecting adequate control variables for each process in the chain backwards based on predefined quality requirements for the final product. For the demonstration of the proposed concept, we have chosen a process chain from sheet metal manufacturing with simplified transformation functions.
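    The two-phase scheme described above can be sketched with a toy two-stage chain: tabulate optimal controls backwards over a discretized state grid (phase 1), then look up the control for the state actually observed at runtime (phase 2). The dynamics, grids and quality target below are all invented for illustration:

```python
STATES = [round(0.1 * i, 1) for i in range(0, 21)]    # state grid 0.0 .. 2.0
CONTROLS = [round(0.1 * i, 1) for i in range(-5, 6)]  # admissible controls
TARGET = 1.5                                          # desired final state

def step(s, u):
    """Toy stage transformation: shift the state by u, clipped to the grid."""
    return min(max(round(s + u, 1), 0.0), 2.0)

# Phase 1: backward tabulation of cost-to-go and the optimal policy per stage.
cost = {s: abs(s - TARGET) for s in STATES}           # terminal quality cost
policy = []
for _ in range(2):                                    # two chained processes
    stage_policy = {}
    stage_cost = {}
    for s in STATES:
        u_best = min(CONTROLS, key=lambda u: cost[step(s, u)])
        stage_policy[s] = u_best
        stage_cost[s] = cost[step(s, u_best)]
    policy.insert(0, stage_policy)                    # earlier stages go first
    cost = stage_cost

# Phase 2: runtime lookup for the state observed at chain entry.
s = 0.6
for stage in policy:
    s = step(s, stage[s])
```

    The backward pass is what lets each process react to any end state of its predecessor, which is the interaction the single-process controllers neglect.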

  11. Time-dependent fermentation control strategies for enhancing synthesis of marine bacteriocin 1701 using artificial neural network and genetic algorithm.

    PubMed

    Peng, Jiansheng; Meng, Fanmei; Ai, Yuncan

    2013-06-01

    The artificial neural network (ANN) and genetic algorithm (GA) were combined to optimize the fermentation process for enhancing production of marine bacteriocin 1701 in a 5-L stirred tank. Fermentation time, pH value, dissolved oxygen level, temperature and turbidity were used to construct a "5-10-1" ANN topology to identify the nonlinear relationship between fermentation parameters and the antibiotic effects (shown as inhibition diameters) of bacteriocin 1701. The values predicted by the trained ANN model coincided with the observed ones (the coefficient R(2) was greater than 0.95). Because fermentation time was included as one of the ANN input nodes, the fermentation parameters could be optimized stage by stage through the GA, and an optimal fermentation process control trajectory was created. The production of marine bacteriocin 1701 was significantly improved by 26% under the guidance of the fermentation control trajectory optimized using the combined ANN-GA method. Copyright © 2013 Elsevier Ltd. All rights reserved.
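    The GA half of such a scheme can be sketched with a small real-coded genetic algorithm evolving set-points against a surrogate response. The surrogate below is a made-up smooth function standing in for the trained ANN, and the operators (truncation selection, averaging crossover, Gaussian mutation) are generic choices, not the authors':

```python
import random

random.seed(1)

def surrogate(ph, temp):
    """Hypothetical response standing in for the trained ANN (peak: pH 7.0, 37 C)."""
    return -((ph - 7.0) ** 2) - 0.01 * (temp - 37.0) ** 2

def evolve(pop_size=40, generations=60):
    """Evolve (pH, temperature) set-points toward the surrogate's maximum."""
    pop = [(random.uniform(4, 9), random.uniform(25, 45)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: surrogate(*ind), reverse=True)
        parents = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)        # averaging crossover + mutation
            children.append(((a[0] + b[0]) / 2 + random.gauss(0, 0.05),
                             (a[1] + b[1]) / 2 + random.gauss(0, 0.2)))
        pop = parents + children
    return max(pop, key=lambda ind: surrogate(*ind))

best_ph, best_temp = evolve()
```

    In the stage-wise setup above, one such search would be run per fermentation stage, with the ANN queried in place of the surrogate.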

  12. Finding the right compromise between productivity and environmental efficiency on high input tropical dairy farms: a case study.

    PubMed

    Berre, David; Blancard, Stéphane; Boussemart, Jean-Philippe; Leleu, Hervé; Tillard, Emmanuel

    2014-12-15

    This study focused on the trade-off between milk production and its environmental impact on greenhouse gas (GHG) emissions and nitrogen surplus in a high input tropical system. We first identified the objectives of the three main stakeholders in the dairy sector (farmers, a milk cooperative and environmentalists). The main aim of the farmers and cooperative's scenarios was to increase milk production without additional environmental deterioration but with the possibility of increasing the inputs for the cooperative. The environmentalist's objective was to reduce environmental deterioration. Second, we designed a sustainable intensification scenario combining maximization of milk production and minimization of environmental impacts. Third, the objectives for reducing the eco-inefficiency of dairy systems in Reunion Island were incorporated in a framework for activity analysis, which was used to model a technological approach with desirable and undesirable outputs. Of the four scenarios, the sustainable intensification scenario produced the best results, with a potential decrease of 238 g CO2-e per liter of milk (i.e. a reduction of 13.93% compared to the current level) and a potential 7.72 L increase in milk produced for each kg of nitrogen surplus (i.e. an increase of 16.45% compared to the current level). These results were based on the best practices observed in Reunion Island and optimized manure management, crop-livestock interactions, and production processes. Our results also showed that frontier efficiency analysis can shed new light on the challenge of developing sustainable intensification in high input tropical dairy systems. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Optimizing noise control strategy in a forging workshop.

    PubMed

    Razavi, Hamideh; Ramazanifar, Ehsan; Bagherzadeh, Jalal

    2014-01-01

    In this paper, a computer program based on a genetic algorithm is developed to find an economic solution for noise control in a forging workshop. Initially, input data, including characteristics of sound sources, human exposure, abatement techniques, and production plans are inserted into the model. Using sound pressure levels at working locations, the operators who are at higher risk are identified and picked out for the next step. The program is devised in MATLAB such that the parameters can be easily defined and changed for comparison. The final results are structured into 4 sections that specify an appropriate abatement method for each operator and machine, minimum allowance time for high-risk operators, required damping material for enclosures, and minimum total cost of these treatments. The validity of input data in addition to proper settings in the optimization model ensures the final solution is practical and economically reasonable.

  14. Practical input optimization for aircraft parameter estimation experiments. Ph.D. Thesis, 1990

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1993-01-01

    The object of this research was to develop an algorithm for the design of practical, optimal flight test inputs for aircraft parameter estimation experiments. A general, single-pass technique was developed which allows global optimization of the flight test input design for parameter estimation using the principles of dynamic programming, with the input forms limited to square waves only. Provision was made for practical constraints on the input, including amplitude constraints, control system dynamics, and selected input frequency range exclusions. In addition, the input design was accomplished while imposing output amplitude constraints required by model validity and considerations of safety during the flight test. The algorithm has multiple-input design capability, with optional inclusion of a constraint that only one control moves at a time, so that a human pilot can implement the inputs. It is shown that the technique can be used to design experiments for estimation of open loop model parameters from closed loop flight test data. The report includes a new formulation of the optimal input design problem, a description of a new approach to the solution, and a summary of the characteristics of the algorithm, followed by three example applications which demonstrate the quality and expanded capabilities of the input designs produced by the new technique. In all cases, the new input design approach showed significant improvement over previous input design methods in terms of achievable parameter accuracies.

  15. Modelling and Optimization Studies on a Novel Lipase Production by Staphylococcus arlettae through Submerged Fermentation

    PubMed Central

    Chauhan, Mamta; Chauhan, Rajinder Singh; Garlapati, Vijay Kumar

    2013-01-01

    Microbial enzymes from extremophilic regions such as hot springs serve as an important source of stable and valuable industrial enzymes. The present paper encompasses a modeling and optimization approach for production of a halophilic, solvent-tolerant, alkaline lipase from Staphylococcus arlettae through response surface methodology integrated with a nature-inspired genetic algorithm. A response surface model based on a central composite design was developed by considering the individual and interaction effects of fermentation conditions on lipase production through submerged fermentation. The validated input space of the response surface model (with an R2 value of 96.6%) was utilized for optimization through the genetic algorithm. An optimum lipase yield of 6.5 U/mL was obtained under the binary-coded genetic algorithm's predicted conditions of 9.39% inoculum and an oil concentration of 10.285% in 2.99 h at pH 7.32 and 38.8°C. This outcome could help introduce this extremophilic (halophilic, solvent-tolerant) lipase to the industrial biotechnology sector as a probable choice for the food, detergent, chemical, and pharmaceutical industries. The present work also demonstrates the feasibility of integrating statistical design tools with computational tools for optimizing fermentation conditions for maximum lipase production. PMID:24455210

  16. Dynamic Pressure Microphones

    NASA Astrophysics Data System (ADS)

    Werner, E.

    In 1876, Alexander Graham Bell described his first telephone with a microphone using magnetic induction to convert the voice input into an electric output signal. The basic principle led to a variety of designs optimized for different needs, from hearing-impaired users to singers or broadcast announcers. Of the various sound pressure versions, only the moving coil design is still in mass production for speech and music applications.

  17. Toolpath Strategy and Optimum Combination of Machining Parameter during Pocket Mill Process of Plastic Mold Steels Material

    NASA Astrophysics Data System (ADS)

    Wibowo, Y. T.; Baskoro, S. Y.; Manurung, V. A. T.

    2018-02-01

    Plastic-based products have spread all over the world into many aspects of life. Their ability to substitute for other materials is getting stronger and wider, and the use of plastic materials is increasing and has become unavoidable. Plastic-based mass production requires an injection process and, consequently, a mold. The milling of plastic mold steel was done using an HSS end-mill cutting tool, which is widely used in small and medium enterprises because it can be resharpened and is relatively inexpensive. Studies on tool geometry show that it has an important effect on quality improvement. Cutting speed, feed rate, depth of cut and radii are input parameters besides the toolpath strategy. This paper aims to investigate input parameters and cutting-tool behavior under several different toolpath strategies. For experimental efficiency, the Taguchi method and ANOVA were used. The responses studied are surface roughness and cutting behavior. By achieving the expected quality, no additional process is required. Finally, the optimal combination of machining parameters delivers the expected roughness and a substantially reduced cutting time. In practice, however, SMEs do not make optimal use of these data for cost reduction.
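    The Taguchi analysis step, converting replicated responses into signal-to-noise ratios and picking the factor level with the best mean S/N, can be sketched as below. All data and factor names are hypothetical, not the paper's measurements; surface roughness is treated as a "smaller is better" response.

```python
import math

# Hypothetical surface-roughness Ra readings (um) from a small factorial run:
# each trial pairs a (speed, feed) level combination with replicated readings.
trials = {
    ("speed_low",  "feed_low"):  [1.8, 1.9],
    ("speed_low",  "feed_high"): [2.6, 2.4],
    ("speed_high", "feed_low"):  [1.2, 1.3],
    ("speed_high", "feed_high"): [2.0, 2.1],
}

def sn_smaller_is_better(values):
    """Taguchi signal-to-noise ratio for a 'smaller is better' response."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

def mean_sn(factor_index, level):
    """Average S/N over all trials run at the given factor level."""
    sns = [sn_smaller_is_better(v)
           for k, v in trials.items() if k[factor_index] == level]
    return sum(sns) / len(sns)

# The level with the highest mean S/N is the preferred setting.
best_speed = max(["speed_low", "speed_high"], key=lambda l: mean_sn(0, l))
best_feed = max(["feed_low", "feed_high"], key=lambda l: mean_sn(1, l))
print(best_speed, best_feed)
```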

  18. A similarity score-based two-phase heuristic approach to solve the dynamic cellular facility layout for manufacturing systems

    NASA Astrophysics Data System (ADS)

    Kumar, Ravi; Singh, Surya Prakash

    2017-11-01

    The dynamic cellular facility layout problem (DCFLP) is a well-known NP-hard problem. It has been estimated that an efficient DCFLP design reduces the manufacturing cost of products by maintaining the minimum material flow among all machines in all cells, as material flow contributes around 10-30% of the total product cost. However, being NP-hard, the DCFLP is very difficult to solve optimally in reasonable time. Therefore, this article proposes a novel similarity score-based two-phase heuristic approach to solve the DCFLP optimally, considering multiple products to be manufactured over multiple time periods in the layout. In the first phase of the proposed heuristic, machine-cell clusters are created based on similarity scores between machines. These are provided as input to the second phase to minimize inter/intracell material handling costs and rearrangement costs over the entire planning period. The solution methodology of the proposed approach is demonstrated. To show the efficiency of the two-phase heuristic approach, 21 instances are generated and solved using the optimization software package LINGO. The results show that the proposed approach can optimally solve the DCFLP in reasonable time.
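    Phase 1 of such a heuristic, forming machine cells from similarity scores, can be sketched with a Jaccard score over machine-part routing sets. This is a minimal illustrative analogue, not the authors' scoring formula; the routing data and threshold are invented.

```python
# Hypothetical machine-part incidence data: which parts visit which machine.
routes = {
    "M1": {"P1", "P2", "P5"},
    "M2": {"P1", "P2"},
    "M3": {"P3", "P4"},
    "M4": {"P3", "P4", "P5"},
}

def jaccard(a, b):
    """Similarity score between two machines based on shared parts."""
    return len(a & b) / len(a | b)

def cluster(routes, threshold=0.4):
    """Phase 1 (sketch): greedily place each machine into the first cell
    where it is sufficiently similar to every machine already there."""
    cells = []
    for m in sorted(routes):
        for cell in cells:
            if all(jaccard(routes[m], routes[n]) >= threshold for n in cell):
                cell.append(m)
                break
        else:
            cells.append([m])
    return cells

print(cluster(routes))   # two machine cells
```

The resulting machine cells would then feed the second phase, which assigns cells to locations per period to minimize material handling and rearrangement costs.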

  19. Implementation of fuzzy logic to determining selling price of products in a local corporate chain store

    NASA Astrophysics Data System (ADS)

    Kristiana, S. P. D.

    2017-12-01

    A corporate chain store is a type of retail company that is growing rapidly in Indonesia. Competition between retail companies is very tight, so retailers must evaluate their performance continuously in order to survive. The selling price of products is one of the essential attributes that consumers pay attention to, and it is used to evaluate the performance of the industry. This research aimed to determine the optimal selling price of a product by considering cost factors, namely the purchase price of the product from the supplier, holding costs, and transportation costs. A fuzzy logic approach is used for data processing with MATLAB software. Fuzzy logic is selected because this method can handle complex factors. The result is a model for determining the optimal selling price that takes the three cost factors as inputs. MAPE and the model's prediction ability for several products are used for validation and verification, with average values of 0.0525 for MAPE and 94.75% for prediction ability. The conclusion is that the model can predict the selling price with an accuracy of up to 94.75%, so it can be used as a tool for the corporate chain store to determine the optimal selling price of its products.
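    A minimal sketch of the idea, mapping the three cost inputs to a selling price through fuzzy rules, is below. It uses a zero-order Sugeno-style rule base with triangular memberships; the breakpoints, markup levels, and rule structure are invented, not the MATLAB model from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def selling_price(purchase, holding, transport):
    """Zero-order Sugeno-style inference: total cost drives the markup.
    Breakpoints and markup levels are illustrative assumptions."""
    cost = purchase + holding + transport
    rules = [
        (tri(cost, -1, 0, 50), 0.30),    # low cost    -> high markup
        (tri(cost, 25, 50, 75), 0.20),   # medium cost -> medium markup
        (tri(cost, 50, 100, 151), 0.10), # high cost   -> low markup
    ]
    num = sum(w * m for w, m in rules)
    den = sum(w for w, _ in rules)
    markup = num / den if den else 0.15  # fallback outside the rule range
    return cost * (1 + markup)

print(selling_price(40, 5, 5))  # total cost 50 -> 20% markup -> 60.0
```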

  20. Model-Free Primitive-Based Iterative Learning Control Approach to Trajectory Tracking of MIMO Systems With Experimental Validation.

    PubMed

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M

    2015-11-01

    This paper proposes a novel model-free trajectory tracking of multiple-input multiple-output (MIMO) systems by the combination of iterative learning control (ILC) and primitives. The optimal trajectory tracking solution is obtained in terms of previously learned solutions to simple tasks called primitives. The library of primitives stored in memory consists of pairs of reference input/controlled output signals. The reference input primitives are optimized in a model-free ILC framework without using knowledge of the controlled process. The guaranteed convergence of the learning scheme is built upon a model-free virtual reference feedback tuning design of the feedback decoupling controller. Each new complex trajectory to be tracked is decomposed into the output primitives regarded as basis functions. The optimal reference input for the control system to track the desired trajectory is next recomposed from the reference input primitives. This is advantageous because the optimal reference input is computed straightforwardly, without the need to learn from repeated executions of the tracking task. In addition, the optimization problem specific to trajectory tracking of square MIMO systems is decomposed into a set of optimization problems assigned to each separate single-input single-output control channel, which ensures a convenient model-free decoupling. The new model-free primitive-based ILC approach is capable of planning, reasoning, and learning. A case study dealing with model-free control tuning for a nonlinear aerodynamic system is included to validate the new approach. The experimental results are given.

  1. Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models

    NASA Astrophysics Data System (ADS)

    Rothenberger, Michael J.

    This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. 
The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input-output measurements, and is the approach used in this dissertation. Research in the literature studies optimal current input shaping for high-order electrochemical battery models and focuses on offline laboratory cycling. While this body of research highlights improvements in identifiability through optimal input shaping, each optimal input is a function of nominal parameters, which creates a tautology. The parameter values must be known a priori to determine the optimal input for maximizing estimation speed and accuracy. The system identification literature presents multiple studies containing methods that avoid the challenges of this tautology, but these methods are absent from the battery parameter estimation domain. The gaps in the above literature are addressed in this dissertation through the following five novel and unique contributions. First, this dissertation optimizes the parameter identifiability of a thermal battery model, which Sergio Mendoza experimentally validates through a close collaboration with this dissertation's author. Second, this dissertation extends input-shaping optimization to a linear and nonlinear equivalent-circuit battery model and illustrates the substantial improvements in Fisher identifiability for a periodic optimal signal when compared against automotive benchmark cycles. Third, this dissertation presents an experimental validation study of the simulation work in the previous contribution. The estimation study shows that the automotive benchmark cycles either converge slower than the optimized cycle, or not at all for certain parameters. Fourth, this dissertation examines how automotive battery packs with additional power electronic components that dynamically route current to individual cells/modules can be used for parameter identifiability optimization. 
While the user and vehicle supervisory controller dictate the current demand for these packs, the optimized internal allocation of current still improves identifiability. Finally, this dissertation presents a robust Bayesian sequential input shaping optimization study to maximize the conditional Fisher information of the battery model parameters without prior knowledge of the nominal parameter set. This iterative algorithm only requires knowledge of the prior parameter distributions to converge to the optimal input trajectory.
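    The Fisher-information machinery behind such input shaping can be sketched for a first-order equivalent-circuit model: propagate output sensitivities with respect to the parameters along a candidate current profile and accumulate the information matrix. The model structure, parameter values, and input profile are illustrative assumptions, not the dissertation's models.

```python
def fim_ecm(current, a=0.9, sigma=0.01):
    """2x2 Fisher information matrix for the parameters (R0, R1) of a
    first-order equivalent-circuit model
        v1[k+1] = a*v1[k] + (1 - a)*R1*i[k],   v[k] = OCV + R0*i[k] + v1[k],
    computed from output sensitivities under Gaussian measurement noise."""
    s1 = 0.0                        # dv/dR1, carried through the RC state
    F = [[0.0, 0.0], [0.0, 0.0]]
    for i in current:
        s = (i, s1)                 # dv/dR0 = i[k], dv/dR1 = s1[k]
        for r in range(2):
            for c in range(2):
                F[r][c] += s[r] * s[c] / sigma**2
        s1 = a * s1 + (1 - a) * i   # sensitivity recursion
    return F

# A period-4 square-wave current: its state sensitivity is not proportional
# to the current itself, so both parameters are jointly identifiable.
F = fim_ecm([1.0 if k % 4 < 2 else -1.0 for k in range(40)])
det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
print(det > 0)   # nonsingular information matrix -> identifiable
```

Input shaping in this spirit searches over candidate current profiles to maximize a scalar measure of F (determinant or trace), subject to the pack's operating limits.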

  2. Program to Optimize Simulated Trajectories (POST). Volume 2: Utilization manual

    NASA Technical Reports Server (NTRS)

    Bauer, G. L.; Cornick, D. E.; Habeger, A. R.; Petersen, F. M.; Stevenson, R.

    1975-01-01

    Information pertinent to users of the Program to Optimize Simulated Trajectories (POST) is presented. The input required and the output available are described for each of the trajectory and targeting/optimization options. A sample input listing and the resulting output are given.

  3. Reexamination of optimal quantum state estimation of pure states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayashi, A.; Hashimoto, T.; Horibe, M.

    2005-09-15

    A direct derivation is given for the optimal mean fidelity of quantum state estimation of a d-dimensional unknown pure state with its N copies given as input, which was first obtained by Hayashi in terms of an infinite set of covariant positive operator valued measures (POVMs) and by Bruss and Macchiavello by establishing a connection to optimal quantum cloning. An explicit condition on POVM measurement operators for optimal estimators is obtained, by which we construct optimal estimators with finite POVMs using exact quadratures on a hypersphere. These finite optimal estimators are not generally universal, where universality means the fidelity is independent of input states. However, any optimal estimator with finite POVM for M(>N) copies is universal if it is used for N copies as input.
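    For reference, the optimal mean fidelity discussed here is, in the abstract's notation (N input copies, dimension d), the expression reported in the cited works:

```latex
\bar{F}_{\mathrm{opt}} = \frac{N+1}{N+d}
```

which for qubits (d = 2) reduces to the Bruss-Macchiavello value (N+1)/(N+2), approaching perfect fidelity as the number of copies N grows.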

  4. Comparative study of thermochemical processes for hydrogen production from biomass fuels.

    PubMed

    Biagini, Enrico; Masoni, Lorenzo; Tognotti, Leonardo

    2010-08-01

    Different thermochemical configurations (gasification, combustion, electrolysis and syngas separation) are studied for producing hydrogen from biomass fuels. The aim is to provide data for the production unit and the subsequent optimization of the "hydrogen chain" (from energy source selection to hydrogen utilization) in the frame of the Italian project "Filiera Idrogeno". The project focuses on a regional scale (Tuscany, Italy), renewable energies and automotive hydrogen. Decentralized, small production plants are required to solve the logistic problems of biomass supply and to match the limited hydrogen infrastructure. Different options (gasification with air, oxygen or steam/oxygen mixtures, combustion, electrolysis) and conditions (varying the ratios of biomass and gas input) are studied by developing process models with uniform hypotheses so that the results can be compared. The results obtained in this work concern the operating parameters, process efficiencies, and material and energetic needs, and are fundamental to optimizing the entire hydrogen chain. Copyright 2010 Elsevier Ltd. All rights reserved.

  5. Simulative method for determining the optimal operating conditions for a cooling plate for lithium-ion battery cell modules

    NASA Astrophysics Data System (ADS)

    Smith, Joshua; Hinterberger, Michael; Hable, Peter; Koehler, Juergen

    2014-12-01

    Extended battery system lifetime and reduced costs are essential to the success of electric vehicles. An effective thermal management strategy is one method of enhancing system lifetime and increasing vehicle range. Vehicle-typical space restrictions favor the minimization of battery thermal management system (BTMS) size and weight, making BTMS production and subsequent vehicle integration extremely difficult and complex. Due to these space requirements, a cooling plate as part of a water-glycerol cooling circuit is commonly implemented. This paper presents a computational fluid dynamics (CFD) model and multi-objective analysis technique for determining the thermal effect of coolant flow rate and inlet temperature in a cooling plate, at a range of vehicle operating conditions, on a battery system, thereby providing a dynamic input for one-dimensional models. Traditionally, one-dimensional vehicular thermal management system models assume a static heat input from components such as a battery system; as a result, the components are designed for a set coolant input (flow rate and inlet temperature). Such a design method is insufficient for dynamic thermal management models and control strategies, thereby compromising system efficiency. The presented approach allows for optimal BTMS design and integration in the vehicular coolant circuit.

  6. H∞ memory feedback control with input limitation minimization for offshore jacket platform stabilization

    NASA Astrophysics Data System (ADS)

    Yang, Jia Sheng

    2018-06-01

    In this paper, we investigate an H∞ memory controller with input limitation minimization (HMCIM) for offshore jacket platform stabilization. The main objective of this study is to reduce the control consumption and protect the actuator while satisfying the requirements on system performance. First, we introduce a dynamic model of the offshore platform with low-order main modes based on a mode reduction method from numerical analysis. Then, based on H∞ control theory and matrix inequality techniques, we develop a novel H∞ memory controller with input limitation. Furthermore, a non-convex optimization model to minimize input energy consumption is proposed. Since this non-convex model is difficult to solve directly by optimization algorithms, we use a relaxation method with matrix operations to transform it into a convex optimization model, which can then be solved by a standard convex optimization solver in MATLAB or CPLEX. Finally, several numerical examples are given to validate the proposed models and methods.

  7. Using an artificial neural network to predict the optimal conditions for enzymatic hydrolysis of apple pomace.

    PubMed

    Gama, Repson; Van Dyk, J Susan; Burton, Mike H; Pletschke, Brett I

    2017-06-01

    The enzymatic degradation of lignocellulosic biomass such as apple pomace is a complex process influenced by a number of hydrolysis conditions. Predicting optimal conditions, including enzyme and substrate concentration, temperature and pH, can improve conversion efficiency. In this study, the production of sugar monomers from apple pomace using commercial enzyme preparations, Celluclast 1.5L, Viscozyme L and Novozyme 188, was investigated. A limited number of experiments were carried out and then analysed using an artificial neural network (ANN) to model the enzymatic hydrolysis process. The ANN was used to simulate the enzymatic hydrolysis process for a range of input variables, and the optimal conditions were successfully selected, as indicated by the R2 value of 0.99 and a small MSE value. The inputs for the ANN were substrate loading, enzyme loading, temperature, initial pH and a combination of these parameters, while release profiles of glucose and reducing sugars were the outputs. Enzyme loadings of 0.5 and 0.2 mg/g substrate and a substrate loading of 30% were optimal for glucose and reducing sugar release from apple pomace, respectively, resulting in concentrations of 6.5 g/L glucose and 28.9 g/L reducing sugars. Apple pomace hydrolysis can be successfully carried out based on the predicted optimal conditions from the ANN.
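    The ANN modelling step can be sketched with a tiny one-input network trained by plain gradient descent. Everything here is illustrative: the "data" are synthetic points shaped to peak near a 30% substrate loading like the reported optimum, and the network (6 tanh hidden units) is far smaller than anything one would fit in practice.

```python
import math, random

random.seed(0)

# Synthetic stand-in data: substrate loading (%) vs. glucose release (g/L),
# shaped to peak near 30% like the paper's optimum; values are invented.
data = [(x / 50.0, math.exp(-((x - 30.0) / 15.0) ** 2) * 6.5)
        for x in range(5, 50, 5)]

H = 6                                            # hidden neurons
w1 = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    hidden = [math.tanh(w * x + b) for w, b in w1]
    return sum(v * h for v, h in zip(w2, hidden)) + b2, hidden

def mse():
    return sum((forward(x)[0] - y) ** 2 for x, y in data) / len(data)

loss_before = mse()
lr = 0.01
for _ in range(5000):                            # plain stochastic gradient descent
    for x, y in data:
        out, hidden = forward(x)
        err = out - y
        for j in range(H):
            w, b = w1[j]
            grad_h = err * w2[j] * (1 - hidden[j] ** 2)   # backprop through tanh
            w1[j] = (w - lr * grad_h * x, b - lr * grad_h)
            w2[j] -= lr * err * hidden[j]
        b2 -= lr * err
loss_after = mse()
print(loss_before, "->", loss_after)
```

Once trained, sweeping `forward` over a grid of inputs is the cheap simulation step the paper uses to locate the optimal hydrolysis conditions.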

  8. The economics of project analysis: Optimal investment criteria and methods of study

    NASA Technical Reports Server (NTRS)

    Scriven, M. C.

    1979-01-01

    Insight is provided toward the development of an optimal program for investment analysis of project proposals offering commercial potential, and its components. This involves a critique of economic investment criteria viewed in relation to the requirements of engineering economy analysis. An outline for a systems approach to project analysis is given. Application of the Leontief input-output methodology to the analysis of projects involving multiple processes and products is investigated. Effective application of elements of neoclassical economic theory to investment analysis of project components is demonstrated. Patterns of both static and dynamic activity levels are incorporated.

  9. Bioprocessing of wheat bran for the production of lignocellulolytic enzyme cocktail by Cotylidia pannosa under submerged conditions.

    PubMed

    Sharma, Deepika; Garlapat, Vijay Kumar; Goel, Gunjan

    2016-04-02

    Characterization and production of efficient lignocellulolytic enzyme cocktails for biomass conversion is a need of the biofuel industry. The present investigation reports modeling and optimization studies of lignocellulolytic enzyme cocktail production by Cotylidia pannosa under submerged conditions. The predominant enzyme activities of cellulase, xylanase and laccase were produced in the cocktail under submerged conditions using wheat bran as a substrate. A central composite design approach was utilized to model the production process using temperature, pH, incubation time and agitation as input variables, with the goal of optimizing the output variables, namely cellulase, xylanase and laccase activities. The effects of individual, square and interaction terms on cellulase, xylanase and laccase activities were depicted through non-linear regression equations with significant R2 and P-values. Optimized values of 20 U/ml, 17 U/ml and 13 U/ml of cellulase, xylanase and laccase activities, respectively, were obtained with a media pH of 5.0 in 77 h at 31°C and 140 rpm using wheat bran as a substrate. Overall, the present study introduces a fungal strain capable of producing a lignocellulolytic enzyme cocktail for subsequent applications in the biofuel industry.

  10. Bioprocessing of wheat bran for the production of lignocellulolytic enzyme cocktail by Cotylidia pannosa under submerged conditions

    PubMed Central

    Sharma, Deepika; Garlapat, Vijay Kumar; Goel, Gunjan

    2016-01-01

    ABSTRACT Characterization and production of efficient lignocellulolytic enzyme cocktails for biomass conversion is a need of the biofuel industry. The present investigation reports modeling and optimization studies of lignocellulolytic enzyme cocktail production by Cotylidia pannosa under submerged conditions. The predominant enzyme activities of cellulase, xylanase and laccase were produced in the cocktail under submerged conditions using wheat bran as a substrate. A central composite design approach was utilized to model the production process using temperature, pH, incubation time and agitation as input variables, with the goal of optimizing the output variables, namely cellulase, xylanase and laccase activities. The effects of individual, square and interaction terms on cellulase, xylanase and laccase activities were depicted through non-linear regression equations with significant R2 and P-values. Optimized values of 20 U/ml, 17 U/ml and 13 U/ml of cellulase, xylanase and laccase activities, respectively, were obtained with a media pH of 5.0 in 77 h at 31°C and 140 rpm using wheat bran as a substrate. Overall, the present study introduces a fungal strain capable of producing a lignocellulolytic enzyme cocktail for subsequent applications in the biofuel industry. PMID:26941214

  11. Contingency Contractor Optimization Phase 3 Sustainment Database Design Document - Contingency Contractor Optimization Tool - Prototype

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frazier, Christopher Rawls; Durfee, Justin David; Bandlow, Alisa

    The Contingency Contractor Optimization Tool – Prototype (CCOT-P) database is used to store input and output data for the linear program model described in [1]. The database supports queries to retrieve these data, as well as updates and inserts of new input data.

  12. Surrogate-based optimization of hydraulic fracturing in pre-existing fracture networks

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Sun, Yunwei; Fu, Pengcheng; Carrigan, Charles R.; Lu, Zhiming; Tong, Charles H.; Buscheck, Thomas A.

    2013-08-01

    Hydraulic fracturing has been used widely to stimulate production of oil, natural gas, and geothermal energy in formations with low natural permeability. Numerical optimization of fracture stimulation often requires a large number of evaluations of objective functions and constraints from forward hydraulic fracturing models, which are computationally expensive and even prohibitive in some situations. Moreover, there are a variety of uncertainties associated with the pre-existing fracture distributions and rock mechanical properties, which affect the optimized decisions for hydraulic fracturing. In this study, a surrogate-based approach is developed for efficient optimization of hydraulic fracturing well design in the presence of natural-system uncertainties. The fractal dimension is derived from the simulated fracturing network as the objective for maximizing energy recovery sweep efficiency. The surrogate model, which is constructed using training data from high-fidelity fracturing models for mapping the relationship between uncertain input parameters and the fractal dimension, provides fast approximation of the objective functions and constraints. A suite of surrogate models constructed using different fitting methods is evaluated and validated for fast predictions. Global sensitivity analysis is conducted to gain insights into the impact of the input variables on the output of interest, and further used for parameter screening. The high efficiency of the surrogate-based approach is demonstrated for three optimization scenarios with different and uncertain ambient conditions. Our results suggest the critical importance of considering uncertain pre-existing fracture networks in optimization studies of hydraulic fracturing.
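    The surrogate idea, replacing expensive forward simulations with a cheap fitted approximation during optimization, can be sketched in a few lines. The "expensive model", its single design parameter, and the quadratic surrogate are all stand-ins; the study's surrogates are fitted to high-fidelity fracturing simulations over many uncertain inputs.

```python
def expensive_model(x):
    """Stand-in for a high-fidelity fracturing simulation: returns a
    fractal-dimension-like objective for well design parameter x (invented)."""
    return 2.0 - 0.5 * (x - 1.3) ** 2   # true optimum at x = 1.3

# Build a surrogate from three training runs of the full model.
xs = [0.0, 1.0, 2.0]
ys = [expensive_model(x) for x in xs]

def surrogate(x):
    """Exact quadratic through the three samples (Lagrange interpolation)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Cheap optimization on the surrogate instead of the expensive model.
grid = [k / 100.0 for k in range(0, 201)]
x_best = max(grid, key=surrogate)
print(x_best)  # ~1.3, matching the true optimum
```

In practice the surrogate is validated against held-out simulations, and sensitivity analysis screens out inputs with little influence before the optimization, as the abstract describes.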

  13. Production of Chitin from Penaeus vannamei By-Products to Pilot Plant Scale Using a Combination of Enzymatic and Chemical Processes and Subsequent Optimization of the Chemical Production of Chitosan by Response Surface Methodology.

    PubMed

    Vázquez, José A; Ramos, Patrícia; Mirón, Jesús; Valcarcel, Jesus; Sotelo, Carmen G; Pérez-Martín, Ricardo I

    2017-06-16

    The waste generated from shrimp processing contains valuable materials such as protein, carotenoids, and chitin. The present study describes a process at pilot plant scale to recover chitin from the cephalothorax of Penaeus vannamei using mild conditions. The application of a sequential enzymatic-acid-alkaline treatment yields 30% chitin of comparable purity to commercial sources. Effluents from the process are rich in protein and astaxanthin, and represent inputs for further by-product recovery. As a last step, chitin is deacetylated to produce chitosan; the optimal conditions are established by applying response surface methodology (RSM). Under these conditions, deacetylation reaches 92% as determined by Proton Nuclear Magnetic Resonance (¹H-NMR), and the molecular weight (Mw) of chitosan is estimated at 82 kDa by gel permeation chromatography (GPC). Chitin and chitosan microstructures are characterized by Scanning Electron Microscopy (SEM).

  14. Econ's optimal decision model of wheat production and distribution-documentation

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The report documents the computer programs written to implement the ECON optimal decision model. The programs were written in APL, an extremely compact and powerful language particularly well suited to this model, which makes extensive use of matrix manipulations. The algorithms used are presented, and listings of and descriptive information on the APL programs are given. Possible changes in input data are also given.

  15. Optimal input sizes for neural network de-interlacing

    NASA Astrophysics Data System (ADS)

    Choi, Hyunsoo; Seo, Guiwon; Lee, Chulhee

    2009-02-01

    Neural network de-interlacing has shown promising results among various de-interlacing methods. In this paper, we investigate the effects of input size for neural networks for various video formats when the neural networks are used for de-interlacing. In particular, we investigate optimal input sizes for CIF, VGA and HD video formats.

  16. Experimental Validation of Strategy for the Inverse Estimation of Mechanical Properties and Coefficient of Friction in Flat Rolling

    NASA Astrophysics Data System (ADS)

    Yadav, Vinod; Singh, Arbind Kumar; Dixit, Uday Shanker

    2017-08-01

    Flat rolling is one of the most widely used metal forming processes. For proper control and optimization of the process, modelling of the process is essential. Modelling of the process requires input data about material properties and friction. In batch production mode of rolling with newer materials, it may be difficult to determine the input parameters offline. In view of it, in the present work, a methodology to determine these parameters online by the measurement of exit temperature and slip is verified experimentally. It is observed that the inverse prediction of input parameters could be done with a reasonable accuracy. It was also assessed experimentally that there is a correlation between micro-hardness and flow stress of the material; however the correlation between surface roughness and reduction is not that obvious.

  17. Sculpt test problem analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sweetser, John David

    2013-10-01

    This report details Sculpt's implementation from a user's perspective. Sculpt is an automatic hexahedral mesh generation tool developed at Sandia National Labs by Steve Owen. 54 predetermined test cases are studied while varying the input parameters (Laplace iterations, optimization iterations, optimization threshold, number of processors) and measuring the quality of the resultant mesh. This information is used to determine the optimal input parameters to use for an unknown input geometry. The overall characteristics are covered in Chapter 1. The specific details of every case are then given in Appendix A. Finally, example Sculpt inputs are given in B.1 and B.2.

  18. Computer model for refinery operations with emphasis on jet fuel production. Volume 1: Program description

    NASA Technical Reports Server (NTRS)

    Dunbar, D. N.; Tunnah, B. G.

    1978-01-01

    A FORTRAN computer program is described for predicting the flow streams and material, energy, and economic balances of a typical petroleum refinery, with particular emphasis on production of aviation turbine fuel of varying end point and hydrogen content specifications. The program has provision for shale oil and coal oil in addition to petroleum crudes. A case study feature permits dependent cases to be run for parametric or optimization studies by input of only the variables which are changed from the base case.

  19. Computer model for refinery operations with emphasis on jet fuel production. Volume 3: Detailed systems and programming documentation

    NASA Technical Reports Server (NTRS)

    Dunbar, D. N.; Tunnah, B. G.

    1978-01-01

    The FORTRAN computing program predicts flow streams and material, energy, and economic balances of a typical petroleum refinery, with particular emphasis on production of aviation turbine fuels of varying end point and hydrogen content specifications. The program has a provision for shale oil and coal oil in addition to petroleum crudes. A case study feature permits dependent cases to be run for parametric or optimization studies by input of only the variables which are changed from the base case.

  20. Optimal allocation of land and water resources to achieve Water, Energy and Food Security in the upper Blue Nile basin

    NASA Astrophysics Data System (ADS)

    Allam, M.; Eltahir, E. A. B.

    2017-12-01

    Rapid population growth, hunger problems, increasing energy demands, persistent conflicts between the Nile basin riparian countries and the potential impacts of climate change highlight the urgent need for the conscious stewardship of the upper Blue Nile (UBN) basin resources. This study develops a framework for the optimal allocation of land and water resources to agriculture and hydropower production in the UBN basin. The framework consists of three optimization models that aim to: (a) provide accurate estimates of the basin water budget, (b) allocate land and water resources optimally to agriculture, and (c) allocate water to agriculture and hydropower production, and investigate trade-offs between them. First, a data assimilation procedure for data-scarce basins is proposed to deal with data limitations and produce estimates of the hydrologic components that are consistent with the principles of mass and energy conservation. Second, the most representative topography and soil properties datasets are objectively identified and used to delineate the agricultural potential in the basin. The agricultural potential is incorporated into a land-water allocation model that maximizes the net economic benefits from rain-fed agriculture while allowing for enhancing the soils from one suitability class to another to increase agricultural productivity in return for an investment in soil inputs. The optimal agricultural expansion is expected to reduce the basin flow by 7.6 cubic kilometres, impacting downstream countries. The optimization framework is expanded to include hydropower production. This study finds that allocating water to grow rain-fed teff in the basin is more profitable than allocating water for hydropower production. Optimal operation rules for the Grand Ethiopian Renaissance dam (GERD) are identified to maximize annual hydropower generation while achieving a relatively uniform monthly production rate. 
Trade-offs between agricultural expansion and hydropower generation are analysed in an attempt to define cooperation scenarios that would achieve win-win outcomes for all riparian countries.

  1. Declining spatial efficiency of global cropland nitrogen allocation

    NASA Astrophysics Data System (ADS)

    Mueller, Nathaniel D.; Lassaletta, Luis; Runck, Bryan C.; Billen, Gilles; Garnier, Josette; Gerber, James S.

    2017-02-01

    Efficiently allocating nitrogen (N) across space maximizes crop productivity for a given amount of N input and reduces N losses to the environment. Here we quantify changes in the global spatial efficiency of cropland N use by calculating historical trade-off frontiers relating N inputs to possible N yield assuming efficient allocation. Time series cropland N budgets from 1961 to 2009 characterize the evolution of N input-yield response functions across 12 regions and are the basis for constructing trade-off frontiers. Improvements in agronomic technology have substantially increased cropping system yield potentials and expanded N-driven crop production possibilities. However, we find that these gains are compromised by the declining spatial efficiency of N use across regions. Since the start of the Green Revolution, N inputs and yields have moved farther from the optimal frontier over time; in recent years (1994-2009), global N surplus has grown to a value that is 69% greater than what is possible with efficient N allocation between regions. To reflect regional pollution and agricultural development goals, we construct scenarios that restrict reallocation, finding that these changes only slightly decrease potential gains in nitrogen use efficiency. Our results are inherently conservative due to the regional unit of analysis, meaning a larger potential exists than is quantified here for cross-scale policies to promote spatially efficient N use.

  2. Rotorcraft Optimization Tools: Incorporating Rotorcraft Design Codes into Multi-Disciplinary Design, Analysis, and Optimization

    NASA Technical Reports Server (NTRS)

    Meyn, Larry A.

    2018-01-01

One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package RotorCraft Optimization Tools (RCOTOOLS) is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multidisciplinary optimization framework, written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including Graphical User Interface (GUI) tools for browsing input and output files in order to identify the text strings that mark specific variables as optimization input and response variables. This paper provides an overview of RCOTOOLS and its use.
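The wrapper pattern described above, rewrite named values in a text input deck, run the code, then parse a response value out of its text output, can be sketched in a few lines of Python. The deck syntax and key names below are hypothetical stand-ins, not NDARC's or CAMRAD II's actual file formats:

```python
import re

def set_input(deck: str, name: str, value: float) -> str:
    """Replace `name = <number>` in a text input deck with a new value."""
    return re.sub(rf"({re.escape(name)}\s*=\s*)[-+.\dEe]+",
                  rf"\g<1>{value}", deck)

def get_output(report: str, name: str) -> float:
    """Extract `name = <number>` from a text output report."""
    m = re.search(rf"{re.escape(name)}\s*=\s*([-+.\dEe]+)", report)
    if m is None:
        raise KeyError(name)
    return float(m.group(1))

# Hypothetical input deck with two design variables.
deck = "rotor_radius = 6.5\ndisk_loading = 10.0\n"
deck = set_input(deck, "rotor_radius", 7.2)   # optimizer sets a new value
radius = get_output(deck, "rotor_radius")     # wrapper reads it back
```

In a real OpenMDAO component these two helpers would bracket the external-code execution inside `compute()`.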

  3. Improved production of tannase by Klebsiella pneumoniae using Indian gooseberry leaves under submerged fermentation using Taguchi approach.

    PubMed

    Kumar, Mukesh; Singh, Amrinder; Beniwal, Vikas; Salar, Raj Kumar

    2016-12-01

Tannase (tannin acyl hydrolase, EC 3.1.1.20) is an inducible, largely extracellular enzyme that catalyzes the hydrolysis of the ester and depside bonds present in various substrates. Large-scale industrial application of this enzyme is very limited owing to its high production costs. In the present study, cost-effective production of tannase by Klebsiella pneumoniae KP715242 was studied under submerged fermentation using different tannin-rich agro-residues: Indian gooseberry leaves (Phyllanthus emblica), Black plum leaves (Syzygium cumini), Eucalyptus leaves (Eucalyptus globulus) and Babul leaves (Acacia nilotica). Among all agro-residues, Indian gooseberry leaves were found to be the best substrate for tannase production under submerged fermentation. A sequential optimization approach using Taguchi orthogonal array screening and response surface methodology was adopted to optimize the fermentation variables in order to enhance enzyme production. Eleven medium components were first screened by a Taguchi orthogonal array design to identify the factors contributing most to enzyme production. The four most significant variables affecting tannase production were found to be pH (23.62 %), tannin extract (20.70 %), temperature (20.33 %) and incubation time (14.99 %). These factors were further optimized with a central composite design using response surface methodology. Maximum tannase production was observed at pH 5.52, a temperature of 39.72 °C, an incubation time of 91.82 h and a tannin content of 2.17 %. Enzyme activity was enhanced 1.26-fold under these optimized conditions. The present study emphasizes the use of agro-residues as a potential substrate with the aim of lowering the input costs of tannase production so that the enzyme can be used proficiently for commercial purposes.
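The percent-contribution figures quoted above come from a standard orthogonal-array (ANOVA-style) analysis: each factor's between-level sum of squares as a share of the total. A minimal sketch with an L4(2^3) array and made-up responses (not the study's data):

```python
import numpy as np

# L4(2^3) orthogonal array: 3 two-level factors, 4 runs.
L4 = np.array([
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
])
# Hypothetical enzyme-activity responses for the 4 runs.
y = np.array([10.0, 14.0, 12.0, 18.0])

grand = y.mean()
SS_total = np.sum((y - grand) ** 2)

# Percent contribution of each factor: between-level variation
# relative to the total variation.
contrib = {}
for f in range(L4.shape[1]):
    ss = sum(len(y[L4[:, f] == lv]) * (y[L4[:, f] == lv].mean() - grand) ** 2
             for lv in (0, 1))
    contrib[f] = 100.0 * ss / SS_total
```

For a saturated two-level array like L4, the contributions sum to 100 %; the largest entry plays the role the study assigns to pH (23.62 %).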

  4. A decision support system using analytical hierarchy process (AHP) for the optimal environmental reclamation of an open-pit mine

    NASA Astrophysics Data System (ADS)

    Bascetin, A.

    2007-04-01

The selection of an optimal reclamation method is one of the most important factors in open-pit design and production planning. It also affects economic considerations in open-pit design as a function of plan location and depth. Furthermore, the selection is a complex multi-person, multi-criteria decision problem. The group decision-making process can be improved by applying a systematic and logical approach to assess priorities based on the inputs of several specialists from different functional areas within the mine company. The analytical hierarchy process (AHP) can be very useful in bringing several decision makers with different conflicting objectives to a consensus decision. In this paper, the selection of an optimal reclamation method using an AHP-based model was evaluated for coal production in an open-pit coal mine located in the Seyitomer region of Turkey. The use of the proposed model indicates that it can be applied to improve group decision making in selecting a reclamation method that satisfies optimal specifications. It is also found that the decision process is systematic and that using the proposed model can reduce the time taken to select an optimal method.
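The AHP machinery behind such a model, priority weights from the principal eigenvector of a pairwise comparison matrix plus Saaty's consistency check, can be sketched as follows. The matrix values are illustrative only, not judgements from the cited mine study:

```python
import numpy as np

# Hypothetical pairwise comparisons of three reclamation criteria
# (e.g. cost, environmental impact, land use) on Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# Priority weights: principal eigenvector of A, normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio CR = CI / RI with CI = (lambda_max - n)/(n - 1);
# RI = 0.58 is Saaty's random index for n = 3.
n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)
CR = CI / 0.58
```

A CR below 0.1 is the conventional threshold for accepting the specialists' judgements as consistent.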

  5. Theory of optimal information transmission in E. coli chemotaxis pathway

    NASA Astrophysics Data System (ADS)

    Micali, Gabriele; Endres, Robert G.

    Bacteria live in complex microenvironments where they need to make critical decisions fast and reliably. These decisions are inherently affected by noise at all levels of the signaling pathway, and cells are often modeled as an input-output device that transmits extracellular stimuli (input) to internal proteins (channel), which determine the final behavior (output). Increasing the amount of transmitted information between input and output allows cells to better infer extracellular stimuli and respond accordingly. However, in contrast to electronic devices, the separation into input, channel, and output is not always clear in biological systems. Output might feed back into the input, and the channel, made by proteins, normally interacts with the input. Furthermore, a biological channel is affected by mutations and can change under evolutionary pressure. Here, we present a novel approach to maximize information transmission: given cell-external and internal noise, we analytically identify both input distributions and input-output relations that optimally transmit information. Using E. coli chemotaxis as an example, we conclude that its pathway is compatible with an optimal information transmission device despite the ultrasensitive rotary motors.
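Maximizing transmitted information over input distributions for a fixed discrete channel is classically done numerically with the Blahut-Arimoto algorithm; the sketch below is that standard counterpart, not the paper's analytical approach:

```python
import numpy as np

def blahut_arimoto(P, tol=1e-9, max_iter=1000):
    """Capacity-achieving input distribution for a discrete channel.

    P[x, y] = probability of output y given input x.
    Returns (capacity in bits, optimal input distribution).
    """
    m = P.shape[0]
    r = np.full(m, 1.0 / m)          # start from a uniform input
    for _ in range(max_iter):
        q = r @ P                    # induced output distribution
        # KL divergence D(P(.|x) || q) for each input symbol x
        d = np.sum(P * np.log2(P / q, where=P > 0, out=np.zeros_like(P)),
                   axis=1)
        r_new = r * np.exp2(d)
        r_new /= r_new.sum()
        if np.max(np.abs(r_new - r)) < tol:
            r = r_new
            break
        r = r_new
    q = r @ P
    d = np.sum(P * np.log2(P / q, where=P > 0, out=np.zeros_like(P)), axis=1)
    return float(r @ d), r

# Binary symmetric channel, crossover 0.1: capacity = 1 - H(0.1) bits.
eps = 0.1
P = np.array([[1 - eps, eps], [eps, 1 - eps]])
C, r_opt = blahut_arimoto(P)
```

For the symmetric channel the optimal input is uniform, matching the known closed form 1 - H(0.1) ≈ 0.531 bits.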

  6. Two-qubit quantum cloning machine and quantum correlation broadcasting

    NASA Astrophysics Data System (ADS)

    Kheirollahi, Azam; Mohammadi, Hamidreza; Akhtarshenas, Seyed Javad

    2016-11-01

    Due to the axioms of quantum mechanics, perfect cloning of an unknown quantum state is impossible. But since imperfect cloning is still possible, a question arises: "Is there an optimal quantum cloning machine?" Buzek and Hillery answered this question and constructed their famous B-H quantum cloning machine. The B-H machine clones the state of an arbitrary single qubit in an optimal manner and hence it is universal. Generalizing this machine for a two-qubit system is straightforward, but during this procedure, except for product states, this machine loses its universality and becomes a state-dependent cloning machine. In this paper, we propose some classes of optimal universal local quantum state cloners for a particular class of two-qubit systems, more precisely, for a class of states with known Schmidt basis. We then extend our machine to the case that the Schmidt basis of the input state is deviated from the local computational basis of the machine. We show that more local quantum coherence existing in the input state corresponds to less fidelity between the input and output states. Also we present two classes of a state-dependent local quantum copying machine. Furthermore, we investigate local broadcasting of two aspects of quantum correlations, i.e., quantum entanglement and quantum discord, defined, respectively, within the entanglement-separability paradigm and from an information-theoretic perspective. The results show that although quantum correlation is, in general, very fragile during the broadcasting procedure, quantum discord is broadcasted more robustly than quantum entanglement.

  7. A generalised optimal linear quadratic tracker with universal applications. Part 2: discrete-time systems

    NASA Astrophysics Data System (ADS)

    Ebrahimzadeh, Faezeh; Tsai, Jason Sheng-Hong; Chung, Min-Ching; Liao, Ying Ting; Guo, Shu-Mei; Shieh, Leang-San; Wang, Li

    2017-01-01

In contrast to Part 1, Part 2 presents a generalised optimal linear quadratic digital tracker (LQDT) with universal applications for discrete-time (DT) systems. This includes (1) a generalised optimal LQDT design for a system with pre-specified trajectories of the output and the control input, and additionally with both an input-to-output direct-feedthrough term and known/estimated system disturbances or extra input/output signals; (2) a new optimal filter-shaped proportional plus integral state-feedback LQDT design for non-square non-minimum phase DT systems to achieve a minimum-phase-like tracking performance; (3) a new approach for computing the control zeros of given non-square DT systems; and (4) a one-learning-epoch input-constrained iterative learning LQDT design for repetitive DT systems.
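A minimal scalar instance of a finite-horizon LQ tracker, a backward recursion for the feedback gain and feedforward term followed by forward simulation, shows the core of such a design. None of the paper's extensions (direct feedthrough, disturbances, input constraints, non-square systems) are included; the plant and weights are illustrative:

```python
import numpy as np

# Scalar discrete-time plant x[k+1] = a*x[k] + b*u[k].
a, b = 1.0, 1.0
Q, R = 1.0, 0.01          # tracking-error and control weights
N = 50
r = np.ones(N + 1)        # reference trajectory to track

# Backward recursion for the Riccati variable S, feedforward v,
# feedback gain K and feedforward gain Kv.
S = np.zeros(N + 1)
v = np.zeros(N + 1)
K = np.zeros(N)
Kv = np.zeros(N)
S[N] = Q
v[N] = Q * r[N]
for k in range(N - 1, -1, -1):
    denom = R + b * S[k + 1] * b
    K[k] = b * S[k + 1] * a / denom
    Kv[k] = b / denom
    S[k] = a * S[k + 1] * (a - b * K[k]) + Q
    v[k] = (a - b * K[k]) * v[k + 1] + Q * r[k]

# Forward simulation from x0 = 0 under u = -K*x + Kv*v.
x = np.zeros(N + 1)
for k in range(N):
    u = -K[k] * x[k] + Kv[k] * v[k + 1]
    x[k + 1] = a * x[k] + b * u
```

With a small control weight the state locks onto the unit reference within a few steps.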

  8. Fuzzy logic controller optimization

    DOEpatents

    Sepe, Jr., Raymond B; Miller, John Michael

    2004-03-23

    A method is provided for optimizing a rotating induction machine system fuzzy logic controller. The fuzzy logic controller has at least one input and at least one output. Each input accepts a machine system operating parameter. Each output produces at least one machine system control parameter. The fuzzy logic controller generates each output based on at least one input and on fuzzy logic decision parameters. Optimization begins by obtaining a set of data relating each control parameter to at least one operating parameter for each machine operating region. A model is constructed for each machine operating region based on the machine operating region data obtained. The fuzzy logic controller is simulated with at least one created model in a feedback loop from a fuzzy logic output to a fuzzy logic input. Fuzzy logic decision parameters are optimized based on the simulation.
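A minimal single-input fuzzy controller sketch makes the optimization target concrete: the membership-function breakpoints play the role of the "fuzzy logic decision parameters" to be tuned in simulation. The membership shapes, rule set and output singletons here are illustrative assumptions, not the patent's design:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def controller(error, half_width):
    """Two-rule controller: 'error negative -> low', 'error positive -> high'.

    `half_width` is the tunable decision parameter (triangle half-width).
    """
    a = half_width
    mu_neg = tri(error, -2 * a, -a, 0.0)
    mu_pos = tri(error, 0.0, a, 2 * a)
    out_low, out_high = -1.0, 1.0       # output singletons
    if mu_neg + mu_pos == 0.0:
        return 0.0
    # Weighted-average (centroid-like) defuzzification.
    return (mu_neg * out_low + mu_pos * out_high) / (mu_neg + mu_pos)
```

An outer optimization loop would simulate the closed loop with a plant model and adjust `half_width` to minimize a cost, mirroring the model-in-the-loop tuning the patent describes.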

  9. Biochar Preparation from Simulated Municipal Solid Waste Employing Low Temperature Carbonization Process

    NASA Astrophysics Data System (ADS)

    Areeprasert, C.; Leelachaikul, P.; Jangkobpattana, G.; Phumprasop, K.; Kiattiwat, T.

    2018-02-01

This paper presents an investigation of the carbonization of simulated municipal solid waste (MSW). The simulated MSW consists of representative food residue (68%), plastic waste (20%), paper (8%), and textile (4%). Laboratory-scale carbonization was performed in a vertical-type pyrolyzer, varying the carbonization temperature (300, 350, 400, and 450 °C) and heating rate (5, 10, 15, and 20 °C/min). The biochar product was black in appearance and significantly reduced in volume. A low carbonization temperature (300 °C) might not completely decompose the plastic materials in MSW. Results showed that carbonization at a temperature of 400 °C with a heating rate of 5 °C/min was the optimal condition. The biochar yield from the optimal process was 50.6%, with a heating value of 26.85 MJ/kg. The energy input of the process was attributed to water evaporation and the decomposition of plastics and paper. The energy output of the process was highest at the optimal condition. The energy output-to-input ratio was around 1.3-1.7, showing the feasibility of the carbonization process under all heating rate conditions.

  10. A Bayesian-based two-stage inexact optimization method for supporting stream water quality management in the Three Gorges Reservoir region.

    PubMed

    Hu, X H; Li, Y P; Huang, G H; Zhuang, X W; Ding, X W

    2016-05-01

In this study, a Bayesian-based two-stage inexact optimization (BTIO) method is developed for supporting water quality management by coupling Bayesian analysis with interval two-stage stochastic programming (ITSP). The BTIO method is capable of addressing uncertainties caused by insufficient inputs to the water quality model as well as uncertainties expressed as probabilistic distributions and interval numbers. The BTIO method is applied to a real case of water quality management for the Xiangxi River basin in the Three Gorges Reservoir region to seek optimal water quality management schemes under various uncertainties. Interval solutions for production patterns under a range of probabilistic water quality constraints have been generated. The results demonstrate compromises between system benefit and system failure risk due to the inherent uncertainties in various system components. Moreover, information about pollutant emissions is obtained, which would help managers adjust production patterns of regional industry and local policies considering the interactions of water quality requirements, economic benefit, and industry structure.

  11. Renewable sustainable biocatalyzed electricity production in a photosynthetic algal microbial fuel cell (PAMFC).

    PubMed

    Strik, David P B T B; Terlouw, Hilde; Hamelers, Hubertus V M; Buisman, Cees J N

    2008-12-01

Electricity production via solar energy capture by living higher plants and microalgae in combination with microbial fuel cells is attractive because these systems promise to generate useful energy in a renewable, sustainable, and efficient manner. This study describes the proof of principle of a photosynthetic algal microbial fuel cell (PAMFC) based on naturally selected algae and electrochemically active microorganisms in an open system, without the addition of unstable or toxic mediators. The developed solar-powered PAMFC continuously produced renewable biocatalyzed electricity for over 100 days. The sustained performance of the PAMFC resulted in a maximum current density of 539 mA/m2 of projected anode surface area and a maximum power production of 110 mW/m2 of photobioreactor surface area. The energy recovery of the PAMFC can be increased by optimization of the photobioreactor, by reducing competition from non-electrochemically active microorganisms, by increasing the electrode surface area, and by establishing a further-enriched biofilm. Since the objective is to produce net renewable energy with algae, future research should also focus on the development of low-energy-input PAMFCs, because current algae production systems have energy inputs similar to the energy present in the resulting valuable products.

  12. Dutch national rainfall radar project: a unique cooperation

    NASA Astrophysics Data System (ADS)

    Schuurmans, Hanneke; Maarten Verbree, Jan; Leijnse, Hidde; van Heeringen, Klaas-Jan; Uijlenhoet, Remko; Bierkens, Mark; van de Giesen, Nick; Gooijer, Jan; van den Houten, Gert

    2013-04-01

Since January 2013, Dutch water managers have had access to innovative high-quality rainfall data. The product is innovative for the following reasons: (i) it was developed in a 'golden triangle' construction, a cooperation between government, business and research institutes; (ii) the rainfall products are developed under the open-source GPL license. The initiative comes from a group of water boards in the Netherlands that joined forces to fund the development of a new rainfall product. Not only data from Dutch radar stations (as currently used by the Dutch meteorological institute KNMI) but also data from radars in Germany and Belgium are used. After a radar composite is made, it is adjusted using data from rain gauges (ground truth). This results in 9 different rainfall products that give the best rainfall data for each moment. Depending on the end user, these data will be used for several applications: (i) forecasts: input for flood early warning systems, (ii) water system analysis: hydrological model input, (iii) optimization: real-time control and (iv) investigation of incidents: in case of flooding, determining who is responsible. The latter mainly requires insight into the return period of heavy rainfall events. More info (in Dutch): www.nationaleregenradar.nl

  13. Shuttle cryogenic supply system. Optimization study. Volume 5 B-1: Programmers manual for math models

    NASA Technical Reports Server (NTRS)

    1973-01-01

A computer program for rapid parametric evaluation of various types of cryogenic spacecraft systems is presented. The mathematical techniques of the program provide the capability for in-depth analysis combined with rapid problem solution for the production of a large quantity of soundly based trade-study data. The program requires a large data bank capable of providing characteristic performance data for a wide variety of component assemblies used in cryogenic systems. The program data requirements are divided into: (1) the semipermanent data tables and source data for performance characteristics and (2) the variable input data, which contains input parameters that may be perturbed for parametric system studies.

  14. Mechanical System Analysis/Design Tool (MSAT) Quick Guide

    NASA Technical Reports Server (NTRS)

    Lee, HauHua; Kolb, Mark; Madelone, Jack

    1998-01-01

MSAT is a unique multi-component, multi-disciplinary tool that organizes design analysis tasks around object-oriented representations of configuration components, analysis programs and modules, and the data transfer links between them. This modular architecture enables rapid generation of input streams for trade-off studies of various engine configurations. Once a sequence is set up by the user, the data transfer links automatically transport output from one application as relevant input to the next. The computations are managed via constraint propagation, with the constraints supplied by the user as part of any optimization module. The software can be used in the preliminary design stage as well as during the detailed design stage of the product development process.

  15. Optimization of autoregressive, exogenous inputs-based typhoon inundation forecasting models using a multi-objective genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ouyang, Huei-Tau

    2017-07-01

Three types of model for forecasting inundation levels during typhoons were optimized: the linear autoregressive model with exogenous inputs (LARX), the nonlinear autoregressive model with exogenous inputs with wavelet function (NLARX-W) and the nonlinear autoregressive model with exogenous inputs with sigmoid function (NLARX-S). Forecast performance was evaluated by three indices: coefficient of efficiency, error in peak water level and relative time shift. Historical typhoon data were used to establish the water-level forecasting models. A multi-objective genetic algorithm was employed to search for the Pareto-optimal model set that satisfies all three objectives and to select the ideal models for the three indices. Findings showed that the optimized nonlinear models (NLARX-W and NLARX-S) outperformed the linear model (LARX). Among the nonlinear models, the optimized NLARX-W model achieved a more balanced performance on the three indices than the NLARX-S model and is recommended for inundation forecasting during typhoons.
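Selecting the Pareto-optimal model set reduces to non-dominated filtering of the candidate models' objective scores. A brute-force sketch with hypothetical values (all three objectives converted to "smaller is better"):

```python
import numpy as np

def pareto_front(F):
    """Indices of non-dominated rows of F (all objectives minimised)."""
    n = F.shape[0]
    keep = []
    for i in range(n):
        dominated = False
        for j in range(n):
            # j dominates i: no worse on all objectives, better on one.
            if j != i and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                dominated = True
                break
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical scores for five candidate forecast models on three
# error-type objectives (columns); lower is better in each column.
F = np.array([
    [0.10, 0.30, 0.20],
    [0.20, 0.10, 0.30],
    [0.15, 0.15, 0.15],
    [0.30, 0.35, 0.40],   # dominated by each of the first three
    [0.12, 0.28, 0.25],
])
front = pareto_front(F)
```

A multi-objective GA such as NSGA-II applies the same dominance test, but evolves the candidate set instead of enumerating it.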

  16. Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface

    NASA Astrophysics Data System (ADS)

    Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.

    2016-12-01

Scientific modeling of the Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters, and it is crucial to choose input parameters that are accurate while preserving the physics being simulated. To effectively simulate real-world processes, the model's output data must be close to the observed measurements. To achieve this, input parameters are tuned until the objective function, the error between the simulation outputs and the observed measurements, is minimized. We developed an auxiliary package that serves as a Python interface between the user and DAKOTA. The package makes it easy for the user to conduct parameter space explorations, parameter optimizations and sensitivity analyses while tracking and storing results in a database. The ability to perform these analyses via a Python library also allows users to combine analysis techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for a heat flow model commonly used in permafrost science. We performed optimization on the first three layers of the permafrost model, each with two thermal conductivity coefficients as input parameters. Results of parameter space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with a single minimum; otherwise, we employ more advanced Dakota methods such as genetic optimization and mesh-based convergence to find the optimal input parameters. We were able to recover 6 initially unknown thermal conductivity parameters to within 2% of their known values. Our initial tests indicate that the developed interface to the Dakota toolbox can be used to perform analysis and optimization on a 'black box' scientific model more efficiently than using Dakota alone.
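The calibration loop described, tune input parameters until the model-versus-observation error is minimized, can be illustrated with a toy surrogate in place of the heat-flow model. The slab-conduction function below is an assumed stand-in, not the permafrost code, and the optimizer is scipy rather than Dakota:

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in "model": steady one-dimensional conduction through a slab,
# temperature profile T(z) depending on an unknown conductivity k.
def simulate(k, q=0.5, T_surface=-2.0, z=np.linspace(0.0, 10.0, 11)):
    return T_surface + q * z / k

# Synthetic "observations" generated with a known true conductivity.
k_true = 2.0
observed = simulate(k_true)

# Objective: squared error between simulated and observed temperatures.
def objective(params):
    return np.sum((simulate(params[0]) - observed) ** 2)

# Gradient-based, bounded search recovers the parameter.
res = minimize(objective, x0=[1.0], bounds=[(0.1, 10.0)])
k_est = res.x[0]
```

With a single-minimum objective like this, a gradient-based method suffices, matching the paper's observation; multi-minima objectives would need the global methods mentioned.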

  17. Polarimeter Blind Deconvolution Using Image Diversity

    DTIC Science & Technology

    2007-09-01

significant presence when imaging through turbulence and its ease of production in the laboratory. An innovative algorithm for detection and estimation...1.2.2.2 Atmospheric Turbulence. Atmospheric turbulence spatially distorts the wavefront as light passes through it and causes blurring of images in an...intensity image. Various values of β are used in the experiments. The optimal β value varied with the input and the algorithm. The hybrid seemed to

  18. Dynamic optimization of CELSS crop photosynthetic rate by computer-assisted feedback control

    NASA Astrophysics Data System (ADS)

    Chun, C.; Mitchell, C. A.

    1997-01-01

A procedure for dynamic optimization of net photosynthetic rate (Pn) for crop production in Controlled Ecological Life-Support Systems (CELSS) was developed using leaf lettuce as a model crop. Canopy Pn was measured in real time and fed back for environmental control. Setpoints of photosynthetic photon flux (PPF) and CO2 concentration for each hour of the crop-growth cycle were decided by computer to reach a targeted Pn each day. Decision making was based on empirical mathematical models combined with rule sets developed from recent experimental data. Comparisons showed that dynamic control resulted in better yield per unit energy input to the growth system than did static control. With comparable productivity parameters and potential for significant energy savings, dynamic control strategies will contribute greatly to the sustainability of space-deployed CELSS.

  19. Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.

    2002-01-01

An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi-3D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
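For independent inputs, the first-order moment method amounts to mean(f) ≈ f(μ) and var(f) ≈ Σᵢ (∂f/∂xᵢ)² σᵢ². The sketch below checks this against Monte Carlo on a smooth stand-in function (not the Euler code), with analytic sensitivities taking the place of the CFD sensitivity derivatives:

```python
import numpy as np

# Illustrative stand-in for a CFD output: smooth function of two
# independent, normally distributed inputs.
def f(x1, x2):
    return x1**2 + 3.0 * x2

mu = np.array([2.0, 1.0])      # input means
sigma = np.array([0.1, 0.05])  # input standard deviations

# First-order moment method:
#   mean(f) ~= f(mu),  var(f) ~= sum_i (df/dx_i)^2 * sigma_i^2
grad = np.array([2.0 * mu[0], 3.0])   # analytic sensitivities at mu
mean_fo = f(*mu)
var_fo = np.sum(grad**2 * sigma**2)

# Monte Carlo check of the approximation.
rng = np.random.default_rng(0)
samples = f(rng.normal(mu[0], sigma[0], 200_000),
            rng.normal(mu[1], sigma[1], 200_000))
```

The small mean offset that Monte Carlo reveals (of order σ₁²) is exactly what the paper's second-order moment procedure corrects.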

  20. Steep, cheap and deep: an ideotype to optimize water and N acquisition by maize root systems.

    PubMed

    Lynch, Jonathan P

    2013-07-01

A hypothetical ideotype is presented to optimize water and N acquisition by maize root systems. The overall premise is that soil resource acquisition is optimized by the coincidence of root foraging and resource availability in time and space. Since water and nitrate enter deeper soil strata over time and are initially depleted in surface soil strata, root systems with rapid exploitation of deep soil would optimize water and N capture in most maize production environments. • THE IDEOTYPE: Specific phenes that may contribute to rooting depth in maize include (a) a large diameter primary root with few but long laterals and tolerance of cold soil temperatures, (b) many seminal roots with shallow growth angles, small diameter, many laterals, and long root hairs, or as an alternative, an intermediate number of seminal roots with steep growth angles, large diameter, and few laterals coupled with abundant lateral branching of the initial crown roots, (c) an intermediate number of crown roots with steep growth angles, and few but long laterals, (d) one whorl of brace roots of high occupancy, having a growth angle that is slightly shallower than the growth angle for crown roots, with few but long laterals, (e) low cortical respiratory burden created by abundant cortical aerenchyma, large cortical cell size, an optimal number of cells per cortical file, and accelerated cortical senescence, (f) unresponsiveness of lateral branching to localized resource availability, and (g) low Km and high Vmax for nitrate uptake. Some elements of this ideotype have experimental support, others are hypothetical. Despite differences in N distribution between low-input and commercial maize production, this ideotype is applicable to low-input systems because of the importance of deep rooting for water acquisition. Many features of this ideotype are relevant to other cereal root systems and more generally to root systems of dicotyledonous crops.

  1. Building agribusiness model of LEISA to achieve sustainable agriculture in Surian Subdistrict of Sumedang Regency West Java Indonesia

    NASA Astrophysics Data System (ADS)

    Djuwendah, E.; Priyatna, T.; Kusno, K.; Deliana, Y.; Wulandari, E.

    2018-03-01

Building an agribusiness model of LEISA is needed as a prototype of sustainable regional and economic development (SRRED) in the watersheds (DAS) of West Java Province. The LEISA agribusiness model is a sustainable agribusiness system applying low external input. The system was developed in the framework of optimizing local productive resources, including soil, water, vegetation, microclimate, renewable energy, appropriate technology, social capital, environment and human resources, by combining various subsystems: integrated production subsystems of crops, livestock and fish to provide a maximum synergy effect, a post-harvest and processing subsystem, a marketing subsystem and supporting subsystems. In this study, the ecological boundary is the Cipunegara sub-watershed ecosystem and the administrative boundary is Surian Subdistrict in Sumedang. The purposes of this study are to identify the potential of natural resources and local agricultural technologies that could support the LEISA model in Surian and to identify the potential of internal and external inputs in the LEISA model. The research used a qualitative descriptive method and technical action research. Data were obtained through interviews, documentation, and observation. The results showed that natural resources in the form of agricultural land, water resources, livestock resources, and human labor are sufficient to support the LEISA agribusiness model. The LEISA agribusiness model applied at the research location is the integration of beef cattle, agroforestry, and agrosilvopasture. By building the LEISA model, agribusiness can optimize the utilization of locally based productive resources, reduce dependence on external resources, and support sustainable food security.

  2. Time cycle analysis and simulation of material flow in MOX process layout

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, S.; Saraswat, A.; Danny, K.M.

The (U,Pu)O{sub 2} MOX fuel is the driver fuel for the upcoming PFBR (Prototype Fast Breeder Reactor). The fuel contains around 30% PuO{sub 2}. The presence of a high percentage of reprocessed PuO{sub 2} necessitates the design of an optimized fuel fabrication process line that addresses both production needs and regulatory norms regarding radiological safety. The powder-pellet route has a highly unbalanced time cycle. This difficulty can be overcome by optimizing the process layout in terms of equipment redundancy and the scheduling of input powder batches. Different schemes are tested with the help of software before implementation in the process line; the software simulates material movement through the optimized process layout. Different material processing schemes have been devised, and the validity of the schemes is tested with the software. Schemes in which production batches meet at any glove box location are considered invalid. A valid scheme ensures adequate spacing between production batches while meeting the production target. The software can be further improved by accurately calculating material movement time through the glove box train; one important factor is accounting for material handling time with automation systems in place.

  3. A neuro-data envelopment analysis approach for optimization of uncorrelated multiple response problems with smaller the better type controllable factors

    NASA Astrophysics Data System (ADS)

    Bashiri, Mahdi; Farshbaf-Geranmayeh, Amir; Mogouie, Hamed

    2013-11-01

    In this paper, a new method is proposed to optimize a multi-response optimization problem based on the Taguchi method for processes where the controllable factors are smaller-the-better (STB)-type variables and the analyst desires an optimal solution with smaller amounts of the controllable factors. In such processes, the overall output quality of the product should be maximized while the usage of the process inputs, the controllable factors, should be minimized. Since all possible combinations of factor levels are not considered in the Taguchi method, the response values of the possible unpracticed treatments are estimated using an artificial neural network (ANN). The neural network is tuned by the central composite design (CCD) and the genetic algorithm (GA). Then data envelopment analysis (DEA) is applied to determine the efficiency of each treatment. Although the important issue for implementation of DEA is its philosophy, which is maximization of outputs versus minimization of inputs, this issue has been neglected in previous similar studies on multi-response problems. Finally, the most efficient treatment is determined using the maximin weight model approach. The performance of the proposed method is verified on a plastic molding process. Moreover, a sensitivity analysis was performed with an efficiency-estimator neural network. The results show the efficiency of the proposed approach.
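    The DEA step above scores each treatment by how efficiently it converts controllable-factor usage (inputs) into response quality (outputs). A minimal sketch, assuming a single input and a single output (the paper's actual model handles multiple responses), where CCR-style efficiency reduces to a productivity ratio against the best treatment:

```python
def dea_efficiency(inputs, outputs):
    # Single-input, single-output CCR DEA reduces to a ratio test:
    # each treatment's output/input productivity relative to the best one.
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Treatments: controllable-factor usage (input) vs. estimated quality (output).
usage = [2.0, 4.0, 3.0]
quality = [8.0, 10.0, 9.0]
print(dea_efficiency(usage, quality))  # [1.0, 0.625, 0.75]
```

    With several inputs and outputs, each treatment's score would instead come from solving a small linear program per treatment.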

  4. Computer model for refinery operations with emphasis on jet fuel production. Volume 2: Data and technical bases

    NASA Technical Reports Server (NTRS)

    Dunbar, D. N.; Tunnah, B. G.

    1978-01-01

    The FORTRAN computing program predicts the flow streams and material, energy, and economic balances of a typical petroleum refinery, with particular emphasis on production of aviation turbine fuel of varying end point and hydrogen content specifications. The program has provision for shale oil and coal oil in addition to petroleum crudes. A case study feature permits dependent cases to be run for parametric or optimization studies by input of only the variables that are changed from the base case. The report provides sufficient detail for most readers.

  5. Nonlinear system modeling based on bilinear Laguerre orthonormal bases.

    PubMed

    Garna, Tarek; Bouzrara, Kais; Ragot, José; Messaoud, Hassani

    2013-05-01

    This paper proposes a new representation of the discrete bilinear model by developing its coefficients associated with the input, the output and the crossed product on three independent Laguerre orthonormal bases. Compared to the classical bilinear model, the resulting model, entitled the bilinear-Laguerre model, ensures a significant reduction in the number of parameters as well as a simple recursive representation. However, such reduction is still constrained by an optimal choice of the Laguerre pole characterizing each basis. To this end, we develop a pole optimization algorithm which constitutes an extension of that proposed by Tanguy et al. The bilinear-Laguerre model as well as the proposed pole optimization algorithm are illustrated and tested in numerical simulations and validated on a Continuous Stirred Tank Reactor (CSTR) system. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
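    The Laguerre bases mentioned above can be generated recursively: the first function is a scaled exponential in the pole a, and each subsequent function is obtained by passing the previous one through an all-pass section. A minimal sketch (the pole value and truncation length below are illustrative, not the paper's):

```python
import math

def laguerre_basis(a, order, n_samples):
    """Discrete Laguerre orthonormal functions for pole a (|a| < 1).

    l_0 is the impulse response of sqrt(1-a^2)/(1 - a*z^-1); each next
    function filters the previous one through the all-pass
    (z^-1 - a)/(1 - a*z^-1), i.e. y[n] = a*y[n-1] + x[n-1] - a*x[n].
    """
    g = math.sqrt(1.0 - a * a)
    basis = [[g * a**n for n in range(n_samples)]]
    for _ in range(1, order):
        x = basis[-1]
        y = [0.0] * n_samples
        y[0] = -a * x[0]
        for n in range(1, n_samples):
            y[n] = a * y[n - 1] + x[n - 1] - a * x[n]
        basis.append(y)
    return basis

B = laguerre_basis(a=0.5, order=3, n_samples=200)
norm0 = sum(u * u for u in B[0])
dot01 = sum(u * v for u, v in zip(B[0], B[1]))
print(round(norm0, 6), round(dot01, 6))  # ≈ 1.0 and ≈ 0.0 (orthonormal)
```

    Model coefficients are then expanded on such bases, which is where the parameter-number reduction comes from.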

  6. On Fusing Recursive Traversals of K-d Trees

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajbhandari, Samyam; Kim, Jinsung; Krishnamoorthy, Sriram

    Loop fusion is a key program transformation for data locality optimization that is implemented in production compilers. But optimizing compilers currently cannot exploit fusion opportunities across a set of recursive tree traversal computations with producer-consumer relationships. In this paper, we develop a compile-time approach to dependence characterization and program transformation to enable fusion across recursively specified traversals over k-ary trees. We present the FuseT source-to-source code transformation framework to automatically generate fused composite recursive operators from an input program containing a sequence of primitive recursive operators. We use our framework to implement fused operators for MADNESS, the Multiresolution Adaptive Numerical Environment for Scientific Simulation. We show that locality optimization through fusion can offer more than an order of magnitude performance improvement.

  7. Application of artificial intelligent tools to modeling of glucosamine preparation from exoskeleton of shrimp.

    PubMed

    Valizadeh, Hadi; Pourmahmood, Mohammad; Mojarrad, Javid Shahbazi; Nemati, Mahboob; Zakeri-Milani, Parvin

    2009-04-01

    The objective of this study was to forecast and optimize the glucosamine production yield from chitin (obtained from Persian Gulf shrimp) by means of the genetic algorithm (GA), particle swarm optimization (PSO), and artificial neural networks (ANNs) as artificial intelligence tools. Three factors (acid concentration, acid solution to chitin ratio, and reaction time) were used as the input parameters of the models investigated. According to the obtained results, the production yield of glucosamine hydrochloride depends linearly on acid concentration, acid solution to solid ratio, and time, as well as on the cross-product of acid concentration and time and the cross-product of solid to acid solution ratio and time. The production yield significantly increased with an increase of acid concentration, acid solution ratio, and reaction time. The production yield is inversely related to the cross-product of acid concentration and time: at high acid concentrations, longer reaction times give lower production yields. The results revealed that the average percent errors (PE) for prediction of production yield by GA, PSO, and ANN are 6.84, 7.11, and 5.49%, respectively. Considering the low PE, it might be concluded that these models have good predictive power in the studied range of variables and the ability to generalize to unknown cases.
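    The fitted relationship described above is a linear model with interaction (cross-product) terms, and the negative acid-concentration-by-time term is what makes long reaction times counterproductive at high acid concentrations. A sketch with hypothetical coefficients (the abstract does not report the fitted values):

```python
def predicted_yield(c, r, t):
    # c: acid concentration, r: acid-to-solid ratio, t: reaction time.
    # Hypothetical coefficients for illustration; note the negative c*t term.
    return 10 + 5 * c + 2 * r + 4 * t - 1.5 * c * t

# At low acid concentration, more time helps ...
print(predicted_yield(1, 5, 1), predicted_yield(1, 5, 2))  # 27.5 30.0
# ... but at high acid concentration, more time hurts.
print(predicted_yield(4, 5, 1), predicted_yield(4, 5, 2))  # 38.0 36.0
```

    The sign flip happens because the marginal effect of time, 4 - 1.5*c here, turns negative once c is large enough.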

  8. Fusion of Hard and Soft Information in Nonparametric Density Estimation

    DTIC Science & Technology

    2015-06-10

    Density estimation is needed for generation of input densities to simulation and stochastic optimization models, in analysis of simulation output, and when instantiating probability models; the generation of probability densities for input random variables is an essential step in simulation analysis and stochastic optimization. We adopt a constrained maximum...

  9. Choosing the right amount of healthcare information technologies investments.

    PubMed

    Meyer, Rodolphe; Degoulet, Patrice

    2010-04-01

    Choosing and justifying the right amount of investment in healthcare information technologies (HITECH or HIT) in hospitals is an ever-increasing challenge. Our objectives are to assess the financial impact of HIT on hospital outcome, and to propose decision-helping tools that could be used to rationalize the distribution of hospital finances. We used a production function and microeconomic tools on data from 21 Paris university hospitals recorded from 1998 to 2006 to compute the elasticity coefficients of HIT versus non-HIT capital and labor with regard to hospital financial outcome, and to optimize the distribution of investments according to the productivity associated with each input. HIT inputs and non-HIT inputs both have a positive and significant impact on hospital production (elasticity coefficients of 0.106 and 0.893, respectively; R(2) of 0.92). We forecast 2006 results from the 1998 to 2005 dataset with an accuracy of +0.61%. With the model used, the best proportion of HIT investments was estimated to be 10.6% of total input, and this was predicted to lead to a total saving of 388 million Euros for the 2006 dataset. Considering HIT investment from the point of view of a global portfolio and applying econometric and microeconomic tools allow the required confidence level to be attained for choosing the right amount of HIT investments. They could also allow hospitals using these tools to make substantial savings, and help them forecast their choices for the following year for better HITECH governance in the current stimulation context. (c) 2010 Elsevier Ireland Ltd. All rights reserved.
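    With a Cobb-Douglas production function and the elasticities reported above (0.106 for HIT, 0.893 for non-HIT capital and labor), the output-maximizing share of a fixed budget allocated to an input equals its elasticity relative to the sum of elasticities, which is where the 10.6% figure comes from. A sketch of that allocation logic (the scale constant and budget are hypothetical):

```python
# Cobb-Douglas production with the paper's elasticities:
# output = A * hit**0.106 * other**0.893 (A is a hypothetical scale constant).
def output(hit, other, A=1.0):
    return A * hit**0.106 * other**0.893

# Grid-search the share of a fixed budget devoted to HIT.
budget = 100.0
shares = [i / 1000 for i in range(1, 1000)]
best = max(shares, key=lambda s: output(budget * s, budget * (1 - s)))
print(best)  # ≈ 0.106, i.e. alpha / (alpha + beta)
```

    The closed form is s* = 0.106 / (0.106 + 0.893) ≈ 0.1061, matching the grid search.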

  10. Fiber coupled diode laser beam parameter product calculation and rules for optimized design

    NASA Astrophysics Data System (ADS)

    Wang, Zuolan; Segref, Armin; Koenning, Tobias; Pandey, Rajiv

    2011-03-01

    The Beam Parameter Product (BPP) of a passive, lossless system is a constant and cannot be improved upon, but the beams may be reshaped for enhanced coupling performance. The task of the optical designer of fiber coupled diode lasers is to preserve the brightness of the diode sources while maximizing the coupling efficiency. In coupling diode laser power into a fiber output, the symmetrical geometry of the fiber core makes it highly desirable to have symmetrical BPPs at the fiber input surface, but this is not always practical. It is therefore desirable to be able to know the 'diagonal' (fiber) BPP, using the BPPs of the fast and slow axes, before detailed design and simulation processes. A commonly used expression for this purpose, i.e. the square root of the sum of the squares of the BPPs in the fast and slow axes, has been found to consistently under-predict the fiber BPP (i.e. better beam quality is predicted than is actually achievable in practice). In this paper, using a simplified model, we provide the proof of the proper calculation of the diagonal (i.e. the fiber) BPP using the BPPs of the fast and slow axes as input. Using the same simplified model, we also offer proof that the fiber BPP has a minimum (optimal) value for given diode BPPs, and this optimized condition can be obtained before any detailed design and simulation are carried out. Measured and simulated data confirm satisfactory correlation between the BPPs of the diode and the predicted fiber BPP.
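    The commonly used root-sum-of-squares estimate criticized above is straightforward to state; per the paper it should be read as an optimistic lower bound on the achievable fiber BPP rather than a reliable prediction. A sketch (the numerical values are illustrative):

```python
import math

def diagonal_bpp_rss(bpp_fast, bpp_slow):
    # Root-sum-of-squares estimate of the diagonal (fiber) BPP.
    # The paper reports this consistently UNDER-predicts the fiber BPP,
    # i.e. it predicts better beam quality than is achievable in practice.
    return math.hypot(bpp_fast, bpp_slow)

print(diagonal_bpp_rss(3.0, 4.0))  # 5.0 (mm·mrad), an optimistic lower bound
```

    The paper's contribution is the corrected calculation that removes this systematic under-prediction.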

  11. Galileo - The Serial-Production AIT Challenge

    NASA Technical Reports Server (NTRS)

    Ragnit, Ulrike; Brunner, Otto

    2008-01-01

    The Galileo Project is one of the most demanding projects of ESA, being Europe's autarkic navigation system and a constellation composed of 30 satellites. This presentation points out the different phases of the project up to full operational capability and the corresponding launch options with respect to launch vehicles as well as launch configurations. One of the biggest challenges is to set up a small serial 'production line' for the overall integration and test campaign of the satellites. This production line demands an optimization of all relevant tasks, also taking into account backup and recovery actions. A comprehensive AIT concept is required, reflecting a tightly merged facility layout and work flow design. In addition, a common data management system is needed to handle all spacecraft related documentation and to have a direct input-output flow for all activities, phases and positions at the same time. Process optimization is a well known field of engineering in all small high tech production lines; nevertheless, serial production of satellites is still not a daily task in the space business, and therefore new concepts have to be put in place. In order to meet the satellites' overall system optimization, a thorough interface between unit/subsystem manufacturing and satellite AIT must be realized to ensure a smooth flow and to avoid any process interruption, which would directly lead to a schedule impact.

  12. Performance Optimizing Multi-Objective Adaptive Control with Time-Varying Model Reference Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Hashemi, Kelley E.; Yucelen, Tansel; Arabi, Ehsan

    2017-01-01

    This paper presents a new adaptive control approach that involves a performance optimization objective. The problem is cast as a multi-objective optimal control. The control synthesis involves the design of a performance optimizing controller from a subset of control inputs. The effect of the performance optimizing controller is to introduce an uncertainty into the system that can degrade tracking of the reference model. An adaptive controller from the remaining control inputs is designed to reduce the effect of the uncertainty while maintaining a notion of performance optimization in the adaptive control system.

  13. Evaluating the Usefulness of High-Temporal Resolution Vegetation Indices to Identify Crop Types

    NASA Astrophysics Data System (ADS)

    Hilbert, K.; Lewis, D.; O'Hara, C. G.

    2006-12-01

    The National Aeronautics and Space Administration (NASA) and the United States Department of Agriculture (USDA) jointly sponsored research covering the 2004 to 2006 South American crop seasons that focused on developing methods for the USDA's Foreign Agricultural Service's (FAS) Production Estimates and Crop Assessment Division (PECAD) to identify crop types using MODIS-derived, hyper-temporal Normalized Difference Vegetation Index (NDVI) images. NDVI images were composited in 8-day intervals from daily NDVI images and aggregated to create a hyper-temporal NDVI layerstack. This NDVI layerstack was used as input to image classification algorithms. Research results indicated that creating high-temporal resolution NDVI composites from NASA's MODerate Resolution Imaging Spectroradiometer (MODIS) data products provides useful input to crop type classifications as well as potentially useful input for regional crop productivity modeling efforts. A current NASA-sponsored Rapid Prototyping Capability (RPC) experiment will assess the utility of simulated future Visible Infrared Imager / Radiometer Suite (VIIRS) imagery for conducting NDVI-derived land cover and specific crop type classifications. In the experiment, methods will be considered to refine current MODIS data streams, reduce the noise content of the MODIS data, and utilize the MODIS data as an input to the VIIRS simulation process. The effort is also being conducted in concert with an ISS project that will further evaluate, verify and validate the usefulness of specific data products to provide remote sensing-derived input for the Sinclair Model, a semi-mechanistic model for estimating crop yield. The study area encompasses a large portion of the Pampas region of Argentina, a major world producer of crops such as corn, soybeans, and wheat, which makes it a competitor to the US.
    ITD partnered with researchers at the Center for Surveying Agricultural and Natural Resources (CREAN) of the National University of Cordoba, Argentina, and CREAN personnel collected and continue to collect field-level, GIS-based in situ information. Current efforts involve both developing and optimizing software tools for the necessary data processing. The software includes the Time Series Product Tool (TSPT), Leica's ERDAS Imagine, and Mississippi State University's Temporal Map Algebra computational tools.

  14. Optimization of cell seeding in a 2D bio-scaffold system using computational models.

    PubMed

    Ho, Nicholas; Chua, Matthew; Chui, Chee-Kong

    2017-05-01

    The cell expansion process is a crucial part of generating cells on a large scale in a bioreactor system. Hence, it is important to set operating conditions (e.g. initial cell seeding distribution, culture medium flow rate) at an optimal level. Often, the initial cell seeding distribution factor is neglected and/or overlooked in the design of a bioreactor using conventional seeding distribution methods. This paper proposes a novel seeding distribution method that aims to maximize cell growth and minimize production time/cost. The proposed method utilizes two computational models; the first model represents cell growth patterns, whereas the second model determines optimal initial cell seeding positions for adherent cell expansions. Cell growth simulation from the first model demonstrates that the model can represent various cell types with known probabilities. The second model involves a combination of combinatorial optimization, Monte Carlo methods and concepts of the first model, and is used to design a multi-layer 2D bio-scaffold system that increases cell production efficiency in bioreactor applications. Simulation results have shown that the input configurations recommended by the proposed optimization method are optimal, and have illustrated the effectiveness of the method. The potential of the proposed seeding distribution method as a useful tool to optimize the cell expansion process in modern bioreactor system applications is highlighted. Copyright © 2017 Elsevier Ltd. All rights reserved.
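    The flavor of the second model, combining a stochastic growth simulation with a comparison of seeding layouts, can be sketched with a toy Monte Carlo on a 2D grid (the grid size, division probability and layouts below are hypothetical, not the paper's):

```python
import random

def grow(seeds, size=20, p_divide=0.3, steps=15, trials=200):
    """Monte Carlo estimate of final cell count for a seeding layout.

    Each cell divides into a random free neighbour with probability
    p_divide per step (a toy stand-in for the paper's growth model).
    """
    total, rng = 0, random.Random(0)
    for _ in range(trials):
        occupied = set(seeds)
        for _ in range(steps):
            for x, y in list(occupied):
                if rng.random() < p_divide:
                    free = [(nx, ny)
                            for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1))
                            if 0 <= nx < size and 0 <= ny < size
                            and (nx, ny) not in occupied]
                    if free:
                        occupied.add(rng.choice(free))
        total += len(occupied)
    return total / trials

clustered = [(0, 0), (0, 1), (1, 0), (1, 1)]
spread = [(4, 4), (4, 14), (14, 4), (14, 14)]
print(grow(spread) > grow(clustered))  # spread seeds keep more free frontier
```

    A seeding optimizer would then search over candidate layouts using such simulated growth as the objective.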

  15. Tracking historical increases in nitrogen-driven crop production possibilities

    NASA Astrophysics Data System (ADS)

    Mueller, N. D.; Lassaletta, L.; Billen, G.; Garnier, J.; Gerber, J. S.

    2015-12-01

    The environmental costs of nitrogen use have prompted a focus on improving the efficiency of nitrogen use in the global food system, the primary source of nitrogen pollution. Typical approaches to improving agricultural nitrogen use efficiency include more targeted field-level use (timing, placement, and rate) and modification of the crop mix. However, global efficiency gains can also be achieved by improving the spatial allocation of nitrogen between regions or countries, due to consistent diminishing returns at high nitrogen use. This concept is examined by constructing a tradeoff frontier (or production possibilities frontier) describing global crop protein yield as a function of applied nitrogen from all sources, given optimal spatial allocation. Yearly variation in country-level input-output nitrogen budgets is utilized to parameterize country-specific hyperbolic yield-response models. Response functions are further characterized for three ~15-year eras beginning in 1961, and a series of calculations uses these curves to simulate optimal spatial allocation in each era and determine the frontier. The analyses reveal that excess nitrogen (in recent years) could be reduced by ~40% given optimal spatial allocation. Over time, we find that gains in yield potential and in-country nitrogen use efficiency have led to increases in the global nitrogen production possibilities frontier. However, this promising shift has been accompanied by an actual spatial distribution of nitrogen use that has become less optimal, in an absolute sense, relative to the frontier. We conclude that examination of global production possibilities is a promising approach to understanding production constraints and efficiency opportunities in the global food system.
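    The reallocation argument rests on diminishing returns: with a saturating (hyperbolic) yield response, a unit of nitrogen produces more protein where current application is low. A sketch assuming the common form Y = Ymax*N/(N+K), with hypothetical parameters rather than the paper's fitted country curves:

```python
def protein_yield(n, ymax, k):
    # Hyperbolic (saturating) yield response: strong diminishing returns in N.
    return ymax * n / (n + k)

ymax, k = 100.0, 50.0
# Same total nitrogen (200 units) split between two identical regions:
even = protein_yield(100, ymax, k) + protein_yield(100, ymax, k)
skewed = protein_yield(180, ymax, k) + protein_yield(20, ymax, k)
print(round(even, 2), round(skewed, 2))  # even allocation out-produces skewed
```

    Equalizing marginal yield responses across regions is exactly what the optimal spatial allocation in the frontier calculation does.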

  16. Optimal Output Trajectory Redesign for Invertible Systems

    NASA Technical Reports Server (NTRS)

    Devasia, S.

    1996-01-01

    Given a desired output trajectory, inversion-based techniques find the input-state trajectories required to exactly track the output. These inversion-based techniques have been successfully applied to the endpoint tracking control of multijoint flexible manipulators and to aircraft control. The specified output trajectory uniquely determines the required input and state trajectories that are found through inversion. These input-state trajectories exactly track the desired output; however, they might not meet acceptable performance requirements. For example, during slewing maneuvers of flexible structures, the structural deformations, which depend on the required state trajectories, may be unacceptably large. Further, the required inputs might cause actuator saturation during an exact tracking maneuver, for example, in the flight control of conventional takeoff and landing aircraft. In such situations, a compromise is desired between the tracking requirement and other goals such as reduction of internal vibrations and prevention of actuator saturation; the desired output trajectory needs to be redesigned. Here, we pose the trajectory redesign problem as an optimization of a general quadratic cost function and solve it in the context of linear systems. The solution is obtained as an off-line prefilter of the desired output trajectory. An advantage of our technique is that the prefilter is independent of the particular trajectory. The prefilter can therefore be precomputed, which is a major advantage over other optimization approaches. Previous works have addressed the issue of preshaping inputs to minimize residual and in-maneuver vibrations for flexible structures, with the command preshaping computed off-line. Minimization of optimal quadratic cost functions has also previously been used to preshape command inputs for disturbance rejection. All of these approaches are applicable when the inputs to the system are known a priori.
    Typically, outputs (not inputs) are specified in tracking problems, and hence the input trajectories have to be computed. The inputs to the system are, however, difficult to determine for non-minimum phase systems like flexible structures. One approach to solve this problem is to (1) choose a tracking controller (the desired output trajectory is now an input to the closed-loop system) and (2) redesign this input to the closed-loop system. Thus we effectively perform output redesign. These redesigns are, however, dependent on the choice of the tracking controllers. Thus the controller optimization and trajectory redesign problems become coupled; this coupled optimization is still an open problem. In contrast, we decouple the trajectory redesign problem from the choice of feedback-based tracking controller. It is noted that our approach remains valid when a particular tracking controller is chosen. In addition, the formulation of our problem not only allows for the minimization of residual vibration as in available techniques but also allows for the optimal reduction of vibrations during the maneuver, e.g., in the attitude control of flexible spacecraft. We begin by formulating the optimal output trajectory redesign problem and then solve it in the context of general linear systems. This theory is then applied to an example flexible structure, and simulation results are provided.

  17. RET selection on state-of-the-art NAND flash

    NASA Astrophysics Data System (ADS)

    Lafferty, Neal V.; He, Yuan; Pei, Jinhua; Shao, Feng; Liu, QingWei; Shi, Xuelong

    2015-03-01

    We present results generated using a new gauge-based Resolution Enhancement Technique (RET) selection flow during the technology set-up phase of a 3x-node NAND Flash product. As a testcase, we consider a challenging critical level for this flash product. The RET solutions include inverse lithography technology (ILT) optimized masks with sub-resolution assist features (SRAF) and companion illumination sources developed using a new pixel-based Source Mask Optimization (SMO) tool that uses measurement gauges as a primary input. The flow includes verification objectives which allow tolerancing of particular measurement gauges based on lithographic criteria. Relative importance for particular gauges may also be set, to aid in down-selection from several candidate sources. The end result is a sensitive, objective score of RET performance. Using these custom-defined importance metrics, decisions on the final RET style can be made in an objective way.

  18. Optimal simulations of ultrasonic fields produced by large thermal therapy arrays using the angular spectrum approach

    PubMed Central

    Zeng, Xiaozheng; McGough, Robert J.

    2009-01-01

    The angular spectrum approach is evaluated for the simulation of focused ultrasound fields produced by large thermal therapy arrays. For an input pressure or normal particle velocity distribution in a plane, the angular spectrum approach rapidly computes the output pressure field in a three-dimensional volume. To determine the optimal combination of simulation parameters for angular spectrum calculations, the effect of the size, location, and numerical accuracy of the input plane on the computed output pressure is evaluated. Simulation results demonstrate that angular spectrum calculations performed with an input pressure plane are more accurate than calculations with an input velocity plane. Results also indicate that when the input pressure plane is slightly larger than the array aperture and is located approximately one wavelength from the array, angular spectrum simulations have very small numerical errors for two-dimensional planar arrays. Furthermore, the root mean squared error from angular spectrum simulations asymptotically approaches a nonzero lower limit as the error in the input plane decreases. Overall, the angular spectrum approach is an accurate and robust method for thermal therapy simulations of large ultrasound phased arrays when the input pressure plane is computed with the fast nearfield method and an optimal combination of input parameters. PMID:19425640
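    The core of the angular spectrum approach is: transform the input plane to spatial frequencies, multiply each propagating component by exp(i*kz*z), and transform back. A 1-D, pure-Python sketch using a plain DFT (real implementations use 2-D FFTs; the parameters below are illustrative):

```python
import cmath, math

def angular_spectrum_1d(p0, dx, z, wavelength):
    """Propagate a 1-D pressure line p0 a distance z (scalar, lossless).

    Forward DFT -> multiply by exp(i*kz*z) -> inverse DFT. Evanescent
    components (|kx| > k) are discarded, as is common practice.
    """
    n = len(p0)
    k = 2 * math.pi / wavelength
    # Spatial frequency of each DFT bin (wrapped to +/- Nyquist).
    kxs = [2 * math.pi * ((m if m <= n // 2 else m - n) / (n * dx))
           for m in range(n)]
    spec = [sum(p0[j] * cmath.exp(-2j * math.pi * m * j / n) for j in range(n))
            for m in range(n)]
    prop = []
    for m, kx in enumerate(kxs):
        if abs(kx) <= k:
            kz = math.sqrt(k * k - kx * kx)
            prop.append(spec[m] * cmath.exp(1j * kz * z))
        else:
            prop.append(0.0)  # drop evanescent waves
    return [sum(prop[m] * cmath.exp(2j * math.pi * m * j / n) for m in range(n)) / n
            for j in range(n)]

# A uniform (plane-wave) input stays uniform; only its phase advances.
field = angular_spectrum_1d([1.0] * 32, dx=1e-4, z=5e-3, wavelength=1e-3)
print(round(abs(field[0]), 6))  # ≈ 1.0
```

    The paper's parameter study (input-plane size, location, and accuracy) concerns how p0 is obtained and sampled before this propagation step.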

  19. Noise in Charge Amplifiers—A gm/ID Approach

    NASA Astrophysics Data System (ADS)

    Alvarez, Enrique; Avila, Diego; Campillo, Hernan; Dragone, Angelo; Abusleme, Angel

    2012-10-01

    Charge amplifiers represent the standard solution for amplifying signals from capacitive detectors in high energy physics experiments. In a typical front-end, the noise due to the charge amplifier, and particularly from its input transistor, limits the achievable resolution. The classic approach to attenuating noise effects in MOSFET charge amplifiers is to use the maximum power available, to use a minimum-length input device, and to set the input transistor width so as to achieve optimal capacitive matching at the input node. These conclusions, reached by analysis based on simple noise models, lead to sub-optimal results. In this work, a new approach to noise analysis for charge amplifiers based on an extension of the gm/ID methodology is presented. This method combines circuit equations and results from SPICE simulations, both valid for all operation regions and including all noise sources. The method, which makes it possible to find the optimal operating point of the charge amplifier input device for maximum resolution, shows that the minimum device length is not necessarily optimal and that flicker noise is responsible for the non-monotonic noise-versus-current function, and provides deeper insight into the noise-limiting mechanisms from an alternative and more design-oriented point of view.

  20. Optimal control of nonlinear continuous-time systems in strict-feedback form.

    PubMed

    Zargarzadeh, Hassan; Dierks, Travis; Jagannathan, Sarangapani

    2015-10-01

    This paper proposes a novel optimal tracking control scheme for nonlinear continuous-time systems in strict-feedback form with uncertain dynamics. The optimal tracking problem is transformed into an equivalent optimal regulation problem through a feedforward adaptive control input that is generated by modifying the standard backstepping technique. Subsequently, a neural network-based optimal control scheme is introduced to estimate the cost, or value function, over an infinite horizon for the resulting nonlinear continuous-time systems in affine form when the internal dynamics are unknown. The estimated cost function is then used to obtain the optimal feedback control input; therefore, the overall optimal control input for the nonlinear continuous-time system in strict-feedback form includes the feedforward plus the optimal feedback terms. It is shown that the estimated cost function minimizes the Hamilton-Jacobi-Bellman estimation error in a forward-in-time manner without using any value or policy iterations. Finally, optimal output feedback control is introduced through the design of a suitable observer. Lyapunov theory is utilized to show the overall stability of the proposed schemes without requiring an initial admissible controller. Simulation examples are provided to validate the theoretical results.

  1. An engineering optimization method with application to STOL-aircraft approach and landing trajectories

    NASA Technical Reports Server (NTRS)

    Jacob, H. G.

    1972-01-01

    An optimization method has been developed that computes the optimal open loop inputs for a dynamical system by observing only its output. The method reduces to static optimization by expressing the inputs as series of functions with parameters to be optimized. Since the method is not concerned with the details of the dynamical system to be optimized, it works for both linear and nonlinear systems. The method and the application to optimizing longitudinal landing paths for a STOL aircraft with an augmented wing are discussed. Noise, fuel, time, and path deviation minimizations are considered with and without angle of attack, acceleration excursion, flight path, endpoint, and other constraints.
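    The idea of reducing dynamic optimization to static optimization, expressing the open-loop input as a series of basis functions and optimizing only the coefficients while observing the system as a black box, can be sketched as follows (the plant, basis, and cost below are hypothetical stand-ins, not the STOL aircraft model):

```python
def simulate(coeffs, T=1.0, steps=100):
    # Black-box plant: only the output is observed by the optimizer.
    # (Here a first-order lag x' = -x + u, Euler-integrated.)
    dt, x = T / steps, 0.0
    for i in range(steps):
        t = i * dt
        u = coeffs[0] + coeffs[1] * t  # input expressed as a basis series
        x += dt * (-x + u)
    return x

def cost(coeffs, target=1.0):
    # Endpoint tracking cost; constraints/penalties would be added here.
    return (simulate(coeffs) - target) ** 2

# Static optimization over the series parameters (coarse grid search).
grid = [i / 10 for i in range(0, 31)]
best = min(((c0, c1) for c0 in grid for c1 in grid), key=cost)
print(cost(best) < 1e-3)  # True: the parameterized input reaches the target
```

    Because only the output enters the cost, the same machinery applies unchanged to nonlinear plants, which is the method's main appeal.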

  2. Optimum design of hybrid phase locked loops

    NASA Technical Reports Server (NTRS)

    Lee, P.; Yan, T.

    1981-01-01

    The design procedure of phase locked loops is described in which the analog loop filter is replaced by a digital computer. Specific design curves are given for step and ramp input changes in phase. It is shown that the designed digital filter depends explicitly on the product of the sampling time and the noise bandwidth of the phase locked loop. This technique of optimization can be applied to the design of hybrid digital-analog loops for other applications.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Weizhao; Ren, Huaqing; Lu, Jie

    This paper reports several characterization methods for the properties of the uncured woven prepreg during the preforming process. Uniaxial tension, bias-extension, and bending tests are conducted to measure the in-plane properties of the material. Friction tests are utilized to reveal the prepreg-prepreg and prepreg-forming tool interactions. All these tests are performed within the temperature range of the real manufacturing process. The results serve as inputs to the numerical simulation for product prediction and preforming process parameter optimization.

  4. Hearing aids and music.

    PubMed

    Chasin, Marshall; Russo, Frank A

    2004-01-01

    Historically, the primary concern in hearing aid design and fitting has been optimization for speech inputs. However, increasingly other types of inputs are being investigated, and this is certainly the case for music. Whether the hearing aid wearer is a musician or merely someone who likes to listen to music, the electronic and electro-acoustic parameters described can be optimized for music as well as for speech. That is, a hearing aid optimally set for music can be optimally set for speech, even though the converse is not necessarily true. Similarities and differences between speech and music as inputs to a hearing aid are described. Many of these lead to the specification of a set of optimal electro-acoustic characteristics. Parameters such as the peak input-limiting level, compression settings (both compression ratio and knee-points), and the number of channels can all deleteriously affect music perception through hearing aids. In other cases, it is not clear how to set other parameters such as noise reduction and feedback control mechanisms. Regardless of the existence of a "music program", unless the various electro-acoustic parameters are available in a hearing aid, music fidelity will almost always be less than optimal. There are many unanswered questions and hypotheses in this area. Future research by engineers, researchers, clinicians, and musicians will aid in the clarification of these questions and their ultimate solutions.

  5. Implications of Preference and Problem Formulation on the Operating Policies of Complex Multi-Reservoir Systems

    NASA Astrophysics Data System (ADS)

    Quinn, J.; Reed, P. M.; Giuliani, M.; Castelletti, A.

    2016-12-01

    Optimizing the operations of multi-reservoir systems poses several challenges: 1) the high dimension of the problem's states and controls, 2) the need to balance conflicting multi-sector objectives, and 3) understanding how uncertainties impact system performance. These difficulties motivated the development of the Evolutionary Multi-Objective Direct Policy Search (EMODPS) framework, in which multi-reservoir operating policies are parameterized in a given family of functions and then optimized for multiple objectives through simulation over a set of stochastic inputs. However, properly framing these objectives remains a severe challenge and a neglected source of uncertainty. Here, we use EMODPS to optimize operating policies for a 4-reservoir system in the Red River Basin in Vietnam, exploring the consequences of optimizing to different sets of objectives related to 1) hydropower production, 2) meeting multi-sector water demands, and 3) providing flood protection to the capital city of Hanoi. We show how coordinated operation of the reservoirs can differ markedly depending on how decision makers weigh these concerns. Moreover, we illustrate how formulation choices that emphasize the mean, tail, or variability of performance across objective combinations must be evaluated carefully. Our results show that these choices can significantly improve attainable system performance, or yield severe unintended consequences. Finally, we show that satisfactory validation of the operating policies on a set of out-of-sample stochastic inputs depends as much or more on the formulation of the objectives as on effective optimization of the policies. These observations highlight the importance of carefully considering how we abstract stakeholders' objectives and of iteratively optimizing and visualizing multiple problem formulation hypotheses to ensure that we capture the most important tradeoffs that emerge from different stakeholder preferences.

  6. Benefit-Risk Assessment, Communication, and Evaluation (BRACE) throughout the life cycle of therapeutic products: overall perspective and role of the pharmacoepidemiologist.

    PubMed

    Radawski, Christine; Morrato, Elaine; Hornbuckle, Kenneth; Bahri, Priya; Smith, Meredith; Juhaeri, Juhaeri; Mol, Peter; Levitan, Bennett; Huang, Han-Yao; Coplan, Paul; Li, Hu

    2015-12-01

    Optimizing a therapeutic product's benefit-risk profile is an on-going process throughout the product's life cycle. Different, yet related, benefit-risk assessment strategies and frameworks are being developed by various regulatory agencies, industry groups, and stakeholders. This paper summarizes current best practices and discusses the role of the pharmacoepidemiologist in these activities, taking a life-cycle approach to integrated Benefit-Risk Assessment, Communication, and Evaluation (BRACE). A review of the medical and regulatory literature was performed for the following steps involved in therapeutic benefit-risk optimization: benefit-risk evidence generation; data integration and analysis; decision making; regulatory and policy decision making; benefit-risk communication and risk minimization; and evaluation. Feedback from International Society for Pharmacoepidemiology members was solicited on the role of the pharmacoepidemiologist. The case example of natalizumab is provided to illustrate the cyclic nature of the benefit-risk optimization process. No single, globally adopted benefit-risk assessment process exists. The BRACE heuristic offers a way to clarify research needs and to promote best practices in a cyclic and integrated manner and highlight the critical importance of cross-disciplinary input. Its approach focuses on the integration of BRACE activities for risk minimization and optimization of the benefit-risk profile. The activities defined in the BRACE heuristic contribute to the optimization of the benefit-risk profile of therapeutic products in the clinical world at both the patient and population health level. With interdisciplinary collaboration, pharmacoepidemiologists are well suited for bringing in methodology expertise, relevant research, and public health perspectives into the BRACE process. Copyright © 2015 John Wiley & Sons, Ltd.

  7. Directional hearing by linear summation of binaural inputs at the medial superior olive

    PubMed Central

    van der Heijden, Marcel; Lorteije, Jeannette A. M.; Plauška, Andrius; Roberts, Michael T.; Golding, Nace L.; Borst, J. Gerard G.

    2013-01-01

    Neurons in the medial superior olive (MSO) enable sound localization by their remarkable sensitivity to submillisecond interaural time differences (ITDs). Each MSO neuron has its own “best ITD” to which it responds optimally. A difference in physical path length of the excitatory inputs from both ears cannot fully account for the ITD tuning of MSO neurons. As a result, it is still debated how these inputs interact and whether the segregation of inputs to opposite dendrites, well-timed synaptic inhibition, or asymmetries in synaptic potentials or cellular morphology further optimize coincidence detection or ITD tuning. Using in vivo whole-cell and juxtacellular recordings, we show here that ITD tuning of MSO neurons is determined by the timing of their excitatory inputs. The inputs from both ears sum linearly, whereas spike probability depends nonlinearly on the size of synaptic inputs. This simple coincidence detection scheme thus makes accurate sound localization possible. PMID:23764292
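
    The linear-summation/nonlinear-threshold scheme described above can be illustrated with a toy numerical sketch (all kinetics, thresholds, and the internal delay below are hypothetical, not fitted to MSO data): two alpha-shaped inputs sum linearly, spike probability is a sigmoid of the summed potential, and scanning the ITD traces out a tuning curve peaked at the internal best delay.

```python
import numpy as np

def epsp(t, t0, tau=0.3):
    """Alpha-shaped synaptic potential (times in ms); hypothetical kinetics."""
    s = (t - t0) / tau
    return np.where(s > 0, s * np.exp(1 - s), 0.0)

def spike_prob(v, threshold=1.5, slope=0.1):
    """Nonlinear (sigmoidal) spike probability on the summed potential."""
    return 1.0 / (1.0 + np.exp(-(v - threshold) / slope))

t = np.linspace(0, 5, 2001)           # time grid, ms
itds = np.linspace(-1.0, 1.0, 81)     # interaural time difference, ms
best_itd_internal = 0.2               # assumed internal delay favoring one ear

tuning = []
for itd in itds:
    # Linear summation of the two monaural inputs, then nonlinear readout.
    v = epsp(t, 2.0) + epsp(t, 2.0 + itd - best_itd_internal)
    tuning.append(spike_prob(v).max())
tuning = np.array(tuning)

best = itds[np.argmax(tuning)]
print(f"best ITD = {best:.2f} ms")    # peaks where the two inputs coincide
```

    Only at the best ITD do the two unitary potentials superpose enough to cross the sigmoid's steep region, which is the essence of coincidence detection.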

  8. Actor-critic-based optimal tracking for partially unknown nonlinear discrete-time systems.

    PubMed

    Kiumarsi, Bahare; Lewis, Frank L

    2015-01-01

    This paper presents a partially model-free adaptive optimal control solution to the deterministic nonlinear discrete-time (DT) tracking control problem in the presence of input constraints. The tracking error dynamics and reference trajectory dynamics are first combined to form an augmented system. Then, a new discounted performance function based on the augmented system is presented for the optimal nonlinear tracking problem. In contrast to the standard solution, which finds the feedforward and feedback terms of the control input separately, the minimization of the proposed discounted performance function gives both feedback and feedforward parts of the control input simultaneously. This enables us to encode the input constraints into the optimization problem using a nonquadratic performance function. The DT tracking Bellman equation and tracking Hamilton-Jacobi-Bellman (HJB) are derived. An actor-critic-based reinforcement learning algorithm is used to learn the solution to the tracking HJB equation online without requiring knowledge of the system drift dynamics. That is, two neural networks (NNs), namely, actor NN and critic NN, are tuned online and simultaneously to generate the optimal bounded control policy. A simulation example is given to show the effectiveness of the proposed method.
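
    As a rough model-based illustration of the augmented-system formulation (the paper itself solves the problem model-free with actor and critic NNs, and additionally encodes input constraints via a nonquadratic cost), one can stack the plant and reference-generator dynamics, iterate the discounted tracking Riccati equation, and obtain a single gain that supplies feedback and feedforward action together. All matrices and weights below are hypothetical.

```python
import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.8]])   # plant: x_{k+1} = A x + B u (assumed)
B = np.array([[0.0], [1.0]])
F = np.array([[1.0]])                    # reference generator: r_{k+1} = F r
T = np.block([[A, np.zeros((2, 1))], [np.zeros((1, 2)), F]])  # augmented dynamics
B1 = np.vstack([B, np.zeros((1, 1))])
C = np.array([[1.0, 0.0, -1.0]])         # tracking error e = x1 - r
Q = 100.0 * C.T @ C                      # penalize the error, not the raw state
R = np.array([[1.0]])
g = 0.95                                 # discount factor

# Value iteration on the discounted tracking Riccati equation.
P = np.zeros((3, 3))
for _ in range(500):
    S = R + g * B1.T @ P @ B1
    K = g * np.linalg.solve(S, B1.T @ P @ T)
    P = Q + g * T.T @ P @ T - g**2 * T.T @ P @ B1 @ np.linalg.solve(S, B1.T @ P @ T)

# The single gain K acts on [x; r], so u = -K X contains both the feedback
# and the feedforward parts of the control input simultaneously.
X = np.array([0.0, 0.0, 1.0])            # start at rest, constant reference r = 1
for _ in range(300):
    X = (T - B1 @ K) @ X
print("final tracking error:", abs(X[0] - X[2]))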

  9. Adaptive adjustment of interval predictive control based on combined model and application in shell brand petroleum distillation tower

    NASA Astrophysics Data System (ADS)

    Sun, Chao; Zhang, Chunran; Gu, Xinfeng; Liu, Bin

    2017-10-01

    When predictive control is applied to an industrial production process, the constraints of the optimization objective often cannot be met, and the online predictive controller then fails to find a feasible, or a globally optimal, solution. To solve this problem, based on a Back Propagation-Auto Regressive with exogenous inputs (BP-ARX) combined control model, nonlinear programming is used to analyze the feasibility of constrained predictive control, a feasibility decision theorem for the optimization objective is proposed, and a solution method for the soft-constraint slack variables is given for the case where the optimization objective is infeasible. On this basis, for interval control requirements on the controlled variables, the solved slack variables are introduced and an adaptive weighted interval predictive control algorithm is proposed, achieving adaptive regulation of the optimization objective and automatic adjustment of the infeasible interval range, expanding the feasible region, and ensuring the feasibility of the interval optimization objective. Finally, the feasibility and effectiveness of the algorithm are validated through comparative simulation experiments.
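
    A minimal sketch of the slack-variable idea, assuming SciPy is available (the constraint set is a made-up toy, not the paper's BP-ARX model): when the constraints are infeasible, soften each one with a nonnegative slack and minimize the total slack with a linear program; the optimum quantifies the smallest relaxation that restores feasibility.

```python
import numpy as np
from scipy.optimize import linprog

# Infeasible toy constraint set: x <= 1 and x >= 3 cannot hold together.
A = np.array([[1.0], [-1.0]])
b = np.array([1.0, -3.0])
m, n = A.shape

# Relax A x <= b to A x <= b + s with s >= 0, and minimize the total slack.
c = np.concatenate([np.zeros(n), np.ones(m)])   # objective: sum of slacks
A_ub = np.hstack([A, -np.eye(m)])               # A x - s <= b
bounds = [(None, None)] * n + [(0, None)] * m   # x free, s nonnegative

res = linprog(c, A_ub=A_ub, b_ub=b, bounds=bounds)
x, s = res.x[:n], res.x[n:]
print("minimal total slack:", res.fun)  # the constraints must be widened by 2
```

    The recovered slacks play the role of the solved slack variables above: they tell the interval controller how far the infeasible interval bounds must be widened before a feasible optimization objective exists.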

  10. Processor design optimization methodology for synthetic vision systems

    NASA Astrophysics Data System (ADS)

    Wren, Bill; Tarleton, Norman G.; Symosek, Peter F.

    1997-06-01

    Architecture optimization requires numerous inputs from hardware to software specifications. The task of varying these input parameters to obtain an optimal system architecture with regard to cost, specified performance and method of upgrade considerably increases the development cost due to the infinitude of events, most of which cannot even be defined by any simple enumeration or set of inequalities. We shall address the use of a PC-based tool using genetic algorithms to optimize the architecture for an avionics synthetic vision system, specifically passive millimeter wave system implementation.

  11. F-18 High Alpha Research Vehicle (HARV) parameter identification flight test maneuvers for optimal input design validation and lateral control effectiveness

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1995-01-01

    Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for open-loop parameter identification purposes, specifically for optimal input design validation at 5 degrees angle of attack, identification of individual strake effectiveness at 40 and 50 degrees angle of attack, and study of lateral dynamics and lateral control effectiveness at 40 and 50 degrees angle of attack. Each maneuver is to be realized by applying square wave inputs to specific control effectors using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points that define each input are included, along with plots of the input time histories.

  12. Improving ECG Classification Accuracy Using an Ensemble of Neural Network Modules

    PubMed Central

    Javadi, Mehrdad; Ebrahimpour, Reza; Sajedin, Atena; Faridi, Soheil; Zakernejad, Shokoufeh

    2011-01-01

    This paper illustrates the use of a combined neural network model based on the Stacked Generalization method for classification of electrocardiogram (ECG) beats. In the conventional Stacked Generalization method, the combiner learns to map the base classifiers' outputs to the target data. We claim that adding the input pattern to the base classifiers' outputs helps the combiner to obtain knowledge about the input space and, as a result, to perform better on the same task. Experimental results support our claim that this additional knowledge of the input space improves the performance of the proposed method, which is called Modified Stacked Generalization. In particular, for classification of 14966 ECG beats that were not previously seen during the training phase, the Modified Stacked Generalization method reduced the error rate by 12.41% in comparison with the best of ten popular classifier fusion methods, including Max, Min, Average, Product, Majority Voting, Borda Count, Decision Templates, Weighted Averaging based on Particle Swarm Optimization, and Stacked Generalization. PMID:22046232
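
    The claimed benefit of feeding the input pattern to the combiner can be reproduced on synthetic data (a deliberately simple stand-in for ECG features; the base classifiers and the logistic combiner below are illustrative, not the paper's networks):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: label depends linearly on both features.
X = rng.normal(size=(400, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Two weak "base classifiers": simple axis thresholds. Their binary outputs
# alone discard the margin information the combiner would need.
base_out = np.column_stack([(X[:, 0] > 0), (X[:, 1] > 0)]).astype(float)

def train_logistic(Z, y, steps=3000, lr=0.5):
    """Plain logistic-regression combiner trained by gradient descent."""
    Zb = np.column_stack([Z, np.ones(len(Z))])   # bias term
    w = np.zeros(Zb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Zb @ w))
        w -= lr * Zb.T @ (p - y) / len(y)
    return lambda Zn: 1 / (1 + np.exp(-np.column_stack([Zn, np.ones(len(Zn))]) @ w))

# Conventional stacking: the combiner sees only the base outputs.
acc_plain = ((train_logistic(base_out, y)(base_out) > 0.5) == y).mean()
# Modified stacking: base outputs concatenated with the input pattern.
Z = np.column_stack([base_out, X])
acc_mod = ((train_logistic(Z, y)(Z) > 0.5) == y).mean()
print(acc_plain, acc_mod)
```

    With only the two binary base outputs the combiner cannot resolve the mixed cases, while appending the raw inputs restores the information needed to recover the true decision boundary, mirroring the Modified Stacked Generalization argument.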

  13. Thermal/structural Tailoring of Engine Blades (T/STAEBL) User's Manual

    NASA Technical Reports Server (NTRS)

    Brown, K. W.; Clevenger, W. B.; Arel, J. D.

    1994-01-01

    The Thermal/Structural Tailoring of Engine Blades (T/STAEBL) system is a family of computer programs executed by a control program. The T/STAEBL system performs design optimizations of cooled, hollow turbine blades and vanes. This manual contains an overview of the system, fundamentals of the data block structure, and detailed descriptions of the inputs required by the optimizer. Additionally, the thermal analysis input requirements are described as well as the inputs required to perform a finite element blade vibrations analysis.

  14. Tailoring Microbial Electrochemical Cells for Production of Hydrogen Peroxide at High Concentrations and Efficiencies.

    PubMed

    Young, Michelle N; Links, Mikaela J; Popat, Sudeep C; Rittmann, Bruce E; Torres, César I

    2016-12-08

    A microbial peroxide producing cell (MPPC) for H₂O₂ production at the cathode was systematically optimized with minimal energy input. First, the stability of H₂O₂ was evaluated using different catholytes, membranes, and catalyst materials. On the basis of these results, a flat-plate MPPC fed continuously with a 200 mM NaCl catholyte at a 4 h hydraulic retention time was designed and operated, producing H₂O₂ for 18 days. An H₂O₂ concentration of 3.1 g L⁻¹ was achieved in the MPPC with a power input of 1.1 Wh g⁻¹ H₂O₂. The high H₂O₂ concentration was a result of the optimum materials selected. The small energy input was largely the result of the 0.5 cm distance between the anode and cathode, which reduced ionic transport losses. However, >50% of operational overpotentials were due to the 4.5-5 pH unit difference between the anode and cathode chambers. The results demonstrate that an MPPC can continuously produce H₂O₂ at high concentration by selecting compatible materials and appropriate operating conditions. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Design and numerical evaluation of full-authority flight control systems for conventional and thruster-augmented helicopters employed in NOE operations

    NASA Technical Reports Server (NTRS)

    Perri, Todd A.; Mckillip, R. M., Jr.; Curtiss, H. C., Jr.

    1987-01-01

    The development and methodology are presented for full-authority implicit model-following and explicit model-following optimal controllers for use on helicopters operating in the Nap-of-the-Earth (NOE) environment. Pole placement, input-output frequency response, and step input response were used to evaluate handling qualities performance. The pilot was equipped with velocity-command inputs. A mathematical/computational trajectory optimization method was employed to evaluate the ability of each controller to fly NOE maneuvers. The method determines the optimal swashplate and thruster input histories from the helicopter's dynamics and the prescribed geometry and desired flying qualities of the maneuver. Three maneuvers were investigated for both the implicit and explicit controllers, with and without auxiliary propulsion installed: pop-up/dash/descent, bob-up at 40 knots, and glideslope. The explicit controller proved superior to the implicit controller in performance and ease of design.

  16. Datasets for supplier selection and order allocation with green criteria, all-unit quantity discounts and varying number of suppliers.

    PubMed

    Hamdan, Sadeque; Cheaitou, Ali

    2017-08-01

    This data article provides detailed optimization input and output datasets and optimization code for the published research work titled "Dynamic green supplier selection and order allocation with quantity discounts and varying supplier availability" (Hamdan and Cheaitou, 2017, In press) [1]. Researchers may use these datasets as a baseline for future comparison and extensive analysis of the green supplier selection and order allocation problem with all-unit quantity discounts and a varying number of suppliers. In particular, the datasets presented in this article allow researchers to generate the exact optimization outputs obtained by the authors of Hamdan and Cheaitou (2017, In press) [1] using the provided optimization code, and then to use them for comparison with the outputs of other techniques or methodologies, such as heuristic approaches. Moreover, this article includes the randomly generated optimization input data and the related outputs that are used as input data for the statistical analysis presented in Hamdan and Cheaitou (2017, In press) [1], in which two different approaches for ranking potential suppliers are compared. This article also provides the time analysis data used in Hamdan and Cheaitou (2017, In press) [1] to study the effect of problem size on computation time, as well as an additional time analysis dataset. The input data for the time study are generated randomly, varying the problem size, and are then used by the optimization problem to obtain the corresponding optimal outputs and the corresponding computation time.

  17. Nitrogen input inventory in the Nooksack-Abbotsford-Sumas ...

    EPA Pesticide Factsheets

    Background/Question/Methods: Nitrogen (N) is an essential biological element, so optimizing N use for food production while minimizing the release of N and co-pollutants to the environment is an important challenge. The Nooksack-lower Fraser Valley, spanning a portion of the western interface of British Columbia, Washington state, and the Lummi Nation and the Nooksack Tribe, supports agriculture, fisheries, diverse wildlife, and vibrant urban areas. Groundwater nitrate contamination affects thousands of households in this region. Fisheries and air quality are also affected including periodic closures of shellfish harvest. To reduce the release of N to the environment, successful approaches are needed that partner all stakeholders with appropriate institutions to integrate science, outreach and management efforts. Our goal is to determine the distribution and quantities of N inventories of the watershed. This work synthesizes publicly available data on N sources including deposition, sewage and septic inputs, fertilizer and manure applications, marine-derived N from salmon, and more. The information on cross-boundary N inputs to the landscape will be coupled with stream monitoring data and existing knowledge about N inputs and exports from the watershed to estimate the N residual and inform N management in the search for the environmentally and economically viable and effective solutions. Results/Conclusions: We will estimate the N inputs into the Nooks

  18. Nitrogen input inventory in the Nooksack-Abbotsford-Sumas ...

    EPA Pesticide Factsheets

    Nitrogen (N) is an essential biological element, so optimizing N use for food production while minimizing the release of N and co-pollutants to the environment is an important challenge. The Nooksack-Abbotsford-Sumas Transboundary (NAS) Region, spanning a portion of the western interface of British Columbia, Washington state, and the Lummi Nation and the Nooksack Tribe, supports agriculture, fisheries, diverse wildlife, and vibrant urban areas. Groundwater nitrate contamination affects thousands of households in this region. Fisheries and air quality are also affected including periodic closures of shellfish harvest. To reduce the release of N to the environment, successful approaches are needed that partner all stakeholders with appropriate institutions to integrate science, outreach and management efforts. Our goal is to determine the distribution and quantities of N inventories of the watershed. This work synthesizes publicly available data on N sources including deposition, sewage and septic inputs, fertilizer and manure applications, marine-derived N from salmon, and more. The information on cross-boundary N inputs to the landscape will be coupled with stream monitoring data and existing knowledge about N inputs and exports from the watershed to estimate the N residual and inform N management in the search for the environmentally and economically viable and effective solutions. We will estimate the N inputs into the NAS region and transfers within

  19. Predicting the Fine Particle Fraction of Dry Powder Inhalers Using Artificial Neural Networks.

    PubMed

    Muddle, Joanna; Kirton, Stewart B; Parisini, Irene; Muddle, Andrew; Murnane, Darragh; Ali, Jogoth; Brown, Marc; Page, Clive; Forbes, Ben

    2017-01-01

    Dry powder inhalers are increasingly popular for delivering drugs to the lungs for the treatment of respiratory diseases, but are complex products with multivariate performance determinants. Heuristic product development guided by in vitro aerosol performance testing is a costly and time-consuming process. This study investigated the feasibility of using artificial neural networks (ANNs) to predict fine particle fraction (FPF) based on formulation and device variables. Thirty-one ANN architectures were evaluated for their ability to predict experimentally determined FPF for a self-consistent dataset containing salmeterol xinafoate and salbutamol sulfate dry powder inhalers (237 experimental observations). Principal component analysis was used to identify inputs that significantly affected FPF. Orthogonal arrays (OAs) were used to design ANN architectures, optimized using the Taguchi method. The primary OA ANN r² values ranged between 0.46 and 0.90, and the secondary OA increased the r² values (0.53-0.93). The optimum ANN (9-4-1 architecture, average r² = 0.92 ± 0.02) included active pharmaceutical ingredient, formulation, and device inputs identified by principal component analysis, which reflected the recognized importance and interdependency of these factors for orally inhaled product performance. The Taguchi method was effective at identifying a successful architecture with the potential for development as a useful generic inhaler ANN model, although this would require much larger datasets and more variable inputs. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
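
    A brief sketch of the input-screening step, using PCA via SVD on standardized variables to see how many components carry most of the variance (the "formulation/device" columns below are synthetic stand-ins, not the study's dataset):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical formulation/device variables: two underlying factors, each
# measured twice with noise, plus one pure-noise column.
n = 120
f1, f2 = rng.normal(size=(2, n))
Xraw = np.column_stack([f1,
                        0.9 * f1 + 0.1 * rng.normal(size=n),
                        f2,
                        0.8 * f2 + 0.2 * rng.normal(size=n),
                        0.05 * rng.normal(size=n)])

# Standardize each column, then PCA via SVD of the centered data matrix.
X = (Xraw - Xraw.mean(0)) / Xraw.std(0)
U, sing, Vt = np.linalg.svd(X, full_matrices=False)
explained = sing**2 / (sing**2).sum()

n_keep = int(np.searchsorted(np.cumsum(explained), 0.95) + 1)
print("variance explained per component:", np.round(explained, 3))
print("components needed for 95% of variance:", n_keep)
```

    The redundant column pairs collapse onto shared components, which is how PCA flags which inputs genuinely add information before they are fed to an ANN.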

  20. Hearing Aids and Music

    PubMed Central

    Chasin, Marshall; Russo, Frank A.

    2004-01-01

    Historically, the primary concern for hearing aid design and fitting is optimization for speech inputs. However, increasingly other types of inputs are being investigated and this is certainly the case for music. Whether the hearing aid wearer is a musician or merely someone who likes to listen to music, the electronic and electro-acoustic parameters described can be optimized for music as well as for speech. That is, a hearing aid optimally set for music can be optimally set for speech, even though the converse is not necessarily true. Similarities and differences between speech and music as inputs to a hearing aid are described. Many of these lead to the specification of a set of optimal electro-acoustic characteristics. Parameters such as the peak input-limiting level, compression issues—both compression ratio and knee-points—and number of channels all can deleteriously affect music perception through hearing aids. In other cases, it is not clear how to set other parameters such as noise reduction and feedback control mechanisms. Regardless of the existence of a “music program,” unless the various electro-acoustic parameters are available in a hearing aid, music fidelity will almost always be less than optimal. There are many unanswered questions and hypotheses in this area. Future research by engineers, researchers, clinicians, and musicians will aid in the clarification of these questions and their ultimate solutions. PMID:15497032

  1. Global agricultural intensification during climate change: a role for genomics.

    PubMed

    Abberton, Michael; Batley, Jacqueline; Bentley, Alison; Bryant, John; Cai, Hongwei; Cockram, James; de Oliveira, Antonio Costa; Cseke, Leland J; Dempewolf, Hannes; De Pace, Ciro; Edwards, David; Gepts, Paul; Greenland, Andy; Hall, Anthony E; Henry, Robert; Hori, Kiyosumi; Howe, Glenn Thomas; Hughes, Stephen; Humphreys, Mike; Lightfoot, David; Marshall, Athole; Mayes, Sean; Nguyen, Henry T; Ogbonnaya, Francis C; Ortiz, Rodomiro; Paterson, Andrew H; Tuberosa, Roberto; Valliyodan, Babu; Varshney, Rajeev K; Yano, Masahiro

    2016-04-01

    Agriculture is now facing the 'perfect storm' of climate change, increasing costs of fertilizer and rising food demands from a larger and wealthier human population. These factors point to a global food deficit unless the efficiency and resilience of crop production is increased. The intensification of agriculture has focused on improving production under optimized conditions, with significant agronomic inputs. Furthermore, the intensive cultivation of a limited number of crops has drastically narrowed the number of plant species humans rely on. A new agricultural paradigm is required, reducing dependence on high inputs and increasing crop diversity, yield stability and environmental resilience. Genomics offers unprecedented opportunities to increase crop yield, quality and stability of production through advanced breeding strategies, enhancing the resilience of major crops to climate variability, and increasing the productivity and range of minor crops to diversify the food supply. Here we review the state of the art of genomic-assisted breeding for the most important staples that feed the world, and how to use and adapt such genomic tools to accelerate development of both major and minor crops with desired traits that enhance adaptation to, or mitigate the effects of climate change. © 2015 The Authors. Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd.

  2. The role of size of input box, location of input box, input method and display size in Chinese handwriting performance and preference on mobile devices.

    PubMed

    Chen, Zhe; Rau, Pei-Luen Patrick

    2017-03-01

    This study presented two experiments on Chinese handwriting performance (time, accuracy, the number of protruding strokes and number of rewritings) and subjective ratings (mental workload, satisfaction, and preference) on mobile devices. Experiment 1 evaluated the effects of size of the input box, input method and display size on Chinese handwriting performance and preference. It was indicated that the optimal input sizes were 30.8 × 30.8 mm, 46.6 × 46.6 mm, 58.9 × 58.9 mm and 84.6 × 84.6 mm for devices with 3.5-inch, 5.5-inch, 7.0-inch and 9.7-inch display sizes, respectively. Experiment 2 proved the significant effects of location of the input box, input method and display size on Chinese handwriting performance and subjective ratings. It was suggested that the optimal location was central regardless of display size and input method. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Multiple objective optimization in reliability demonstration test

    DOE PAGES

    Lu, Lu; Anderson-Cook, Christine Michaela; Li, Mingyang

    2016-10-01

    Reliability demonstration tests are usually performed in product design or validation processes to demonstrate whether a product meets specified requirements on reliability. For binomial demonstration tests, the zero-failure test has been most commonly used due to its simplicity and use of the minimum sample size to achieve an acceptable consumer’s risk level. However, this test can often result in unacceptably high risk for producers as well as a low probability of passing the test even when the product has good reliability. This paper explicitly explores the interrelationship between multiple objectives that are commonly of interest when planning a demonstration test and proposes structured decision-making procedures using a Pareto front approach for selecting an optimal test plan based on simultaneously balancing multiple criteria. Different strategies are suggested for scenarios with different user priorities, and graphical tools are developed to help quantify the trade-offs between choices and to facilitate informed decision making. As a result, potential impacts of some subjective user inputs on the final decision are studied to offer insights and useful guidance for general applications.

  4. Passive states as optimal inputs for single-jump lossy quantum channels

    NASA Astrophysics Data System (ADS)

    De Palma, Giacomo; Mari, Andrea; Lloyd, Seth; Giovannetti, Vittorio

    2016-06-01

    The passive states of a quantum system minimize the average energy among all the states with a given spectrum. We prove that passive states are the optimal inputs of single-jump lossy quantum channels. These channels arise from a weak interaction of the quantum system of interest with a large Markovian bath in its ground state, such that the interaction Hamiltonian couples only consecutive energy eigenstates of the system. We prove that the output generated by any input state ρ majorizes the output generated by the passive input state ρ0 with the same spectrum of ρ . Then, the output generated by ρ can be obtained applying a random unitary operation to the output generated by ρ0. This is an extension of De Palma et al. [IEEE Trans. Inf. Theory 62, 2895 (2016)], 10.1109/TIT.2016.2547426, where the same result is proved for one-mode bosonic Gaussian channels. We also prove that for finite temperature this optimality property can fail already in a two-level system, where the best input is a coherent superposition of the two energy eigenstates.
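
    The defining property of a passive state, that populations decrease as energy increases so the average energy is minimized among all states with the same spectrum, can be checked numerically by brute force (the energy ladder and spectrum below are arbitrary illustrative values):

```python
import numpy as np
from itertools import permutations

E = np.array([0.0, 1.0, 2.0, 3.0])      # energy eigenvalues (assumed ladder)
spec = np.array([0.1, 0.4, 0.2, 0.3])   # a given spectrum, sums to 1

# Passive arrangement: pair the largest populations with the lowest energies.
passive = np.sort(spec)[::-1]
E_passive = passive @ E

# Brute-force check over every arrangement of the same spectrum.
energies = [np.array(p) @ E for p in permutations(spec)]
print(E_passive, min(energies))
```

    The equality of the two printed values is just the rearrangement inequality; the paper's result is the much stronger statement that this ordering is also optimal as the input of a single-jump lossy channel, in the sense of majorization of the output.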

  5. The efficiency and budgeting of public hospitals: case study of iran.

    PubMed

    Yusefzadeh, Hasan; Ghaderi, Hossein; Bagherzade, Rafat; Barouni, Mohsen

    2013-05-01

    Hospitals are the most costly and important components of any health care system, so it is important to know their economic value, pay attention to their efficiency, and consider the factors affecting it. The aim of this study was to assess the technical, scale, and economic efficiency of hospitals in the West Azerbaijan province of Iran, using Data Envelopment Analysis (DEA) to propose a model for operational budgeting. This descriptive-analytical study was conducted in 2009 with three inputs and two outputs. DEAP 2.1 software was used for data analysis. Slack and radial movements and input surpluses were calculated for the selected hospitals. Finally, a model was proposed for performance-based budgeting of hospitals and health sectors using the DEA technique. The average scores for technical efficiency, pure technical efficiency (managerial efficiency) and scale efficiency were 0.584, 0.782 and 0.771, respectively. In other words, the capacity for efficiency improvement in these hospitals, without any increase in costs and with the same amount of inputs, was about 41.5%. Only four of the hospitals operated at the maximum level of technical efficiency; surplus production factors were evident in the others. Reducing surplus production factors through comprehensive planning based on the results of the Data Envelopment Analysis can play a major role in cost reduction for hospitals and health sectors. In hospitals with a technical efficiency score of less than one, the original and projected values of inputs differed, resulting in a surplus; these hospitals should reduce their input values to achieve maximum efficiency and optimal performance. The results of this method give hospitals a benchmark for making decisions about resource allocation, linking budgets to performance results, and controlling and improving hospital performance.
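
    An input-oriented, constant-returns DEA model of the kind computed by DEAP can be sketched as one linear program per hospital (SciPy is assumed to be available; the three-input, two-output data below are illustrative, not the study's):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical hospital data: 3 inputs (beds, staff, budget) and 2 outputs
# (admissions, outpatient visits) for 5 units; columns are units.
X = np.array([[100, 200,  5], [120, 180,  6], [ 80, 150,  4],
              [150, 300, 10], [ 90, 160,  5]], float).T   # inputs  (m x n)
Y = np.array([[500, 300], [450, 350], [480, 320],
              [400, 250], [470, 330]], float).T           # outputs (s x n)
m, n = X.shape
s = Y.shape[0]

def ccr_efficiency(o):
    """Input-oriented CCR score of unit o: minimize theta such that a
    composite peer uses at most theta * inputs_o and yields >= outputs_o."""
    c = np.concatenate([[1.0], np.zeros(n)])              # minimize theta
    A_in = np.hstack([-X[:, [o]], X])                     # X lam <= theta x_o
    A_out = np.hstack([np.zeros((s, 1)), -Y])             # Y lam >= y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, o]])
    bounds = [(0, None)] * (1 + n)
    return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).fun

scores = [ccr_efficiency(o) for o in range(n)]
print(np.round(scores, 3))   # 1.0 marks units on the efficient frontier
```

    For an inefficient unit, 1 - score is exactly the radial input reduction the abstract refers to, and the optimal lambda weights identify its benchmark peers.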

  6. Data-driven robust approximate optimal tracking control for unknown general nonlinear systems using adaptive dynamic programming method.

    PubMed

    Zhang, Huaguang; Cui, Lili; Zhang, Xin; Luo, Yanhong

    2011-12-01

    In this paper, a novel data-driven robust approximate optimal tracking control scheme is proposed for unknown general nonlinear systems by using the adaptive dynamic programming (ADP) method. In the design of the controller, only available input-output data is required instead of known system dynamics. A data-driven model is established by a recurrent neural network (NN) to reconstruct the unknown system dynamics using available input-output data. By adding a novel adjustable term related to the modeling error, the resultant modeling error is first guaranteed to converge to zero. Then, based on the obtained data-driven model, the ADP method is utilized to design the approximate optimal tracking controller, which consists of the steady-state controller and the optimal feedback controller. Further, a robustifying term is developed to compensate for the NN approximation errors introduced by implementing the ADP method. Based on Lyapunov approach, stability analysis of the closed-loop system is performed to show that the proposed controller guarantees the system state asymptotically tracking the desired trajectory. Additionally, the obtained control input is proven to be close to the optimal control input within a small bound. Finally, two numerical examples are used to demonstrate the effectiveness of the proposed control scheme.

  7. Optimization of light quality from color mixing light-emitting diode systems for general lighting

    NASA Astrophysics Data System (ADS)

    Thorseth, Anders

    2012-03-01

    Given the problem of metamerism inherent in color mixing with light-emitting diode (LED) systems of more than three distinct colors, a method has been developed for optimizing the spectral output of a multicolor LED system with regard to standardized light quality parameters. The composite spectral power distribution from the LEDs is simulated using spectral radiometric measurements of single commercially available LEDs at varying input power, to account for the efficiency droop and other non-linear effects in electrical power vs. light output. The method uses the electrical input powers as input parameters in a randomized steepest descent optimization. The resulting spectral power distributions are evaluated with regard to light quality using the standard characteristics: CIE color rendering index, correlated color temperature and chromaticity distance. The results indicate Pareto optimal boundaries for each system, mapping the capabilities of the simulated lighting systems with regard to the light quality characteristics.
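    A toy sketch of the randomized steepest-descent idea described above. The three channel "spectra" are coarse 3-bin vectors invented for illustration (real spectral power distributions are measured radiometrically), the mix is a power-weighted sum, and the descent minimizes the squared distance to a target mix rather than the CRI/CCT criteria used in the paper.

    ```python
    import random

    # Hypothetical 3-bin channel spectra and target mix (illustration only).
    CHANNELS = [(1.0, 0.2, 0.0),   # "red" channel
                (0.1, 1.0, 0.1),   # "green" channel
                (0.0, 0.3, 1.0)]   # "blue" channel
    TARGET = (0.6, 0.9, 0.5)

    def mix(powers):
        # Composite spectrum is a power-weighted sum of the channel spectra.
        return tuple(sum(p * ch[i] for p, ch in zip(powers, CHANNELS))
                     for i in range(3))

    def error(powers):
        return sum((m - t) ** 2 for m, t in zip(mix(powers), TARGET))

    def descend(powers, step=0.05, iters=500, seed=0):
        rng = random.Random(seed)
        powers = list(powers)
        for _ in range(iters):
            i = rng.randrange(len(powers))             # randomized coordinate pick
            for delta in (step, -step):
                trial = list(powers)
                trial[i] = max(0.0, trial[i] + delta)  # input power cannot go negative
                if error(trial) < error(powers):
                    powers = trial
                    break
        return powers

    best = descend([0.0, 0.0, 0.0])
    ```

    With these toy numbers the residual error drops far below the all-off starting point; a real implementation would score candidates with light quality metrics instead of a plain spectral distance.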

  8. Optimal allocation of testing resources for statistical simulations

    NASA Astrophysics Data System (ADS)

    Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick

    2015-07-01

    Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data on the input variables, to better characterize their probability distributions, can reduce the variance of statistical estimates. The proposed methodology determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses the multivariate t-distribution and the Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. The method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable on the output function and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
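    A deterministic, simplified sketch of the allocation idea: under a first-order approximation the variance of an output mean is sum_i s_i^2 * sigma_i^2 / n_i, so a greedy loop can spend the experiment budget where the variance reduction per unit cost is largest. The paper's actual method samples t/Wishart posteriors and optimizes with a particle swarm; the sensitivities, variances and costs below are illustrative.

    ```python
    # Greedy budget allocation under a first-order variance model.
    # Each extra experiment on variable i shrinks its variance term from
    # s_i^2*sigma_i^2/n_i to s_i^2*sigma_i^2/(n_i + 1).

    def allocate(sens, sigma, n0, cost, budget):
        n = list(n0)
        spent = 0.0
        while True:
            best_i, best_gain = None, 0.0
            for i in range(len(n)):
                if spent + cost[i] > budget:
                    continue
                term = sens[i] ** 2 * sigma[i] ** 2
                gain = (term / n[i] - term / (n[i] + 1)) / cost[i]
                if gain > best_gain:
                    best_i, best_gain = i, gain
            if best_i is None:       # budget exhausted (or no gain left)
                return n
            n[best_i] += 1
            spent += cost[best_i]

    counts = allocate(sens=[2.0, 0.5], sigma=[1.0, 1.0], n0=[5, 5],
                      cost=[1.0, 1.0], budget=6)
    # The influential variable (sens = 2.0) absorbs the whole budget here.
    ```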

  9. Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III

    2004-01-01

    A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through the use of both first- and second-order sensitivity derivatives. For each robust optimization, the effects of increasing both the input standard deviations and the target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.
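    The first-order statistical moment method mentioned above propagates means and variances through an output function: E[f] is approximated by f(mu), and Var[f] by the sum of (df/dx_i)^2 * sigma_i^2 over the independent inputs. A sketch with finite-difference gradients, using a simple analytic stand-in for the CFD output:

    ```python
    # First-order moment propagation with central finite-difference gradients.
    # f is a stand-in analytic function (the real output would come from a
    # CFD code); mu and sigma are the input means and standard deviations.

    def propagate(f, mu, sigma, h=1e-6):
        mean = f(mu)                            # E[f] ~ f(mu)
        var = 0.0
        for i in range(len(mu)):
            hi = list(mu); hi[i] += h
            lo = list(mu); lo[i] -= h
            grad = (f(hi) - f(lo)) / (2 * h)    # central difference df/dx_i
            var += grad ** 2 * sigma[i] ** 2    # independent inputs: terms add
        return mean, var

    # Example: f(x, y) = x*y with x ~ N(3, 0.1^2) and y ~ N(2, 0.2^2).
    mean, var = propagate(lambda v: v[0] * v[1], [3.0, 2.0], [0.1, 0.2])
    ```

    For f = x*y the first-order result is mean of about 6 and variance of about (2*0.1)^2 + (3*0.2)^2 = 0.40, which the finite-difference version reproduces.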

  10. Genetic programming assisted stochastic optimization strategies for optimization of glucose to gluconic acid fermentation.

    PubMed

    Cheema, Jitender Jit Singh; Sankpal, Narendra V; Tambe, Sanjeev S; Kulkarni, Bhaskar D

    2002-01-01

    This article presents two hybrid strategies for the modeling and optimization of the glucose to gluconic acid batch bioprocess. In the hybrid approaches, first a novel artificial intelligence formalism, namely, genetic programming (GP), is used to develop a process model solely from the historic process input-output data. In the next step, the input space of the GP-based model, representing process operating conditions, is optimized using two stochastic optimization (SO) formalisms, viz., genetic algorithms (GAs) and simultaneous perturbation stochastic approximation (SPSA). These SO formalisms possess certain unique advantages over the commonly used gradient-based optimization techniques. The principal advantage of the GP-GA and GP-SPSA hybrid techniques is that process modeling and optimization can be performed exclusively from the process input-output data without invoking the detailed knowledge of the process phenomenology. The GP-GA and GP-SPSA techniques have been employed for modeling and optimization of the glucose to gluconic acid bioprocess, and the optimized process operating conditions obtained thereby have been compared with those obtained using two other hybrid modeling-optimization paradigms integrating artificial neural networks (ANNs) and GA/SPSA formalisms. Finally, the overall optimized operating conditions given by the GP-GA method, when verified experimentally, resulted in a significant improvement in the gluconic acid yield. The hybrid strategies presented here are generic in nature and can be employed for modeling and optimization of a wide variety of batch and continuous bioprocesses.
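    A minimal genetic-algorithm sketch of the "optimize the model's input space" step. The surrogate below is a made-up stand-in for a GP-evolved process model (the real model is learned from historic input-output data), and the operating variables and their ranges are illustrative, not from the paper.

    ```python
    import random

    def surrogate_yield(temp, ph):
        # Pretend process model: yield peaks at temp = 30, pH = 6 (invented).
        return 100.0 - (temp - 30.0) ** 2 - 4.0 * (ph - 6.0) ** 2

    def ga(pop_size=30, gens=60, seed=1):
        rng = random.Random(seed)
        # Initial population of candidate operating conditions.
        pop = [(rng.uniform(20, 40), rng.uniform(4, 8)) for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=lambda ind: surrogate_yield(*ind), reverse=True)
            elite = pop[: pop_size // 2]                 # truncation selection
            children = []
            while len(elite) + len(children) < pop_size:
                (t1, p1), (t2, p2) = rng.sample(elite, 2)
                children.append(((t1 + t2) / 2 + rng.gauss(0, 0.5),   # blend crossover
                                 (p1 + p2) / 2 + rng.gauss(0, 0.1)))  # + mutation
            pop = elite + children
        return max(pop, key=lambda ind: surrogate_yield(*ind))

    best_temp, best_ph = ga()   # should approach the surrogate optimum (30, 6)
    ```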

  11. Toward a More Efficient Implementation of Antifibrillation Pacing

    PubMed Central

    Wilson, Dan; Moehlis, Jeff

    2016-01-01

    We devise a methodology to determine an optimal pattern of inputs to synchronize firing patterns of cardiac cells which only requires the ability to measure action potential durations in individual cells. In numerical bidomain simulations, the resulting synchronizing inputs are shown to terminate spiral waves with a higher probability than comparable inputs that do not synchronize the cells as strongly. These results suggest that designing stimuli which promote synchronization in cardiac tissue could improve the success rate of defibrillation, and point towards novel strategies for optimizing antifibrillation pacing. PMID:27391010

  12. Stochastic multi-objective auto-optimization for resource allocation decision-making in fixed-input health systems.

    PubMed

    Bastian, Nathaniel D; Ekin, Tahir; Kang, Hyojung; Griffin, Paul M; Fulton, Lawrence V; Grannan, Benjamin C

    2017-06-01

    The management of hospitals within fixed-input health systems such as the U.S. Military Health System (MHS) can be challenging due to the large number of hospitals, as well as the uncertainty in input resources and achievable outputs. This paper introduces a stochastic multi-objective auto-optimization model (SMAOM) for resource allocation decision-making in fixed-input health systems. The model can automatically identify where to re-allocate system input resources at the hospital level in order to optimize overall system performance, while considering uncertainty in the model parameters. The model is applied to 128 hospitals in the three services (Air Force, Army, and Navy) in the MHS using hospital-level data from 2009-2013. The results are compared to the traditional input-oriented variable returns-to-scale Data Envelopment Analysis (DEA) model. The application of SMAOM to the MHS increases the expected system-wide technical efficiency by 18% over the DEA model while also accounting for uncertainty of health system inputs and outputs. The developed method is useful for decision-makers in the Defense Health Agency (DHA), who have a strategic level objective of integrating clinical and business processes through better sharing of resources across the MHS and through system-wide standardization across the services. It is also less sensitive to data outliers or sampling errors than traditional DEA methods.

  13. Depth optimal sorting networks resistant to k passive faults

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piotrow, M.

    In this paper, we study the problem of constructing a sorting network that is tolerant to faults and whose running time (i.e. depth) is as small as possible. We consider the scenario of worst-case comparator faults and follow the model of passive comparator failure proposed by Yao and Yao, in which a faulty comparator outputs its inputs directly, without comparison. Our main result is the first construction of an N-input, k-fault-tolerant sorting network of asymptotically optimal depth Θ(log N + k). That improves over the recent result of Leighton and Ma, whose network is of depth O(log N + k log log N / log k). Actually, we present a fault-tolerant correction network that can be added after any N-input sorting network to correct its output in the presence of at most k faulty comparators. Since the depth of the network is O(log N + k) and the constants hidden behind the "O" notation are not big, the construction can be of practical use. Developing the techniques necessary to show the main result, we construct a fault-tolerant network for the insertion problem. As a by-product, we get an N-input, O(log N)-depth INSERT network that is tolerant to random faults, thereby answering a question posed by Ma in his PhD thesis. The results are based on a new notion of constant-delay comparator networks, that is, networks in which each register is used (compared) only in a period of time of constant length. Copies of such networks can be put one after another with only a constant increase in depth per copy.
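    The passive-fault model can be made concrete in a few lines: a network is a sequence of (i, j) comparators, a healthy comparator writes (min, max) back to its wires, and a faulty one passes its inputs through unchanged. The network below is the standard odd-even transposition sorter for N = 4 (depth 4), used only to illustrate the model; it is not the paper's fault-tolerant construction.

    ```python
    # Comparator network with passive faults (Yao-and-Yao model).
    def run(network, values, faulty=frozenset()):
        v = list(values)
        for k, (i, j) in enumerate(network):
            if k not in faulty:                  # a passive fault skips the compare
                v[i], v[j] = min(v[i], v[j]), max(v[i], v[j])
        return v

    # Odd-even transposition network for 4 inputs, grouped by parallel layer.
    NET = [(0, 1), (2, 3),   # layer 1
           (1, 2),           # layer 2
           (0, 1), (2, 3),   # layer 3
           (1, 2)]           # layer 4

    assert run(NET, [2, 4, 1, 3]) == [1, 2, 3, 4]              # fault-free: sorted
    assert run(NET, [2, 4, 1, 3], faulty={2}) != [1, 2, 3, 4]  # one passive fault can break sorting
    ```

    A k-fault-tolerant design must sort correctly for every choice of up to k faulty comparators, which this plain network does not.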

  14. Numerically accurate computational techniques for optimal estimator analyses of multi-parameter models

    NASA Astrophysics Data System (ADS)

    Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.

    2018-05-01

    Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. 
The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
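    The decomposition described above can be demonstrated on a toy discrete dataset: the optimal estimator is the conditional mean E[y|x], the irreducible error is the mean squared deviation of y around it, and a candidate model's functional error is its total error minus that floor. The data and model below are invented; the paper evaluates the conditional mean with ANNs and regression splines because exact grouping is infeasible for continuous multi-parameter inputs.

    ```python
    from collections import defaultdict

    samples = [(0, 1.0), (0, 3.0), (1, 4.0), (1, 6.0), (2, 10.0)]  # (x, y) pairs

    # Optimal estimator: conditional mean E[y|x] by exact grouping on x.
    groups = defaultdict(list)
    for x, y in samples:
        groups[x].append(y)
    cond_mean = {x: sum(ys) / len(ys) for x, ys in groups.items()}

    # Irreducible error: no model using only x can do better than this.
    irreducible = sum((y - cond_mean[x]) ** 2 for x, y in samples) / len(samples)

    def model(x):
        # Some candidate closure model of y given x (invented functional form).
        return 4.0 * x + 1.0

    total = sum((y - model(x)) ** 2 for x, y in samples) / len(samples)
    functional = total - irreducible   # error due to the chosen functional form
    ```

    Here the irreducible error is 0.8 and the candidate model's total error is 1.4, so 0.6 of its error is attributable to the functional form rather than to the choice of input parameter.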

  15. Design and optimization of input shapers for liquid slosh suppression

    NASA Astrophysics Data System (ADS)

    Aboel-Hassan, Ameen; Arafa, Mustafa; Nassef, Ashraf

    2009-02-01

    The need for fast maneuvering and accurate positioning of flexible structures poses a control challenge. The inherent flexibility of these lightly damped systems creates large undesirable residual vibrations in response to rapid excitations. Several control approaches have been proposed to tackle this class of problems, of which the input shaping technique is appealing in many aspects. While input shaping has been widely investigated for attenuating residual vibrations in flexible structures, less attention has been paid to extending its applicability to further applications. The aim of this work is to develop a methodology for applying input shaping techniques to suppress sloshing effects in open moving containers, to facilitate safe and fast point-to-point movements. The liquid behavior is modeled using finite element analysis. The input shaper parameters are optimized to find the commands that result in minimum residual vibration. Other objectives, such as improved robustness, and motion constraints, such as deflection limiting, are also addressed in the optimization scheme. Numerical results are verified on an experimental setup consisting of a small motor-driven water tank undergoing rectilinear motion, while measuring both the tank motion and the free-surface displacement of the water. The results obtained suggest that input shaping is an effective method for liquid slosh suppression.
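    For context, the classic zero-vibration (ZV) shaper that such optimizations start from places two impulses, one at t = 0 and one half a damped period later, sized so that the mode's residual vibration cancels. A sketch follows; the paper optimizes shaper parameters numerically against a finite element slosh model rather than using this closed form, and the wn and zeta values below are arbitrary.

    ```python
    import math

    def zv_shaper(wn, zeta):
        """Two-impulse ZV shaper for a mode with natural frequency wn [rad/s]
        and damping ratio zeta; convolve any command with these impulses."""
        wd = wn * math.sqrt(1.0 - zeta ** 2)         # damped natural frequency
        K = math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta ** 2))
        amps = (1.0 / (1.0 + K), K / (1.0 + K))      # impulse amplitudes
        times = (0.0, math.pi / wd)                  # second impulse: half period
        return amps, times

    amps, times = zv_shaper(wn=2.0, zeta=0.05)
    assert abs(sum(amps) - 1.0) < 1e-12              # unity gain: setpoint preserved
    ```

    The price of shaping is the added delay times[1]; robust variants (ZVD, EI) trade more delay for insensitivity to modeling error, which is one of the objectives the paper folds into its optimization.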

  16. Photoenhanced anaerobic digestion of organic acids

    DOEpatents

    Weaver, Paul F.

    1990-01-01

    A process is described for the rapid conversion of organic acids and alcohols in anaerobic digesters into hydrogen and carbon dioxide, the optimal precursor substrates for the production of methane. The process includes addition of photosynthetic bacteria to the digester and exposure of the bacteria to radiant energy (e.g., solar energy). The process also increases the pH stability of the digester to prevent failure of the digester. Preferred substrates for the photosynthetic bacteria are the organic acid and alcohol waste products of fermentative bacteria. In mixed culture with methanogenic bacteria, or in defined co-culture with non-aceticlastic methanogenic bacteria, photosynthetic bacteria are capable of facilitating the conversion of organic acids and alcohols into methane with low levels of light energy input.

  17. Hadron production experiments

    NASA Astrophysics Data System (ADS)

    Popov, Boris A.

    2013-02-01

    The HARP and NA61/SHINE hadroproduction experiments, as well as their implications for neutrino physics, are discussed. HARP measurements have already been used for predictions of neutrino beams in the K2K and MiniBooNE/SciBooNE experiments and are also being used to improve atmospheric neutrino flux predictions and to help in the optimization of neutrino factory and super-beam designs. First measurements released recently by the NA61/SHINE experiment are of significant importance for a precise prediction of the J-PARC neutrino beam used for the T2K experiment. Both the HARP and NA61/SHINE experiments also provide a large amount of input for validation and tuning of hadron production models in Monte-Carlo generators.

  18. Engineering of mechanical manufacturing from the cradle to cradle

    NASA Astrophysics Data System (ADS)

    Peralta, M. E.; Aguayo, F.; Lama, J. R.

    2012-04-01

    The sustainability of manufacturing processes lies in industrial planning and productive activity. Industrial plants are characterized by the management of resources (inputs and outputs) and of processing and conversion processes, which are usually organized in a linear system. Good planning will optimize manufacturing and promote the quality of the industrial system. Cradle to Cradle is a new paradigm for engineering and sustainable manufacturing that integrates projects (industrial parks, manufacturing plants, systems and products) in a framework consistent with the environment, adapted to society and technology, and economically viable. To carry it out, we implement this paradigm in the MGE2 (Genomic Model of Eco-innovation and Eco-design), a methodology for designing and developing products and manufacturing systems with an approach from cradle to cradle.

  19. Distributed Optimal Consensus Control for Multiagent Systems With Input Delay.

    PubMed

    Zhang, Huaipin; Yue, Dong; Zhao, Wei; Hu, Songlin; Dou, Chunxia

    2018-06-01

    This paper addresses the problem of distributed optimal consensus control for a continuous-time heterogeneous linear multiagent system subject to time-varying input delays. First, by discretization and model transformation, the continuous-time input-delayed system is converted into a discrete-time delay-free system. Two delicate performance index functions are defined for these two systems. It is shown that the performance index functions are equivalent and the optimal consensus control problem of the input-delayed system can be cast into that of the delay-free system. Second, by virtue of the Hamilton-Jacobi-Bellman (HJB) equations, an optimal control policy for each agent is designed based on the delay-free system and a novel value iteration algorithm is proposed to learn the solutions to the HJB equations online. The proposed adaptive dynamic programming algorithm is implemented on the basis of a critic-action neural network (NN) structure. Third, it is proved that local consensus errors of the two systems and weight estimation errors of the critic-action NNs are uniformly ultimately bounded while the approximated control policies converge to their target values. Finally, two simulation examples are presented to illustrate the effectiveness of the developed method.
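    The model-transformation step can be sketched for the simplest case, a scalar discrete-time plant x[k+1] = a*x[k] + b*u[k-d]: stacking the d in-flight inputs into the state yields an equivalent delay-free system. The paper treats continuous-time heterogeneous multiagent systems; a, b, d and the input sequence here are illustrative.

    ```python
    def simulate_delayed(a, b, d, x0, u_seq):
        """Direct simulation of x[k+1] = a*x[k] + b*u[k-d] (zero past inputs)."""
        x, hist = x0, [0.0] * d          # hist[0] holds the oldest input u[k-d]
        xs = [x]
        for u in u_seq:
            x = a * x + b * hist[0]
            hist = hist[1:] + [u]
            xs.append(x)
        return xs

    def simulate_augmented(a, b, d, x0, u_seq):
        """Delay-free form on the augmented state z = (x, u[k-d], ..., u[k-1])."""
        z = [x0] + [0.0] * d
        xs = [z[0]]
        for u in u_seq:
            z = [a * z[0] + b * z[1]] + z[2:] + [u]   # no delay in this update
            xs.append(z[0])
        return xs

    u = [1.0, 0.5, -1.0, 2.0, 0.0, 0.0]
    assert simulate_delayed(0.9, 1.0, 2, 0.0, u) == simulate_augmented(0.9, 1.0, 2, 0.0, u)
    ```

    Since the augmented update contains no delay, standard optimal-control machinery (here, the HJB-based value iteration) applies to it directly.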

  20. Test-Retest Repeatability of Myocardial Blood Flow Measurements using Rubidium-82 Positron Emission Tomography

    NASA Astrophysics Data System (ADS)

    Efseaff, Matthew

    Rubidium-82 positron emission tomography (PET) imaging has been proposed for routine myocardial blood flow (MBF) quantification. Few studies have investigated the test-retest repeatability of this method. Same-day repeatability of rest MBF imaging was optimized with a highly automated analysis program using image-derived input functions and a dual spillover correction (SOC). The effects of heterogeneous tracer infusion profiles and subject hemodynamics on test-retest repeatability were investigated at rest and during hyperemic stress. Factors affecting rest MBF repeatability included gender, suspected coronary artery disease, and dual SOC (p < 0.001). The best repeatability coefficient for same-day rest MBF was 0.20 mL/min/g, obtained using a six-minute scan time, iterative reconstruction, dual SOC, resting rate-pressure-product (RPP) adjustment, and a left atrium image-derived input function. The serial study repeatabilities of the optimized protocol in subjects with homogeneous RPPs and tracer infusion profiles were 0.19 and 0.53 mL/min/g at rest and stress, respectively, and 0.95 for stress/rest myocardial flow reserve (MFR). Subjects with heterogeneous tracer infusion profiles and hemodynamic conditions had significantly less repeatable MBF measurements at rest, stress, and stress/rest flow reserve (p < 0.05).
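    As background for the repeatability figures quoted above, one common definition of the test-retest repeatability coefficient (Bland-Altman style) is 1.96 times the standard deviation of the paired differences, the bound within which roughly 95% of repeat differences are expected to fall. The MBF pairs below are invented for illustration, not study data, and the thesis may use a variant definition.

    ```python
    import math

    def repeatability_coefficient(test, retest):
        """RC = 1.96 * sample SD of the paired test-retest differences."""
        d = [a - b for a, b in zip(test, retest)]
        mean_d = sum(d) / len(d)
        sd = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (len(d) - 1))
        return 1.96 * sd

    # Hypothetical same-day rest MBF pairs in mL/min/g (illustration only).
    rc = repeatability_coefficient([0.85, 0.92, 1.10, 0.78],
                                   [0.80, 0.95, 1.05, 0.82])
    ```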

  1. A study of the talent training project management for semiconductor industry in Taiwan: the application of a hybrid data envelopment analysis approach.

    PubMed

    Kao, Ling-Jing; Chiu, Shu-Yu; Ko, Hsien-Tang

    2014-01-01

    The purpose of this study is to evaluate training institution performance and to improve the management of the Manpower Training Project (MTP) administered by the Semiconductor Institute in Taiwan. Much literature assesses the efficiency of internal training programs initiated by firms, but little research has examined the efficiency of an external training program led by government. In this study, a hybrid solution of ICA-DEA and ICA-MPI is developed for measuring the efficiency and the productivity growth of each training institution over the study period. The technical efficiency change, the technological change, pure technical efficiency change, scale efficiency change, and the total factor productivity change were evaluated according to five inputs and two outputs. According to the results of the study, the training institutions can be classified by their efficiency successfully and guidelines for the optimal level of input resources can be obtained for each inefficient training institution. The Semiconductor Institute in Taiwan can thereby allocate its budget more appropriately and establish withdrawal mechanisms for inefficient training institutions.

  2. Phase-matching of attosecond XUV supercontinuum

    NASA Astrophysics Data System (ADS)

    Gilbertson, Steve; Mashiko, Hiroki; Li, Chengquan; Khan, Sabih; Shakya, Mahendra; Moon, Eric; Chang, Zenghu

    2008-05-01

    Adding a weak second harmonic field to an ellipticity-dependent polarization gating field allowed for the production of XUV supercontinua from longer (˜10 fs) input pulses in argon. The spectra support 200-as single isolated pulses. This technique, dubbed double optical gating (DOG), demonstrated a large enhancement of the harmonic yield as compared with polarization gating. These results can be attributed to the reduced depletion of the ground state of the target from the leading edge of the pulse and the increased intensity inside the polarization gate width. Through optimization of the harmonic generation process under phase matching conditions, we were able to further increase the harmonic flux. The parameters included the target gas pressure, laser focus position, input pulse duration, and polarization gate width. By varying the CE phase of the pulse, we were able to verify that the results were indeed from DOG, owing to the unique 2π dependence of the harmonic spectrum on that phase. We were able to extend our results to neon, whose higher ionization potential allowed an extension of the harmonic cutoff for the production of even shorter pulses.

  3. A Study of the Talent Training Project Management for Semiconductor Industry in Taiwan: The Application of a Hybrid Data Envelopment Analysis Approach

    PubMed Central

    Kao, Ling-Jing; Chiu, Shu-Yu; Ko, Hsien-Tang

    2014-01-01

    The purpose of this study is to evaluate training institution performance and to improve the management of the Manpower Training Project (MTP) administered by the Semiconductor Institute in Taiwan. Much literature assesses the efficiency of internal training programs initiated by firms, but little research has examined the efficiency of an external training program led by government. In this study, a hybrid solution of ICA-DEA and ICA-MPI is developed for measuring the efficiency and the productivity growth of each training institution over the study period. The technical efficiency change, the technological change, pure technical efficiency change, scale efficiency change, and the total factor productivity change were evaluated according to five inputs and two outputs. According to the results of the study, the training institutions can be classified by their efficiency successfully and guidelines for the optimal level of input resources can be obtained for each inefficient training institution. The Semiconductor Institute in Taiwan can thereby allocate its budget more appropriately and establish withdrawal mechanisms for inefficient training institutions. PMID:24977192

  4. Quality by design case study 1: Design of 5-fluorouracil loaded lipid nanoparticles by the W/O/W double emulsion - Solvent evaporation method.

    PubMed

    Amasya, Gulin; Badilli, Ulya; Aksu, Buket; Tarimci, Nilufer

    2016-03-10

    Quality by Design (QbD) is a systematic approach involving the design and development of all production processes to achieve a final product with a predetermined quality; one works within a design space that determines the critical formulation and process parameters, so that verification of the quality of the final product is no longer necessary. In the current study, the QbD approach was used in the preparation of lipid nanoparticle formulations to improve skin penetration of 5-Fluorouracil, a widely used compound for treating non-melanoma skin cancer. 5-Fluorouracil-loaded lipid nanoparticles were prepared by the W/O/W double emulsion - solvent evaporation method. Artificial neural network software was used to evaluate the data obtained from the lipid nanoparticle formulations, to establish the design space, and to optimize the formulations. Two different artificial neural network models were developed. The limit values of the design space of the inputs and outputs obtained by both models were found to be within the knowledge space. The optimal formulations recommended by the models were prepared and the critical quality attributes belonging to those formulations were assigned. The experimental results remained within the design space limit values. Consequently, optimal formulations with the critical quality attributes determined to achieve the Quality Target Product Profile were successfully obtained within the design space by following the QbD steps. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Maximally informative pairwise interactions in networks

    PubMed Central

    Fitzgerald, Jeffrey D.; Sharpee, Tatyana O.

    2010-01-01

    Several types of biological networks have recently been shown to be accurately described by a maximum entropy model with pairwise interactions, also known as the Ising model. Here we present an approach for finding the optimal mappings between input signals and network states that allow the network to convey the maximal information about input signals drawn from a given distribution. This mapping also produces a set of linear equations for calculating the optimal Ising-model coupling constants, as well as geometric properties that indicate the applicability of the pairwise Ising model. We show that the optimal pairwise interactions are on average zero for Gaussian and uniformly distributed inputs, whereas they are nonzero for inputs approximating those in natural environments. These nonzero network interactions are predicted to increase in strength as the noise in the response functions of each network node increases. This approach also suggests ways for how interactions with unmeasured parts of the network can be inferred from the parameters of response functions for the measured network nodes. PMID:19905153

  6. Energy and nutrient cycling in pig production systems

    NASA Astrophysics Data System (ADS)

    Lammers, Peter J.

    United States pig production is centered in Iowa and is a major influence on the economic and ecological condition of that community. A pig production system includes buildings, equipment, production of feed ingredients, feed processing, and nutrient management. Although feed is the largest single input into a pig production system, nearly 30% of the non-solar energy use of a conventional pig production system (mechanically ventilated buildings with liquid manure handling) is associated with constructing and operating the pig facility. Using bedded hoop barns for gestating sows and grow-finish pigs reduces construction resource use and construction costs of pig production systems. Hoop-based systems also require approximately 40% less non-solar energy to operate than the conventional system, although hoop barn-based systems may require more feed. The total non-solar energy input associated with one 136 kg pig produced in a conventional farrow-to-finish system in Iowa, fed a typical corn-soybean meal diet that includes synthetic lysine and exogenous phytase, is 967.9 MJ. Consuming this non-solar energy results in emissions of 79.8 kg CO2 equivalents. Alternatively, producing the same pig in a system using bedded hoop barns for gestating sows and grow-finish pigs requires 939.8 MJ/pig and results in emission of 70.2 kg CO2 equivalents, reductions of 3 and 12% respectively. Hoop barn-based swine production systems can be managed to use similar or fewer resources than conventional confinement systems. As we strive to optimally allocate non-solar energy reserves and limited resources, support for examining and improving alternative systems is warranted.
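    The quoted percentage reductions follow directly from the abstract's own numbers and can be checked in two lines:

    ```python
    # Hoop barn-based vs conventional production, per 136 kg pig (from the abstract):
    # 939.8 vs 967.9 MJ of non-solar energy, and 70.2 vs 79.8 kg CO2-equivalents.
    energy_reduction = (967.9 - 939.8) / 967.9     # fractional energy saving
    emission_reduction = (79.8 - 70.2) / 79.8      # fractional emission saving
    ```

    These evaluate to about 3% and 12%, matching the reductions stated in the abstract.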

  7. Low input and intensified crop production systems effects on soil health and environment

    USDA-ARS?s Scientific Manuscript database

    The material in this chapter covers the concepts of "low-input" and "intensified" production systems in the context of input intensity and sustainability. Research-based case studies are presented that draw out the practicalities of implementing production practices on an input intensity gradient fr...

  8. From field to region yield predictions in response to pedo-climatic variations in Eastern Canada

    NASA Astrophysics Data System (ADS)

    JÉGO, G.; Pattey, E.; Liu, J.

    2013-12-01

    The increase in global population coupled with new pressures to produce energy and bioproducts from agricultural land requires an increase in crop productivity. However, the influence of climate and soil variations on crop production and environmental performance is not fully understood and accounted for to define more sustainable and economical management strategies. Regional crop modeling can be a great tool for understanding the impact of climate variations on crop production, for planning grain handling and for assessing the impact of agriculture on the environment, but it is often limited by the availability of input data. The STICS ("Simulateur mulTIdisciplinaire pour les Cultures Standard") crop model, developed by INRA (France), is a functional crop model with a built-in module to optimize several input parameters by minimizing the difference between calculated and measured output variables, such as Leaf Area Index (LAI). The STICS crop model was adapted to the short growing season of the Mixedwood Plains Ecozone using field experiment results, to predict biomass and yield of soybean, spring wheat and corn. To minimize the number of inferences required for regional applications, 'generic' cultivars rather than specific ones have been calibrated in STICS. After the calibration of several model parameters, the root mean square error (RMSE) of yield and biomass predictions ranged from 10% to 30% for the three crops. A bit more scattering was obtained for LAI (20%

  9. Spatial optimization of an ideal wind energy system as a response to the intermittency of renewable energies?

    NASA Astrophysics Data System (ADS)

    Lassonde, Sylvain; Boucher, Olivier; Breon, François-Marie; Tobin, Isabelle; Vautard, Robert

    2016-04-01

    The share of renewable energies in the mix of electricity production is increasing worldwide. This trend is driven by environmental and economic policies aiming at a reduction of greenhouse gas emissions and an improvement of energy security. It is expected to continue in the forthcoming years and decades. Electricity production from renewables is related to weather and climate factors such as the diurnal and seasonal cycles of sunlight and wind, but is also linked to variability on all time scales. The intermittency in the renewable electricity production (solar, wind power) could eventually hinder their future deployment. Intermittency is indeed a challenge as demand and supply of electricity need to be balanced at any time. This challenge can be addressed by the deployment of an overcapacity in power generation (from renewable and/or thermal sources), a large-scale energy storage system and/or improved management of the demand. The main goal of this study is to optimize a hypothetical renewable energy system at the French and European scales in order to investigate if spatial diversity of the production (here electricity from wind energy) could be a response to the intermittency. We use ECMWF (European Centre for Medium-Range Weather Forecasts) ERA-interim meteorological reanalysis and meteorological fields from the Weather Research and Forecasts (WRF) model to estimate the potential for wind power generation. Electricity demand and production are provided by the French electricity network (RTE) at the scale of administrative regions for years 2013 and 2014. Firstly we will show how the simulated production of wind power compares against the measured production at the national and regional scale. Several modelling and bias correction methods of wind power production will be discussed. Secondly, we will present results from an optimization procedure that aims to minimize some measure of the intermittency of wind energy. 
For instance, we estimate the optimal distribution between French regions (with or without cross-border inputs) that minimizes the impact of low-production periods, computed in a running-mean sense, and its sensitivity to the period considered. We will also assess which meteorological situations are the most problematic over the 35-year ERA-interim climatology (1980-2015).
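The smoothing effect of spatially mixing regional production can be illustrated with a minimal two-region sketch (the series and names below are hypothetical, not from the RTE or ERA-interim datasets): for two capacity-factor time series, the variance-minimizing mix weight has the closed form of the two-asset portfolio solution.

```python
import statistics

def min_variance_mix(a, b):
    """Weight w on region A (1-w on B) minimizing the variance of the
    pooled wind output w*a + (1-w)*b; closed-form two-asset solution."""
    var_a = statistics.pvariance(a)
    var_b = statistics.pvariance(b)
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = statistics.fmean([(x - ma) * (y - mb) for x, y in zip(a, b)])
    denom = var_a + var_b - 2.0 * cov
    if denom == 0.0:                       # perfectly correlated regions
        return 0.5
    return min(1.0, max(0.0, (var_b - cov) / denom))

# Two anti-correlated regional capacity-factor series: a 50/50 mix
# removes the intermittency entirely in this toy case.
region_a = [1.0, 0.0, 1.0, 0.0]
region_b = [0.0, 1.0, 0.0, 1.0]
w = min_variance_mix(region_a, region_b)
```

The actual study optimizes over many regions and a penalized measure of low-production periods rather than plain variance; this sketch only shows why spatial diversity reduces intermittency at all.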

  10. Approximation of discrete-time LQG compensators for distributed systems with boundary input and unbounded measurement

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Rosen, I. G.

    1987-01-01

    The approximation of optimal discrete-time linear quadratic Gaussian (LQG) compensators for distributed parameter control systems with boundary input and unbounded measurement is considered. The approach applies to a wide range of problems that can be formulated in a state space on which both the discrete-time input and output operators are continuous. Approximating compensators are obtained via application of the LQG theory and associated approximation results for infinite dimensional discrete-time control systems with bounded input and output. Numerical results for spline and modal based approximation schemes used to compute optimal compensators for a one dimensional heat equation with either Neumann or Dirichlet boundary control and pointwise measurement of temperature are presented and discussed.

  11. The minimal amount of starting DNA for Agilent’s hybrid capture-based targeted massively parallel sequencing

    PubMed Central

    Chung, Jongsuk; Son, Dae-Soon; Jeon, Hyo-Jeong; Kim, Kyoung-Mee; Park, Gahee; Ryu, Gyu Ha; Park, Woong-Yang; Park, Donghyun

    2016-01-01

    Targeted capture massively parallel sequencing is increasingly being used in clinical settings, and as costs continue to decline, use of this technology may become routine in health care. However, a limited amount of tissue has often been a challenge in meeting quality requirements. To offer a practical guideline for the minimum amount of input DNA for targeted sequencing, we optimized and evaluated the performance of targeted sequencing depending on the input DNA amount. First, using various amounts of input DNA, we compared commercially available library construction kits and selected Agilent’s SureSelect-XT and KAPA Biosystems’ Hyper Prep kits as the kits most compatible with targeted deep sequencing using Agilent’s SureSelect custom capture. Then, we optimized the adapter ligation conditions of the Hyper Prep kit to improve library construction efficiency and adapted multiplexed hybrid selection to reduce the cost of sequencing. In this study, we systematically evaluated the performance of the optimized protocol depending on the amount of input DNA, ranging from 6.25 to 200 ng, suggesting the minimal input DNA amounts based on coverage depths required for specific applications. PMID:27220682

  12. Bioconversion of renewable resources into lactic acid: an industrial view.

    PubMed

    Yadav, A K; Chaudhari, A B; Kothari, R M

    2011-03-01

Lactic acid, an anaerobic product of glycolysis, can theoretically be produced by a synthetic route; however, it is commercially produced by a homo-fermentative batch mode of operation. Factors affecting its production and strategies for improving it are considered while devising an optimized protocol. Although a hetero-fermentative mode of production exists, it is rarely used for commercial production. Attempts to use Rhizopus sp. for lactic acid production under either hetero-fermentative or thermophilic conditions were not economical. Since almost 70% of the cost of its production is accounted for by raw materials, R&D efforts are still focused on finding economically attractive agri-products to serve as sources of carbon and complex nitrogen inputs to meet the fastidious nutrient needs for microbial growth and lactic acid production. Therefore, a need exists for multi-pronged strategies for higher productivity. Its present production and consumption scenario is examined. Its optically active isomers and chemical structure permit its use in the production of several industrially important chemicals, health products (probiotics), food preservatives, and bio-plastics. In addition, its salts and esters appear to have a variety of applications.

  13. Program document for Energy Systems Optimization Program 2 (ESOP2). Volume 1: Engineering manual

    NASA Technical Reports Server (NTRS)

    Hamil, R. G.; Ferden, S. L.

    1977-01-01

    The Energy Systems Optimization Program, which is used to provide analyses of Modular Integrated Utility Systems (MIUS), is discussed. Modifications to the input format to allow modular inputs in specified blocks of data are described. An optimization feature which enables the program to search automatically for the minimum value of one parameter while varying the value of other parameters is reported. New program option flags for prime mover analyses and solar energy for space heating and domestic hot water are also covered.

  14. Investigation of 16 × 10 Gbps DWDM System Based on Optimized Semiconductor Optical Amplifier

    NASA Astrophysics Data System (ADS)

    Rani, Aruna; Dewra, Sanjeev

    2017-08-01

This paper investigates the performance of an optical system based on an optimized semiconductor optical amplifier (SOA) at 160 Gbps with 0.8 nm channel spacing. Transmission distances up to 280 km at -30 dBm input signal power and up to 247 km at -32 dBm input signal power with acceptable bit error rate (BER) and Q-factor are examined. It is also shown that a transmission distance of up to 292 km can be achieved at -28 dBm input signal power using Dispersion-Shifted (DS)-Normal fiber without any power compensation methods.

  15. Optimal controller design for high performance aircraft undergoing large disturbance angles

    NASA Technical Reports Server (NTRS)

    Rhoten, R. P.

    1974-01-01

An examination of two aircraft controller structures applicable to on-line implementation was conducted. The two controllers, a linear regulator model follower and an inner-product model follower, were applied to the lateral dynamics of the F8-C aircraft. The controller designs were evaluated for four flight conditions. Additionally, effects of pilot input, rapid variation of flight condition, and control surface rate and magnitude deflection limits were considered.

  16. Backup agreements with penalty scheme under supply disruptions

    NASA Astrophysics Data System (ADS)

    Hou, Jing; Zhao, Lindu

    2012-05-01

This article considers a supply chain for a single product involving one retailer and two independent suppliers. While the main supplier might fail to supply the products, the backup supplier can always supply them at a higher price. The retailer could use the backup supplier as a regular provider or as a stand-by source by reserving some products at the supplier. A backup agreement with a penalty scheme is constructed between the retailer and the backup supplier to mitigate supply disruptions and demand uncertainty. The expected profit functions and the optimal decisions of the two players are derived through a sequential optimisation process. Then, the sensitivity of the two players' expected profits to various input factors is examined through numerical examples. The impacts of the disruption probability and the demand uncertainty on the backup agreement are also investigated, which could provide guidelines on how to use each sourcing method.
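The retailer's reservation decision can be sketched with a deliberately simplified single-period model (all numbers, cost names, and the profit structure below are hypothetical stand-ins, not the paper's exact formulation): reserve q units at the backup supplier for a per-unit fee, and exercise up to q of them at a higher execution price when the main supplier is disrupted.

```python
def expected_profit(q, demand_pmf, price, c_main, c_exec, c_reserve, p_disrupt):
    """Retailer's expected profit when q units are reserved at the backup
    supplier: the reservation fee c_reserve is sunk; with probability
    p_disrupt the main supplier fails and at most q units can be bought
    at the higher execution cost c_exec; unmet demand is lost."""
    profit = -c_reserve * q
    for d, prob in demand_pmf.items():
        ok = (price - c_main) * d                 # main supplier delivers
        fail = (price - c_exec) * min(d, q)       # fall back on reservation
        profit += prob * ((1 - p_disrupt) * ok + p_disrupt * fail)
    return profit

def optimal_reservation(demand_pmf, **kw):
    """Brute-force search over reservation quantities."""
    qs = range(max(demand_pmf) + 1)
    return max(qs, key=lambda q: expected_profit(q, demand_pmf, **kw))

pmf = {80: 0.25, 100: 0.5, 120: 0.25}            # hypothetical demand
q_star = optimal_reservation(pmf, price=10.0, c_main=6.0,
                             c_exec=8.0, c_reserve=0.2, p_disrupt=0.2)
```

The optimal q balances the marginal reservation fee against the expected marginal value of backup capacity, p_disrupt * (price - c_exec) * P(D >= q); with no disruption risk the retailer reserves nothing.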

  17. Texas two-step: a framework for optimal multi-input single-output deconvolution.

    PubMed

    Neelamani, Ramesh; Deffenbaugh, Max; Baraniuk, Richard G

    2007-11-01

Multi-input single-output deconvolution (MISO-D) aims to extract a deblurred estimate of a target signal from several blurred and noisy observations. This paper develops a new two-step framework--Texas Two-Step--to solve MISO-D problems with known blurs. Texas Two-Step first reduces the MISO-D problem to a related single-input single-output deconvolution (SISO-D) problem by invoking the concept of sufficient statistics (SSs) and then solves the simpler SISO-D problem using an appropriate technique. The two-step framework enables new MISO-D techniques (both optimal and suboptimal) based on the rich suite of existing SISO-D techniques. In fact, the properties of SSs imply that a MISO-D algorithm is mean-squared-error optimal if and only if it can be rearranged to conform to the Texas Two-Step framework. Using this insight, we construct new wavelet- and curvelet-based MISO-D algorithms with asymptotically optimal performance. Simulated and real data experiments verify that the framework is indeed effective.

  18. Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.

    2001-01-01

    This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
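The first-order moment-matching step can be sketched generically (this is not the quasi 1-D Euler code; the model function, input means, and standard deviations below are hypothetical): the output mean is approximated by evaluating the model at the input means, and the output variance by the sensitivity-weighted sum of input variances.

```python
import math

def first_order_moments(f, means, sigmas, h=1e-6):
    """First-order moment matching for independent inputs: mean ~ f(means),
    variance ~ sum of (df/dx_i * sigma_i)^2, with the sensitivity
    derivatives estimated here by central differences."""
    mu = f(means)
    var = 0.0
    for i, s in enumerate(sigmas):
        up = list(means); up[i] += h
        dn = list(means); dn[i] -= h
        dfdx = (f(up) - f(dn)) / (2.0 * h)
        var += (dfdx * s) ** 2
    return mu, math.sqrt(var)

# For a linear model the first-order approximation is exact: f = 2x + 3y.
mu, sd = first_order_moments(lambda v: 2.0 * v[0] + 3.0 * v[1],
                             [1.0, 2.0], [1.0, 2.0])
```

For nonlinear models the paper checks this approximation against Monte Carlo moments, which is exactly the comparison one would bolt onto this sketch.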

  19. Automated sequence-specific protein NMR assignment using the memetic algorithm MATCH.

    PubMed

    Volk, Jochen; Herrmann, Torsten; Wüthrich, Kurt

    2008-07-01

    MATCH (Memetic Algorithm and Combinatorial Optimization Heuristics) is a new memetic algorithm for automated sequence-specific polypeptide backbone NMR assignment of proteins. MATCH employs local optimization for tracing partial sequence-specific assignments within a global, population-based search environment, where the simultaneous application of local and global optimization heuristics guarantees high efficiency and robustness. MATCH thus makes combined use of the two predominant concepts in use for automated NMR assignment of proteins. Dynamic transition and inherent mutation are new techniques that enable automatic adaptation to variable quality of the experimental input data. The concept of dynamic transition is incorporated in all major building blocks of the algorithm, where it enables switching between local and global optimization heuristics at any time during the assignment process. Inherent mutation restricts the intrinsically required randomness of the evolutionary algorithm to those regions of the conformation space that are compatible with the experimental input data. Using intact and artificially deteriorated APSY-NMR input data of proteins, MATCH performed sequence-specific resonance assignment with high efficiency and robustness.

  20. Optimal nonlinear codes for the perception of natural colours.

    PubMed

    von der Twer, T; MacLeod, D I

    2001-08-01

    We discuss how visual nonlinearity can be optimized for the precise representation of environmental inputs. Such optimization leads to neural signals with a compressively nonlinear input-output function the gradient of which is matched to the cube root of the probability density function (PDF) of the environmental input values (and not to the PDF directly as in histogram equalization). Comparisons between theory and psychophysical and electrophysiological data are roughly consistent with the idea that parvocellular (P) cells are optimized for precision representation of colour: their contrast-response functions span a range appropriately matched to the environmental distribution of natural colours along each dimension of colour space. Thus P cell codes for colour may have been selected to minimize error in the perceptual estimation of stimulus parameters for natural colours. But magnocellular (M) cells have a much stronger than expected saturating nonlinearity; this supports the view that the function of M cells is mainly to detect boundaries rather than to specify contrast or lightness.
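The gradient-matching rule can be made concrete with a discrete sketch (the input distribution below is hypothetical): the response function is built so its local slope is proportional to pdf(x) raised to a chosen exponent, with exponent 1/3 giving the error-minimizing code described above and exponent 1 recovering histogram equalization.

```python
def optimal_response(pdf, exponent=1.0 / 3.0):
    """Discrete input-output function whose gradient is proportional to
    pdf**exponent, normalized so the response spans (0, 1]."""
    grad = [p ** exponent for p in pdf]
    total = sum(grad)
    out, acc = [], 0.0
    for g in grad:
        acc += g
        out.append(acc / total)   # normalized cumulative response
    return out

# A peaked input distribution: the cube-root code devotes fewer response
# levels to the peak than histogram equalization does.
pdf = [0.05, 0.1, 0.7, 0.1, 0.05]
cube_root = optimal_response(pdf)
hist_eq = optimal_response(pdf, exponent=1.0)
```

Comparing the two responses at the peak bin shows the qualitative difference the abstract describes: the cube-root code compresses less aggressively than equalization, trading coding density at the mode for precision in the tails.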

1. Inner-product array processor for retrieval of stored images represented by bipolar binary (+1,-1) pixels using partial input trinary pixels represented by (+1,0,-1)

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang (Inventor); Awwal, Abdul A. S. (Inventor); Karim, Mohammad A. (Inventor)

    1993-01-01

An inner-product array processor is provided with thresholding of the inner product during each iteration to make more significant the inner product employed in estimating a vector to be used as the input vector for the next iteration. While stored vectors and estimated vectors are represented in bipolar binary (+1,-1), only those elements of an initial partial input vector that are believed to be common with those of a stored vector are represented in bipolar binary; the remaining elements of a partial input vector are set to 0. This mode of representation, in which the known elements of a partial input vector are in bipolar binary form and the remaining elements are set equal to 0, is referred to as trinary representation. The initial inner products corresponding to the partial input vector will then be equal to the number of known elements. Inner-product thresholding is applied to accelerate convergence and to avoid convergence to a negative inner product.
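The recall loop can be sketched in a few lines (the stored vectors and the thresholding rule below are illustrative choices, not the patented optical implementation): unknown pixels enter as 0, each stored vector's inner product with the current estimate is thresholded at zero, and the surviving products weight the stored vectors in the next bipolar estimate.

```python
def recall(stored, partial, max_iter=10):
    """Iterative inner-product recall with trinary partial input:
    known pixels are +1/-1, unknown pixels are 0. Negative inner
    products are suppressed before forming the next estimate."""
    sgn = lambda s: 1 if s > 0 else (-1 if s < 0 else 0)
    x = list(partial)
    for _ in range(max_iter):
        ips = [sum(vi * xi for vi, xi in zip(v, x)) for v in stored]
        ips = [ip if ip > 0 else 0 for ip in ips]     # thresholding step
        new = [sgn(sum(ip * v[i] for ip, v in zip(ips, stored)))
               for i in range(len(x))]
        if new == x:
            break
        x = new
    return x

memory = [[1, 1, 1, -1], [-1, 1, -1, 1]]
partial = [1, 0, 1, 0]     # two known pixels, matching memory[0]
restored = recall(memory, partial)
```

Consistent with the abstract, the initial inner product with the matching stored vector equals the number of known elements (2 here), and thresholding removes the interference from the other stored vector in a single iteration.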

  2. Experimental evidence for the immediate impact of fertilization and irrigation upon the plant and invertebrate communities of mountain grasslands

    PubMed Central

    Andrey, Aline; Humbert, Jean-Yves; Pernollet, Claire; Arlettaz, Raphaël

    2014-01-01

    The response of montane and subalpine hay meadow plant and arthropod communities to the application of liquid manure and aerial irrigation – two novel, rapidly spreading management practices – remains poorly understood, which hampers the formulation of best practice management recommendations for both hay production and biodiversity preservation. In these nutrient-poor mountain grasslands, a moderate management regime could enhance overall conditions for biodiversity. This study experimentally assessed, at the site scale, among low-input montane and subalpine meadows, the short-term effects (1 year) of a moderate intensification (slurry fertilization: 26.7–53.3 kg N·ha−1·year−1; irrigation with sprinklers: 20 mm·week−1; singly or combined together) on plant species richness, vegetation structure, hay production, and arthropod abundance and biomass in the inner European Alps (Valais, SW Switzerland). Results show that (1) montane and subalpine hay meadow ecological communities respond very rapidly to an intensification of management practices; (2) on a short-term basis, a moderate intensification of very low-input hay meadows has positive effects on plant species richness, vegetation structure, hay production, and arthropod abundance and biomass; (3) vegetation structure is likely to be the key factor limiting arthropod abundance and biomass. Our ongoing experiments will in the longer term identify which level of management intensity achieves an optimal balance between biodiversity and hay production. PMID:25360290

  3. Aerospace engineering design by systematic decomposition and multilevel optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Barthelemy, J. F. M.; Giles, G. L.

    1984-01-01

A method for systematic analysis and optimization of large engineering systems, by decomposition of a large task into a set of smaller subtasks that are solved concurrently, is described. The subtasks may be arranged in hierarchical levels. Analyses are carried out in each subtask using inputs received from other subtasks, and are followed by optimizations carried out from the bottom up. Each optimization at the lower levels is augmented by analysis of its sensitivity to the inputs received from other subtasks to account for the couplings among the subtasks in a formal manner. The analysis and optimization operations alternate iteratively until they converge to a system design whose performance is maximized with all constraints satisfied. The method, which is still under development, is tentatively validated by test cases in structural applications and an aircraft configuration optimization.

  4. Characterization of real-world vibration sources with a view toward optimal energy harvesting architectures

    NASA Astrophysics Data System (ADS)

    Rantz, Robert; Roundy, Shad

    2016-04-01

A tremendous amount of research has been performed on the design and analysis of vibration energy harvester architectures with the goal of optimizing power output; most studies assume idealized input vibrations without paying much attention to whether such idealizations are broadly representative of real sources. These "idealized input signals" are typically derived from the expected nature of the vibrations produced from a given source. Little work has been done on corroborating these expectations by virtue of compiling a comprehensive list of vibration signals organized by detailed classifications. Vibration data representing 333 signals were collected from the NiPS Laboratory "Real Vibration" database, processed, and categorized according to the source of the signal (e.g. animal, machine, etc.), the number of dominant frequencies, the nature of the dominant frequencies (e.g. stationary, band-limited noise, etc.), and other metrics. By categorizing signals in this way, the set of idealized vibration inputs commonly assumed for harvester input can be corroborated and refined, and heretofore overlooked vibration input types can be identified for investigation. An initial qualitative analysis of vibration signals has been undertaken with the goal of determining how often a standard linear oscillator based harvester is likely the optimal architecture, and how often a nonlinear harvester with a cubic stiffness function might provide improvement. Although preliminary, the analysis indicates that in at least 23% of cases, a linear harvester is likely optimal and in no more than 53% of cases would a nonlinear cubic stiffness based harvester provide improvement.
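One of the categorization metrics, the number of dominant frequencies, can be sketched with a plain DFT (the signal, sampling rate, and relative threshold below are hypothetical choices, not the study's actual processing pipeline):

```python
import cmath, math

def dominant_frequencies(signal, fs, rel_thresh=0.4):
    """Return the frequencies (Hz) whose DFT magnitude is within a
    relative threshold of the strongest non-DC component; their count
    is one way to categorize a vibration signal."""
    n = len(signal)
    mags = [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]
    peak = max(mags[1:])
    return [k * fs / n for k in range(1, n // 2)
            if mags[k] >= rel_thresh * peak]

# Two stationary tones at 5 Hz and 12 Hz, sampled at 64 Hz for 1 s.
fs, n = 64, 64
sig = [math.sin(2 * math.pi * 5 * t / fs) +
       0.5 * math.sin(2 * math.pi * 12 * t / fs) for t in range(n)]
freqs = dominant_frequencies(sig, fs)
```

A signal classified this way as "two stationary dominant frequencies" would fall in a different category from band-limited noise, for which no bin stands clearly above the threshold.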

  5. An efficient matrix product operator representation of the quantum chemical Hamiltonian

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keller, Sebastian, E-mail: sebastian.keller@phys.chem.ethz.ch; Reiher, Markus, E-mail: markus.reiher@phys.chem.ethz.ch; Dolfi, Michele, E-mail: dolfim@phys.ethz.ch

We describe how to efficiently construct the quantum chemical Hamiltonian operator in matrix product form. We present its implementation as a density matrix renormalization group (DMRG) algorithm for quantum chemical applications. Existing implementations of DMRG for quantum chemistry are based on the traditional formulation of the method, which was developed from the point of view of Hilbert space decimation and attained higher performance compared to straightforward implementations of matrix product based DMRG. The latter variationally optimizes a class of ansatz states known as matrix product states, where operators are correspondingly represented as matrix product operators (MPOs). The MPO construction scheme presented here eliminates the previous performance disadvantages while retaining the additional flexibility provided by a matrix product approach, for example, the specification of expectation values becomes an input parameter. In this way, MPOs for different symmetries — abelian and non-abelian — and different relativistic and non-relativistic models may be solved by an otherwise unmodified program.

  6. Calibration of a biome-biogeochemical cycles model for modeling the net primary production of teak forests through inverse modeling of remotely sensed data

    NASA Astrophysics Data System (ADS)

    Imvitthaya, Chomchid; Honda, Kiyoshi; Lertlum, Surat; Tangtham, Nipon

    2011-01-01

In this paper, we present the results of net primary production (NPP) modeling of teak (Tectona grandis Lin F.), an important species in tropical deciduous forests. The biome-biogeochemical cycles (Biome-BGC) model was calibrated to estimate NPP through the inverse modeling approach. A genetic algorithm (GA) was linked with Biome-BGC to determine the optimal ecophysiological model parameters. The Biome-BGC model was calibrated by adjusting the ecophysiological parameters to fit the simulated LAI to the satellite LAI (SPOT-Vegetation), and the quality of the best fit confirmed the high accuracy of the ecophysiological parameters generated by the GA. The modeled NPP, using the GA-optimized parameters as input data, was evaluated against daily NPP derived from the MODIS satellite and against annual field data in northern Thailand. The results showed that NPP obtained using the optimized ecophysiological parameters was more accurate than that obtained using default literature parameterization. This improvement occurred mainly because the optimized parameters reduced systematic underestimation in the model. These Biome-BGC results can be effectively applied to teak forests in tropical areas. The study proposes a more effective method of using a GA to determine ecophysiological parameters at the site level and represents a first step toward the analysis of the carbon budget of teak plantations at the regional scale.
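The inverse-modeling loop can be sketched with a toy stand-in for Biome-BGC (the saturating LAI curve, parameter bounds, and GA settings below are all hypothetical): the GA searches parameter space to minimize the misfit between simulated and "observed" LAI.

```python
import math, random

def toy_lai(params, t):
    """Stand-in for the Biome-BGC LAI simulation: a saturating growth
    curve with two ecophysiological parameters (capacity, rate)."""
    cap, rate = params
    return cap * (1.0 - math.exp(-rate * t))

def calibrate(observed, times, bounds, pop=30, gens=60, seed=1):
    """Minimal GA: tournament selection, blend crossover, gaussian
    mutation, and elitism on the sum-of-squares LAI misfit."""
    rng = random.Random(seed)
    def loss(p):
        return sum((toy_lai(p, t) - o) ** 2 for t, o in zip(times, observed))
    def clip(v, lo, hi):
        return min(hi, max(lo, v))
    popn = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    best = min(popn, key=loss)
    for _ in range(gens):
        nxt = [best[:]]                               # elitism
        while len(nxt) < pop:
            a, b = (min(rng.sample(popn, 3), key=loss) for _ in range(2))
            w = rng.random()
            nxt.append([clip(w * x + (1 - w) * y
                             + rng.gauss(0, 0.05 * (hi - lo)), lo, hi)
                        for x, y, (lo, hi) in zip(a, b, bounds)])
        popn = nxt
        best = min(popn, key=loss)
    return best, loss(best)

times = list(range(1, 11))
observed = [toy_lai((5.0, 0.3), t) for t in times]     # synthetic "satellite" LAI
best, misfit = calibrate(observed, times, bounds=[(0.0, 10.0), (0.0, 1.0)])
```

In the study the simulated-vs-satellite LAI misfit plays the role of the loss function here, and the recovered ecophysiological parameters are then fed back into the model to predict NPP.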

  7. Production Function Geometry with "Knightian" Total Product

    ERIC Educational Resources Information Center

    Truett, Dale B.; Truett, Lila J.

    2007-01-01

    Authors of principles and price theory textbooks generally illustrate short-run production using a total product curve that displays first increasing and then diminishing marginal returns to employment of the variable input(s). Although it seems reasonable that a temporary range of increasing returns to variable inputs will likely occur as…

  8. Capabilities and applications of the Program to Optimize Simulated Trajectories (POST). Program summary document

    NASA Technical Reports Server (NTRS)

    Brauer, G. L.; Cornick, D. E.; Stevenson, R.

    1977-01-01

The capabilities and applications of the three-degree-of-freedom (3DOF) and six-degree-of-freedom (6DOF) versions of the Program to Optimize Simulated Trajectories (POST) are summarized. The document supplements the detailed program manuals by providing additional information that motivates and clarifies the basic capabilities, input procedures, applications, and computer requirements of these programs. The information will enable prospective users to evaluate the programs and to determine if they are applicable to their problems. Enough information is given to enable managerial personnel to evaluate the capabilities of the programs. The document describes the POST structure, formulation, input and output procedures, sample cases, and computer requirements. The report also provides answers to basic questions concerning planet and vehicle modeling, simulation accuracy, optimization capabilities, and general input rules. Several sample cases are presented.

  9. Increasing water productivity, nitrogen economy, and grain yield of rice by water saving irrigation and fertilizer-N management.

    PubMed

    Aziz, Omar; Hussain, Saddam; Rizwan, Muhammad; Riaz, Muhammad; Bashir, Saqib; Lin, Lirong; Mehmood, Sajid; Imran, Muhammad; Yaseen, Rizwan; Lu, Guoan

    2018-06-01

The looming scarcity of water resources worldwide necessitates the development of water-saving technologies in rice production. An open greenhouse experiment was conducted on rice during the summer season of 2016 at Huazhong Agricultural University, Wuhan, China, in order to study the influence of irrigation methods and nitrogen (N) inputs on water productivity, N economy, and grain yield of rice. Two irrigation methods, viz. conventional irrigation (CI) and "thin-shallow-moist-dry" irrigation (TSMDI), and three levels of nitrogen, viz. 0 kg N ha−1 (N0), 90 kg N ha−1 (N1), and 180 kg N ha−1 (N2), were examined with three replications. The data indicated no significant water-by-nitrogen interaction on grain yield, biomass, water productivity, N uptake, NUE, or fertilizer-N balance. Results revealed that the TSMDI method achieved significantly higher water productivity, and irrigation water applications were reduced by 17.49% in TSMDI compared to CI. Thus, TSMDI enhanced root growth and offered significantly greater water saving along with higher grain yield compared to CI. The nitrogen tracer (15N) technique accurately assessed the absorption and distribution of added N in the soil-crop environment and revealed higher nitrogen use efficiency (NUE) under TSMDI. At the same N inputs, TSMDI was the optimal method to minimize nitrogen leaching loss by decreasing water leakage by about 18.63%, which is beneficial for the ecological environment.

  10. SCI model structure determination program (OSR) user's guide. [optimal subset regression

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program, OSR (Optimal Subset Regression) which estimates models for rotorcraft body and rotor force and moment coefficients is described. The technique used is based on the subset regression algorithm. Given time histories of aerodynamic coefficients, aerodynamic variables, and control inputs, the program computes correlation between various time histories. The model structure determination is based on these correlations. Inputs and outputs of the program are given.
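The subset-regression idea behind OSR can be sketched as greedy forward selection (the data, column count, and stopping rule below are hypothetical, not the program's actual algorithm): repeatedly add the candidate time history most correlated with the current residual, refit by least squares, and stop when the residual is negligible.

```python
import random

def _solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (small systems)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols(cols, y):
    """Least squares on the selected columns via the normal equations."""
    A = [[sum(a * b for a, b in zip(ci, cj)) for cj in cols] for ci in cols]
    rhs = [sum(c * yi for c, yi in zip(ci, y)) for ci in cols]
    return _solve(A, rhs)

def forward_subset(candidates, y, tol=1e-8):
    """Greedy subset regression: add the column with the largest absolute
    inner product with the residual until the fit is (near) exact."""
    chosen, resid, coef = [], y[:], []
    while sum(r * r for r in resid) > tol:
        scores = [abs(sum(c * r for c, r in zip(col, resid)))
                  if i not in chosen else -1.0
                  for i, col in enumerate(candidates)]
        chosen.append(max(range(len(candidates)), key=lambda i: scores[i]))
        cols = [candidates[i] for i in chosen]
        coef = ols(cols, y)
        fit = [sum(b * col[j] for b, col in zip(coef, cols))
               for j in range(len(y))]
        resid = [yi - fi for yi, fi in zip(y, fit)]
        if len(chosen) == len(candidates):
            break
    chosen.sort()
    coef = ols([candidates[i] for i in chosen], y)
    return chosen, coef

rng = random.Random(0)
cols = [[rng.gauss(0, 1) for _ in range(20)] for _ in range(4)]
y = [2.0 * a - 1.0 * c for a, c in zip(cols[0], cols[2])]  # true model: x0, x2
subset, coef = forward_subset(cols, y)
```

In OSR the candidates are aerodynamic variables and control inputs, and the correlations between time histories drive the same kind of model-structure choice sketched here.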

  11. Microbial enzymes: industrial progress in 21st century.

    PubMed

    Singh, Rajendra; Kumar, Manoj; Mittal, Anshumali; Mehta, Praveen Kumar

    2016-12-01

The biocatalytic potential of microorganisms has been employed for centuries to produce bread, wine, vinegar, and other common products without understanding the biochemical basis of their ingredients. Microbial enzymes have gained interest for their widespread uses in industry and medicine owing to their stability, catalytic activity, and ease of production and optimization compared with plant and animal enzymes. The use of enzymes in various industries (e.g., food, agriculture, chemicals, and pharmaceuticals) is increasing rapidly due to reduced processing time, low energy input, cost effectiveness, and nontoxic and eco-friendly characteristics. Microbial enzymes are capable of degrading toxic chemical compounds of industrial and domestic wastes (phenolic compounds, nitriles, amines, etc.) via either degradation or conversion. In this review, we highlight and discuss the current technical and scientific involvement of microorganisms in enzyme production and their present status in the worldwide enzyme market.

  12. Development of mathematical models and optimization of the process parameters of laser surface hardened EN25 steel using elitist non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Vignesh, S.; Dinesh Babu, P.; Surya, G.; Dinesh, S.; Marimuthu, P.

    2018-02-01

The ultimate goal of all production entities is to select process parameters that yield maximum strength and minimum wear and friction. Friction and wear are serious problems in most industries; they are influenced by the working set of parameters, oxidation characteristics, and the mechanism involved in wear formation. The experimental input parameters, namely sliding distance, applied load, and temperature, are utilized in finding the optimized solution for achieving the desired output responses, namely coefficient of friction, wear rate, and volume loss. The optimization is performed with the help of a novel method, the Elitist Non-dominated Sorting Genetic Algorithm (NSGA-II), based on an evolutionary algorithm. The regression equations obtained using Response Surface Methodology (RSM) are used in determining the optimum process parameters. Further, the results achieved through the desirability approach in RSM are compared with the optimized solution obtained through NSGA-II. The results show that the proposed evolutionary technique is more effective and faster than the desirability approach.
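The core ranking step of NSGA-II, non-dominated sorting, is easy to sketch (the points below are hypothetical (wear rate, friction coefficient) pairs, both minimized; crowding distance and the genetic operators are omitted):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Peel off successive Pareto fronts, as in NSGA-II's ranking step."""
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Hypothetical (wear rate, friction coefficient) outcomes.
pts = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
fronts = non_dominated_sort(pts)
```

In the full algorithm, individuals are selected by front rank (and crowding distance within a front), which is how NSGA-II trades off the competing wear and friction objectives instead of collapsing them into a single desirability score.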

  13. A Measure Approximation for Distributionally Robust PDE-Constrained Optimization Problems

    DOE PAGES

    Kouri, Drew Philip

    2017-12-19

In numerous applications, scientists and engineers acquire varied forms of data that partially characterize the inputs to an underlying physical system. This data is then used to inform decisions such as controls and designs. Consequently, it is critical that the resulting control or design is robust to the inherent uncertainties associated with the unknown probabilistic characterization of the model inputs. Here in this work, we consider optimal control and design problems constrained by partial differential equations with uncertain inputs. We do not assume a known probabilistic model for the inputs, but rather we formulate the problem as a distributionally robust optimization problem where the outer minimization problem determines the control or design, while the inner maximization problem determines the worst-case probability measure that matches desired characteristics of the data. We analyze the inner maximization problem in the space of measures and introduce a novel measure approximation technique, based on the approximation of continuous functions, to discretize the unknown probability measure. Finally, we prove consistency of our approximated min-max problem and conclude with numerical results.

  14. An Explicit Linear Filtering Solution for the Optimization of Guidance Systems with Statistical Inputs

    NASA Technical Reports Server (NTRS)

    Stewart, Elwood C.

    1961-01-01

    The determination of optimum filtering characteristics for guidance system design is generally a tedious process which cannot usually be carried out in general terms. In this report a simple explicit solution is given which is applicable to many different types of problems. It is shown to be applicable to problems which involve optimization of constant-coefficient guidance systems and time-varying homing type systems for several stationary and nonstationary inputs. The solution is also applicable to off-design performance, that is, the evaluation of system performance for inputs for which the system was not specifically optimized. The solution is given in generalized form in terms of the minimum theoretical error, the optimum transfer functions, and the optimum transient response. The effects of input signal, contaminating noise, and limitations on the response are included. From the results given, it is possible in an interception problem, for example, to rapidly assess the effects on minimum theoretical error of such factors as target noise and missile acceleration. It is also possible to answer important questions regarding the effect of type of target maneuver on optimum performance.
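The flavor of such an explicit solution can be shown with the classic non-causal Wiener result for a signal in independent additive noise (this stands in for the report's guidance-system solution; the flat spectra below are hypothetical): the optimum transfer function and the minimum theoretical error follow directly from the signal and noise spectra.

```python
def wiener_gain(signal_psd, noise_psd):
    """Optimum (non-causal Wiener) transfer function for estimating a
    signal from signal-plus-independent-noise: H = S_ss / (S_ss + S_nn),
    evaluated per frequency bin."""
    return [s / (s + n) for s, n in zip(signal_psd, noise_psd)]

def min_theoretical_error(signal_psd, noise_psd, df):
    """Minimum mean-square error of the optimum filter: the integral of
    S_ss * S_nn / (S_ss + S_nn) over frequency."""
    return df * sum(s * n / (s + n) for s, n in zip(signal_psd, noise_psd))

# White signal and white noise of equal power: the optimum gain is 1/2
# at every frequency and the theoretical error is half the signal power.
S = [1.0] * 8
N = [1.0] * 8
H = wiener_gain(S, N)
err = min_theoretical_error(S, N, df=1.0)
```

Off-design performance, in this picture, is what happens when H is computed for one pair of spectra but the actual input has another; the report's contribution is giving such quantities in closed form for time-varying homing systems as well.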

  15. LMI-Based Fuzzy Optimal Variance Control of Airfoil Model Subject to Input Constraints

    NASA Technical Reports Server (NTRS)

    Swei, Sean S.M.; Ayoubi, Mohammad A.

    2017-01-01

    This paper presents a study of fuzzy optimal variance control problem for dynamical systems subject to actuator amplitude and rate constraints. Using Takagi-Sugeno fuzzy modeling and dynamic Parallel Distributed Compensation technique, the stability and the constraints can be cast as a multi-objective optimization problem in the form of Linear Matrix Inequalities. By utilizing the formulations and solutions for the input and output variance constraint problems, we develop a fuzzy full-state feedback controller. The stability and performance of the proposed controller are demonstrated through its application to airfoil flutter suppression.

  16. Optimal discrete-time LQR problems for parabolic systems with unbounded input: Approximation and convergence

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1988-01-01

    An abstract approximation and convergence theory for the closed-loop solution of discrete-time linear-quadratic regulator problems for parabolic systems with unbounded input is developed. Under relatively mild stabilizability and detectability assumptions, functional analytic, operator techniques are used to demonstrate the norm convergence of Galerkin-based approximations to the optimal feedback control gains. The application of the general theory to a class of abstract boundary control systems is considered. Two examples, one involving the Neumann boundary control of a one-dimensional heat equation, and the other, the vibration control of a cantilevered viscoelastic beam via shear input at the free end, are discussed.
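
    As a toy analogue of the Galerkin approach, one can lump a 1-D heat equation onto a few nodes and solve the resulting discrete-time Riccati equation by fixed-point iteration; the dimensions, time step, and weights below are assumed for illustration:

```python
import numpy as np

# Lumped approximation of a 1-D heat equation with boundary input:
# three interior nodes, explicit Euler in time (all values assumed).
n, dt, k = 3, 0.01, 50.0
L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # discrete Laplacian
A = np.eye(n) + dt * k * L                              # discrete-time dynamics
B = dt * k * np.array([[1.0], [0.0], [0.0]])            # boundary input at one end
Q, R = np.eye(n), np.array([[0.1]])

# Solve the discrete algebraic Riccati equation by fixed-point iteration
# and form the optimal feedback gain K.
P = np.eye(n)
for _ in range(5000):
    S = R + B.T @ P @ B
    P = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(S, B.T @ P @ A)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Closed-loop spectral radius: < 1 means the feedback gain stabilizes the lattice.
rho = max(abs(np.linalg.eigvals(A - B @ K)))
```

The convergence theory in the paper concerns exactly this kind of question: whether the gains computed on such finite-dimensional approximations converge, in norm, to those of the underlying infinite-dimensional problem.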

  17. User input in iterative design for prevention product development: leveraging interdisciplinary methods to optimize effectiveness.

    PubMed

    Guthrie, Kate M; Rosen, Rochelle K; Vargas, Sara E; Guillen, Melissa; Steger, Arielle L; Getz, Melissa L; Smith, Kelley A; Ramirez, Jaime J; Kojic, Erna M

    2017-10-01

    The development of HIV-preventive topical vaginal microbicides has been challenged by a lack of sufficient adherence in later stage clinical trials to confidently evaluate effectiveness. This dilemma has highlighted the need to integrate translational research earlier in the drug development process, essentially applying behavioral science to facilitate the advances of basic science with respect to the uptake and use of biomedical prevention technologies. In the last several years, there has been an increasing recognition that the user experience, specifically the sensory experience, as well as the role of meaning-making elicited by those sensations, may play a more substantive role than previously thought. Importantly, the role of the user-their sensory perceptions, their judgements of those experiences, and their willingness to use a product-is critical in product uptake and consistent use post-marketing, ultimately realizing gains in global public health. Specifically, a successful prevention product requires an efficacious drug, an efficient drug delivery system, and an effective user. We present an integrated iterative drug development and user experience evaluation method to illustrate how user-centered formulation design can be iterated from the early stages of preclinical development to leverage the user experience. Integrating the user and their product experiences into the formulation design process may help optimize both the efficiency of drug delivery and the effectiveness of the user.

  18. Optimizing rice yields while minimizing yield-scaled global warming potential.

    PubMed

    Pittelkow, Cameron M; Adviento-Borbe, Maria A; van Kessel, Chris; Hill, James E; Linquist, Bruce A

    2014-05-01

    To meet growing global food demand with limited land and reduced environmental impact, agricultural greenhouse gas (GHG) emissions are increasingly evaluated with respect to crop productivity, i.e., on a yield-scaled as opposed to area basis. Here, we compiled available field data on CH4 and N2O emissions from rice production systems to test the hypothesis that in response to fertilizer nitrogen (N) addition, yield-scaled global warming potential (GWP) will be minimized at N rates that maximize yields. Within each study, yield N surplus was calculated to estimate deficit or excess N application rates with respect to the optimal N rate (defined as the N rate at which maximum yield was achieved). Relationships between yield N surplus and GHG emissions were assessed using linear and nonlinear mixed-effects models. Results indicate that yields increased in response to increasing N surplus when moving from deficit to optimal N rates. At N rates contributing to a yield N surplus, N2O and yield-scaled N2O emissions increased exponentially. In contrast, CH4 emissions were not impacted by N inputs. Accordingly, yield-scaled CH4 emissions decreased with N addition. Overall, yield-scaled GWP was minimized at optimal N rates, decreasing by 21% compared to treatments without N addition. These results are unique compared to aerobic cropping systems in which N2O emissions are the primary contributor to GWP, meaning yield-scaled GWP may not necessarily decrease for aerobic crops when yields are optimized by N fertilizer addition. Balancing gains in agricultural productivity with climate change concerns, this work supports the concept that high rice yields can be achieved with minimal yield-scaled GWP through optimal N application rates. Moreover, additional improvements in N use efficiency may further reduce yield-scaled GWP, thereby strengthening the economic and environmental sustainability of rice systems. © 2013 John Wiley & Sons Ltd.
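
    The yield-scaled bookkeeping is easy to reproduce with assumed response curves; the functions and coefficients below are illustrative stand-ins for the compiled field data (only the GWP100 factors, 28 for CH4 and 265 for N2O, are IPCC AR5 values):

```python
import numpy as np

# Illustrative response functions (assumed, not fitted to the compiled data):
# yield saturates with N, N2O grows exponentially, CH4 is flat, consistent
# with the qualitative findings above.
N = np.linspace(0, 250, 251)                   # fertilizer N, kg N/ha
grain = 10.0 * (1 - np.exp(-N / 80.0)) + 2.0   # grain yield, Mg/ha
n2o = 0.3 * np.exp(N / 90.0)                   # kg N2O-N/ha per season
ch4 = np.full_like(N, 90.0)                    # kg CH4/ha per season

# Area-scaled GWP (kg CO2-eq/ha) using GWP100 factors, converting
# N2O-N mass to N2O mass (x 44/28); then yield-scaled GWP.
gwp = 28.0 * ch4 + 265.0 * n2o * (44.0 / 28.0)
ysgwp = gwp / grain                            # kg CO2-eq per Mg grain
N_opt = N[np.argmin(ysgwp)]
```

Because CH4 is flat while yield rises, yield-scaled GWP first falls with N addition and only turns upward once the exponential N2O term dominates, reproducing the interior optimum described above.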

  19. Parameterized data-driven fuzzy model based optimal control of a semi-batch reactor.

    PubMed

    Kamesh, Reddi; Rani, K Yamuna

    2016-09-01

    A parameterized data-driven fuzzy (PDDF) model structure is proposed for semi-batch processes, and its application for optimal control is illustrated. The orthonormally parameterized input trajectories, initial states and process parameters are the inputs to the model, which predicts the output trajectories in terms of Fourier coefficients. Fuzzy rules are formulated based on the signs of a linear data-driven model, while the defuzzification step incorporates a linear regression model to shift the domain from input to output domain. The fuzzy model is employed to formulate an optimal control problem for single rate as well as multi-rate systems. Simulation study on a multivariable semi-batch reactor system reveals that the proposed PDDF modeling approach is capable of capturing the nonlinear and time-varying behavior inherent in the semi-batch system fairly accurately, and the results of operating trajectory optimization using the proposed model are found to be comparable to the results obtained using the exact first principles model, and are also found to be comparable to or better than parameterized data-driven artificial neural network model based optimization results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
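
    The orthonormal input parameterization at the heart of the model can be sketched as follows; the feed-rate profile and basis size are assumed, and the fuzzy model itself is not reproduced:

```python
import numpy as np

# Orthonormal Fourier parameterization of a batch input trajectory.
t = np.linspace(0.0, 1.0, 401)
u = np.where(t < 0.4, 1.0, 0.3) + 0.2 * t          # assumed feed-rate profile

def basis(t, m):
    """First m orthonormal Fourier functions on [0, 1]."""
    cols = [np.ones_like(t)]
    for k in range(1, m):
        fn = np.cos if k % 2 == 0 else np.sin
        cols.append(np.sqrt(2.0) * fn(2 * np.pi * ((k + 1) // 2) * t))
    return np.column_stack(cols)

Phi = basis(t, 9)
coeff = Phi.T @ u * (t[1] - t[0])                  # L2 projection coefficients
u_hat = Phi @ coeff                                # low-dimensional reconstruction
err = np.sqrt(np.mean((u - u_hat) ** 2))
```

The whole trajectory is thereby reduced to a handful of coefficients, which is what lets the model take input trajectories, initial states, and parameters as a fixed-size input vector and return output trajectories in the same coefficient form.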

  20. Maximizing root/rhizosphere efficiency to improve crop productivity and nutrient use efficiency in intensive agriculture of China.

    PubMed

    Shen, Jianbo; Li, Chunjian; Mi, Guohua; Li, Long; Yuan, Lixing; Jiang, Rongfeng; Zhang, Fusuo

    2013-03-01

    Root and rhizosphere research has been conducted for many decades, but the underlying strategy of root/rhizosphere processes and management in intensive cropping systems remains largely to be determined. Improved grain production to meet the food demand of an increasing population has been highly dependent on chemical fertilizer input based on the traditionally assumed notion of 'high input, high output', which results in overuse of fertilizers but ignores the biological potential of roots and the rhizosphere for efficient mobilization and acquisition of soil nutrients. Root exploration of soil nutrient resources and root-induced rhizosphere processes play an important role in controlling nutrient transformation, efficient nutrient acquisition and use, and thus crop productivity. The efficiency of the root/rhizosphere in terms of improved nutrient mobilization, acquisition, and use can be fully exploited by: (1) manipulating root growth (i.e. root development and size, root system architecture, and distribution); (2) regulating rhizosphere processes (i.e. rhizosphere acidification, organic anion and acid phosphatase exudation, localized application of nutrients, rhizosphere interactions, and use of efficient crop genotypes); and (3) optimizing root zone management to synchronize root growth and soil nutrient supply with demand of nutrients in cropping systems. Experiments have shown that root/rhizosphere management is an effective approach to increase both nutrient use efficiency and crop productivity for sustainable crop production. The objectives of this paper are to summarize the principles of root/rhizosphere management and provide an overview of some successful case studies on how to exploit the biological potential of the root system and rhizosphere processes to improve crop productivity and nutrient use efficiency.

  1. Optimal Guaranteed Cost Sliding Mode Control for Constrained-Input Nonlinear Systems With Matched and Unmatched Disturbances.

    PubMed

    Zhang, Huaguang; Qu, Qiuxia; Xiao, Geyang; Cui, Yang

    2018-06-01

    Based on integral sliding mode and approximate dynamic programming (ADP) theory, a novel optimal guaranteed cost sliding mode control is designed for constrained-input nonlinear systems with matched and unmatched disturbances. When the system moves on the sliding surface, the optimal guaranteed cost control problem of sliding mode dynamics is transformed into the optimal control problem of a reformulated auxiliary system with a modified cost function. The ADP algorithm based on single critic neural network (NN) is applied to obtain the approximate optimal control law for the auxiliary system. Lyapunov techniques are used to demonstrate the convergence of the NN weight errors. In addition, the derived approximate optimal control is verified to guarantee the sliding mode dynamics system to be stable in the sense of uniform ultimate boundedness. Some simulation results are presented to verify the feasibility of the proposed control scheme.

  2. Strategies for optimizing algal biology for enhanced biomass production

    DOE PAGES

    Barry, Amanda N.; Starkenburg, Shawn R.; Sayre, Richard T.

    2015-02-02

    One of the most environmentally sustainable ways to produce high-energy density (oils) feed stocks for the production of liquid transportation fuels is from biomass. Photosynthetic carbon capture combined with biomass combustion (point source) and subsequent carbon capture and sequestration has also been proposed in the intergovernmental panel on climate change report as one of the most effective and economical strategies to remediate atmospheric greenhouse gases. To maximize photosynthetic carbon capture efficiency and energy-return-on-investment, we must develop biomass production systems that achieve the greatest yields with the lowest inputs. Numerous studies have demonstrated that microalgae have among the greatest potential for biomass production. This is in part due to the fact that all algal cells are photoautotrophic, they have active carbon concentrating mechanisms to increase photosynthetic productivity, and, unlike plants, all the biomass is harvestable. All photosynthetic organisms, however, convert only a fraction of the solar energy they capture into chemical energy (reduced carbon or biomass). To increase areal carbon capture rates and biomass productivity, it will be necessary to identify the most robust algal strains and increase their biomass production efficiency, often by genetic manipulation. We review recent large-scale efforts to identify the best biomass-producing strains and metabolic engineering strategies to improve areal productivity. In addition, these strategies include optimization of photosynthetic light-harvesting antenna size to increase energy capture and conversion efficiency and the potential development of advanced molecular breeding techniques. To date, these strategies have resulted in up to twofold increases in biomass productivity.

  3. Environmental statistics and optimal regulation

    NASA Astrophysics Data System (ADS)

    Sivak, David; Thomson, Matt

    2015-03-01

    The precision with which an organism can detect its environment, and the timescale for and statistics of environmental change, will affect the suitability of different strategies for regulating protein levels in response to environmental inputs. We propose a general framework--here applied to the enzymatic regulation of metabolism in response to changing nutrient concentrations--to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, and the costs associated with enzyme production. We find: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones.
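
    The Bayesian decision rule in point (ii) can be made concrete with a two-state sketch; the prior, sensor noise, and fitness costs below are all assumed for illustration:

```python
import numpy as np

# A minimal Bayesian decision rule for enzyme expression.
p_high = 0.3                          # prior probability nutrient is abundant (assumed)
mu_lo, mu_hi, sigma = 0.0, 1.0, 0.6   # sensor means and measurement noise (assumed)
benefit, cost = 5.0, 2.0              # fitness benefit if nutrient present; expression cost

def p_posterior(x):
    """Posterior probability of the high-nutrient state given a noisy reading x."""
    like_hi = np.exp(-0.5 * ((x - mu_hi) / sigma) ** 2)
    like_lo = np.exp(-0.5 * ((x - mu_lo) / sigma) ** 2)
    return p_high * like_hi / (p_high * like_hi + (1 - p_high) * like_lo)

def express(x):
    """Express the enzyme iff the expected benefit exceeds the expression cost."""
    return p_posterior(x) * benefit > cost
```

Because the posterior rises monotonically with the reading, this rule reduces to a threshold on x; changing the relative convexity of cost and benefit, or the noise level sigma, shifts the rule between thresholding and graded responses as described above.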

  4. Optimizing Land and Water Use at the Local Level to Enhance Global Food Security through Virtual Resources Trade in the World

    NASA Astrophysics Data System (ADS)

    Cai, X.; Zhang, X.; Zhu, T.

    2014-12-01

    Global food security is constrained by local and regional land and water availability, as well as other agricultural input limitations and inappropriate national and global regulations. In a theoretical context, this study assumes that optimal water and land uses in local food production to maximize food security and social welfare at the global level can be driven by global trade. It follows the context of "virtual resources trade", i.e., utilizing international trade of agricultural commodities to reduce dependency on local resources, and achieves land and water savings in the world. An optimization model based on the partial equilibrium of agriculture is developed for the analysis, including local commodity production and land and water resources constraints, demand by country, and global food market. Through the model, the marginal values (MVs) of social welfare for water and land at the level of so-called food production units (i.e., sub-basins with similar agricultural production conditions) are derived and mapped in the world. In this presentation, we will introduce the model structure, explain the meaning of MVs at the local level and their distribution around the world, and discuss the policy implications for global communities to enhance global food security. In particular, we will examine the economic values of water and land under different world targets of food security (e.g., number of malnourished population or children in a future year). In addition, we will also discuss opportunities for improving the data behind such global modeling exercises.
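
    The marginal values (MVs) are the shadow prices of the land and water constraints in the welfare-maximizing program; a two-crop toy version, with all coefficients assumed, makes this concrete:

```python
import numpy as np
from scipy.optimize import linprog

# Tiny partial-equilibrium sketch: two crops share land and water budgets.
value = np.array([300.0, 500.0])    # welfare value per ha of crop 1, 2 (assumed)
land_use = np.array([1.0, 1.0])     # ha of land per ha planted
water_use = np.array([4.0, 9.0])    # ML of water per ha (assumed)
land_cap, water_cap = 100.0, 600.0  # resource budgets (assumed)

def max_welfare(land, water):
    # linprog minimizes, so negate the welfare objective.
    res = linprog(-value, A_ub=[land_use, water_use], b_ub=[land, water],
                  bounds=[(0, None)] * 2)
    return -res.fun

base = max_welfare(land_cap, water_cap)
# Marginal values (shadow prices) by perturbing each resource budget by one unit.
mv_land = max_welfare(land_cap + 1.0, water_cap) - base
mv_water = max_welfare(land_cap, water_cap + 1.0) - base
```

In the paper's model the same quantities are computed per food production unit, and mapping them reveals where an extra hectare or megalitre buys the most global welfare.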

  5. Towards a globally optimized crop distribution: Integrating water use, nutrition, and economic value

    NASA Astrophysics Data System (ADS)

    Davis, K. F.; Seveso, A.; Rulli, M. C.; D'Odorico, P.

    2016-12-01

    Human demand for crop production is expected to increase substantially in the coming decades as a result of population growth, richer diets and biofuel use. In order for food production to keep pace, unprecedented amounts of resources - water, fertilizers, energy - will be required. This has led to calls for 'sustainable intensification' in which yields are increased on existing croplands while seeking to minimize impacts on water and other agricultural resources. Recent studies have quantified aspects of this, showing that there is a large potential to improve crop yields and increase harvest frequencies to better meet human demand. Though promising, both solutions would necessitate large additional inputs of water and fertilizer in order to be achieved under current technologies. However, the question of whether the current distribution of crops is, in fact, the best for realizing sustainable production has not been considered to date. To this end, we ask: Is it possible to increase crop production and economic value while minimizing water demand by simply growing crops where soil and climate conditions are best suited? Here we use maps of yields and evapotranspiration for 14 major food crops to identify differences between current crop distributions and where they can most suitably be planted. By redistributing crops across currently cultivated lands, we determine the potential improvements in calorie (+12%) and protein (+51%) production, economic output (+41%) and water demand (-5%). This approach can also incorporate the impact of future climate on cropland suitability, and as such, be used to provide optimized cropping patterns under climate change. Thus, our study provides a novel tool towards achieving sustainable intensification that can be used to recommend optimal crop distributions in the face of a changing climate while simultaneously accounting for food security, freshwater resources, and livelihoods.

  6. Robust Controller for Turbulent and Convective Boundary Layers

    DTIC Science & Technology

    2006-08-01

    filter and an optimal regulator. The Kalman filter equation and the optimal regulator equation corresponding to the state-space equations, (2.20), are...separate steady-state algebraic Riccati equations. The Kalman filter is used here as a state observer rather than as an estimator since no noises are...2001) which will not be repeated here. For robustness, in the design, the Kalman filter input matrix G has been set equal to the control input

  7. Theoretic aspects of the identification of the parameters in the optimal control model

    NASA Technical Reports Server (NTRS)

    Vanwijk, R. A.; Kok, J. J.

    1977-01-01

    The identification of the parameters of the optimal control model from input-output data of the human operator is considered. Accepting the basic structure of the model as a cascade of a full-order observer and a feedback law, and suppressing the inherent optimality of the human controller, the parameters to be identified are the feedback matrix, the observer gain matrix, and the intensity matrices of the observation noise and the motor noise. The identification of the parameters is a statistical problem, because the system and output are corrupted by noise, and therefore the solution must be based on the statistics (probability density function) of the input and output data of the human operator. However, based on the statistics of the input-output data of the human operator, no distinction can be made between the observation and the motor noise, which shows that the model suffers from overparameterization.

  8. Optimized mode-field adapter for low-loss fused fiber bundle signal and pump combiners

    NASA Astrophysics Data System (ADS)

    Koška, Pavel; Baravets, Yauhen; Peterka, Pavel; Písařík, Michael; Bohata, Jan

    2015-03-01

    In this contribution we report a novel mode-field adapter incorporated inside a bundled tapered pump and signal combiner. Pump and signal combiners are crucial components of contemporary double-clad high-power fiber lasers. We used advanced optimization techniques to match the combiner to a single-mode core simultaneously on input and output and to minimize losses in the combiner's signal branch. We designed two arrangements of the combiner's mode-field adapters. Our numerical simulations estimate losses in the signal branches of the optimized combiners of 0.23 dB for the first design and 0.16 dB for the second, for an SMF-28 input fiber and an SMF-28-matched output double-clad fiber at a wavelength of 2000 nm. The splice losses of the actual combiner are expected to be even lower thanks to dopant diffusion during the splicing process.

  9. Key issues in life cycle assessment of ethanol production from lignocellulosic biomass: Challenges and perspectives.

    PubMed

    Singh, Anoop; Pant, Deepak; Korres, Nicholas E; Nizami, Abdul-Sattar; Prasad, Shiv; Murphy, Jerry D

    2010-07-01

    Progressive depletion of conventional fossil fuels with increasing energy consumption and greenhouse gas (GHG) emissions have led to a move towards renewable and sustainable energy sources. Lignocellulosic biomass is available in massive quantities and provides enormous potential for bioethanol production. However, to ascertain optimal biofuel strategies, it is necessary to take into account environmental impacts from cradle to grave. Life cycle assessment (LCA) techniques allow detailed analysis of material and energy fluxes on regional and global scales. This includes indirect inputs to the production process and associated wastes and emissions, and the downstream fate of products in the future. At the same time if not used properly, LCA can lead to incorrect and inappropriate actions on the part of industry and/or policy makers. This paper aims to list key issues for quantifying the use of resources and releases to the environment associated with the entire life cycle of lignocellulosic bioethanol production. Copyright 2009 Elsevier Ltd. All rights reserved.

  10. Design of experiment analysis of CO2 dielectric barrier discharge conditions on CO production

    NASA Astrophysics Data System (ADS)

    Becker, Markus; Ponduri, Srinath; Engeln, Richard; van de Sanden, Richard; Loffhagen, Detlef

    2016-09-01

    Dielectric barrier discharges (DBD) are frequently used for the generation of CO from CO2, which is of particular interest for syngas production. It has been found by means of fluid modelling that the CO2 conversion frequency in a CO2 DBD depends linearly on the specific energy input (SEI), while the energy efficiency of CO production is only weakly dependent on the SEI. Here, the same numerical model is applied to study systematically the influence of gas pressure, applied voltage amplitude and frequency on the CO2 conversion frequency and the energy efficiency of CO production, based on a 2-level 3-factor full factorial experimental design. It is found that the operating conditions of the CO2 DBD for CO production can be chosen to favour either optimal throughput or better energy efficiency. This work was partly supported by the German Research Foundation within the Collaborative Research Centre Transregio 24.

  11. Global and regional phosphorus budgets in agricultural systems and their implications for phosphorus-use efficiency

    NASA Astrophysics Data System (ADS)

    Lun, Fei; Liu, Junguo; Ciais, Philippe; Nesme, Thomas; Chang, Jinfeng; Wang, Rong; Goll, Daniel; Sardans, Jordi; Peñuelas, Josep; Obersteiner, Michael

    2018-01-01

    The application of phosphorus (P) fertilizer to agricultural soils increased by 3.2 % annually from 2002 to 2010. We quantified in detail the P inputs and outputs of cropland and pasture and the P fluxes through human and livestock consumers of agricultural products on global, regional, and national scales from 2002 to 2010. Globally, half of the total P inputs into agricultural systems accumulated in agricultural soils during this period, with the rest lost to bodies of water through complex flows. Global P accumulation in agricultural soil increased from 2002 to 2010 despite decreases in 2008 and 2009, and the P accumulation occurred primarily in cropland. Despite the global increase in soil P, 32 % of the world's cropland and 43 % of the pasture had soil P deficits. Increasing soil P deficits were found for African cropland vs. increasing P accumulation in eastern Asia. European and North American pasture had a soil P deficit because the continuous removal of biomass P by grazing exceeded P inputs. International trade played a significant role in P redistribution among countries through the flows of P in fertilizer and food among countries. Based on country-scale budgets and trends we propose policy options to potentially mitigate regional P imbalances in agricultural soils, particularly by optimizing the use of phosphate fertilizer and the recycling of waste P. The trend of the increasing consumption of livestock products will require more P inputs to the agricultural system, implying a low P-use efficiency and aggravating P-stock scarcity in the future. The global and regional phosphorus budgets and their PUEs in agricultural systems are publicly available at https://doi.pangaea.de/10.1594/PANGAEA.875296.

  12. Structural tailoring of advanced turboprops (STAT): User's manual

    NASA Technical Reports Server (NTRS)

    Brown, K. W.

    1991-01-01

    This user's manual describes the Structural Tailoring of Advanced Turboprops program. It contains instructions to prepare the input for optimization, blade geometry and analysis, geometry generation, and finite element program control. In addition, a sample input file is provided as well as a section describing special applications (i.e., non-standard input).

  13. Intensification of ion exchange desorption of thiamine diphosphate by low-powered ultrasound.

    PubMed

    Pinchukova, Natalia A; Voloshko, Alexander Y; Merko, Maria A; Bondarenko, Yana A; Chebanov, Valentin A

    2018-03-01

    The process of ultrasound-assisted ion-exchange desorption of cocarboxylase (thiamine diphosphate (TDP)) from a strongly acidic cation resin was studied. Kinetics studies revealed that ultrasound accelerates TDP desorption threefold. The optimal desorption parameters, viz. ultrasonic power input, sonication time, eluent/resin ratio and eluent (ammonium acetate buffer) concentration, were established as 15 mW/cm³, 20 min, 1:1 and 1 M, respectively. The resin stability studies showed that the optimal ultrasonic power was an order of magnitude below the resin degradation threshold, which ensures durable and efficient resin use in production. The resin sorption capacity remained unchanged even after 20 cycles of TDP sorption, ultrasonic desorption and resin regeneration. The recovery ratio of TDP was shown to increase non-linearly with decreasing resin saturation factor, which can be attributed to diffusion limitations occurring during desorption. The optimal resin loading corresponding to more than 90 per cent TDP recovery was found to be at the level of 10 per cent of the maximal sorption capacity. The study revealed a 4-5-fold increase in the concentrations of the recovered solutions, which, together with the shortened process times, should result in considerable energy savings in downstream operations at production scale. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Microcomputer Simulation of a Fourier Approach to Optical Wave Propagation

    DTIC Science & Technology

    1992-06-01

    and transformed input in transform domain). 44 Figure 21. SHFTOUTPUT1 ( inverse transform of product of Bessel filter and transformed input). . . . 44...Figure 22. SHFT OUTPUT2 ( inverse transform of product of ,derivative filter and transformed input).. 45 Figure 23. •tIFT OUTPUT (sum of SHFTOUTPUT1...52 Figure 33. SHFT OUTPUT1 at time slice 1 ( inverse transform of product of Bessel filter and transformed input) .... ............. ... 53

  15. Optimal modified tracking performance for MIMO networked control systems with communication constraints.

    PubMed

    Wu, Jie; Zhou, Zhu-Jun; Zhan, Xi-Sheng; Yan, Huai-Cheng; Ge, Ming-Feng

    2017-05-01

    This paper investigates the optimal modified tracking performance of multi-input multi-output (MIMO) networked control systems (NCSs) with packet dropouts and bandwidth constraints. Some explicit expressions are obtained by using co-prime factorization and the spectral decomposition technique. The obtained results show that the optimal modified tracking performance is related to the intrinsic properties of a given plant such as non-minimum phase (NMP) zeros, unstable poles, and their directions. Furthermore, the modified factor, packet dropout probability and bandwidth also impact the optimal modified tracking performance of the NCSs. The optimal modified tracking performance with channel input power constraint is obtained by searching over all stabilizing two-parameter compensators. Finally, some typical examples are given to illustrate the effectiveness of the theoretical results. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  16. A new approach of optimal control for a class of continuous-time chaotic systems by an online ADP algorithm

    NASA Astrophysics Data System (ADS)

    Song, Rui-Zhuo; Xiao, Wen-Dong; Wei, Qing-Lai

    2014-05-01

    We develop an online adaptive dynamic programming (ADP) based optimal control scheme for continuous-time chaotic systems. The idea is to use the ADP algorithm to obtain the optimal control input that makes the performance index function reach an optimum. The expression of the performance index function for the chaotic system is first presented. The online ADP algorithm is presented to achieve optimal control. In the ADP structure, neural networks are used to construct a critic network and an action network, which can obtain an approximate performance index function and the control input, respectively. It is proven that the critic parameter error dynamics and the closed-loop chaotic systems are uniformly ultimately bounded exponentially. Our simulation results illustrate the performance of the established optimal control method.

  17. The Efficiency and Budgeting of Public Hospitals: Case Study of Iran

    PubMed Central

    Yusefzadeh, Hasan; Ghaderi, Hossein; Bagherzade, Rafat; Barouni, Mohsen

    2013-01-01

    Background Hospitals are the most costly and important components of any health care system, so it is important to know their economic values, pay attention to their efficiency and consider factors affecting them. Objective The aim of this study was to assess the technical, scale and economic efficiency of hospitals in the West Azerbaijan province of Iran, for which Data Envelopment Analysis (DEA) was used to propose a model for operational budgeting. Materials and Methods This descriptive-analytical study was conducted in 2009 and considered three inputs and two outputs. DEAP 2.1 software was used for data analysis. Slack and radial movements and surplus of inputs were calculated for selected hospitals. Finally, a model was proposed for performance-based budgeting of hospitals and health sectors using the DEA technique. Results The average scores of technical efficiency, pure technical efficiency (managerial efficiency) and scale efficiency of hospitals were 0.584, 0.782 and 0.771, respectively. In other words, the capacity of efficiency promotion in hospitals without any increase in costs and with the same amount of inputs was about 41.5%. Only four hospitals among all hospitals had the maximum level of technical efficiency. Moreover, surplus production factors were evident in these hospitals. Conclusions Reduction of surplus production factors through comprehensive planning based on the results of the Data Envelopment Analysis can play a major role in cost reduction of hospitals and health sectors. In hospitals with a technical efficiency score of less than one, the original and projected values of inputs were different, resulting in a surplus. Hence, these hospitals should reduce their values of inputs to achieve maximum efficiency and optimal performance. The results of this method can serve hospitals as a benchmark for making decisions about resource allocation, linking budgets to performance results, and controlling and improving hospital performance.
PMID:24349726
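    The input-oriented DEA scoring used in this study can be sketched in a few lines of linear programming. The hospital data below are invented toy numbers, and the standard CCR envelopment form is assumed rather than the authors' exact DEAP 2.1 configuration:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y):
    """Input-oriented CCR efficiency score for each DMU (here: hospital).

    X: (n, m) array of inputs, Y: (n, s) array of outputs."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for k in range(n):
        # decision variables: [theta, lambda_1, ..., lambda_n]
        c = np.r_[1.0, np.zeros(n)]
        # input constraints:  sum_j lambda_j * x_j <= theta * x_k
        A_in = np.hstack([-X[k].reshape(m, 1), X.T])
        # output constraints: sum_j lambda_j * y_j >= y_k
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[k]],
                      bounds=[(0, None)] * (n + 1))
        scores.append(res.fun)
    return np.array(scores)

# Toy data: one input (e.g. beds), one output (e.g. admissions)
X_toy = np.array([[2.0], [4.0], [4.0]])
Y_toy = np.array([[2.0], [4.0], [2.0]])
scores = dea_ccr_efficiency(X_toy, Y_toy)
```

    Each score is the factor by which a unit could radially shrink its inputs while still producing its outputs; a score of 1 marks the efficient frontier, and anything below 1 indicates the input surplus discussed in the abstract.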

  18. Neural Network Optimization of Ligament Stiffnesses for the Enhanced Predictive Ability of a Patient-Specific, Computational Foot/Ankle Model.

    PubMed

    Chande, Ruchi D; Wayne, Jennifer S

    2017-09-01

    Computational models of diarthrodial joints serve to inform the biomechanical function of these structures, and as such, must be supplied appropriate inputs for performance that is representative of actual joint function. Inputs for these models are sourced from both imaging modalities as well as literature. The latter is often the source of mechanical properties for soft tissues, like ligament stiffnesses; however, such data are not always available for all the soft tissues nor is it known for patient-specific work. In the current research, a method to improve the ligament stiffness definition for a computational foot/ankle model was sought with the greater goal of improving the predictive ability of the computational model. Specifically, the stiffness values were optimized using artificial neural networks (ANNs); both feedforward and radial basis function networks (RBFNs) were considered. Optimal networks of each type were determined and subsequently used to predict stiffnesses for the foot/ankle model. Ultimately, the predicted stiffnesses were considered reasonable and resulted in enhanced performance of the computational model, suggesting that artificial neural networks can be used to optimize stiffness inputs.
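    As an illustration of the feedforward-network idea (not the authors' model, whose inputs and stiffness targets are patient-specific), a minimal single-hidden-layer regressor trained by full-batch gradient descent on invented data might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented surrogate data: map a 3-feature model output (e.g. joint
# displacements) to a scalar stiffness-like target.
X = rng.uniform(-1, 1, (200, 3))
true_w = np.array([1.5, -2.0, 0.7])
y = np.tanh(X @ true_w) + 0.01 * rng.standard_normal(200)

# One hidden layer of 8 tanh units, trained by gradient descent on MSE.
W1 = rng.standard_normal((3, 8)) * 0.5
b1 = np.zeros(8)
W2 = rng.standard_normal(8) * 0.5
b2 = 0.0
lr = 0.05
losses = []
for _ in range(500):
    H = np.tanh(X @ W1 + b1)          # hidden activations
    pred = H @ W2 + b2
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # backpropagated gradients of the mean-squared error
    gW2 = H.T @ err / len(X)
    gb2 = err.mean()
    gH = np.outer(err, W2) * (1 - H ** 2)
    gW1 = X.T @ gH / len(X)
    gb1 = gH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

    In the paper's setting, the trained network would then be queried for the ligament stiffnesses that best reproduce measured joint behavior, rather than fit to a known analytic target as here.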

  19. Multi-response optimization of process parameters for GTAW process in dissimilar welding of Incoloy 800HT and P91 steel by using grey relational analysis

    NASA Astrophysics Data System (ADS)

    vellaichamy, Lakshmanan; Paulraj, Sathiya

    2018-02-01

    This paper reports the dissimilar welding of Incoloy 800HT to P91 steel using the gas tungsten arc welding (GTAW) process. This material combination is used in nuclear power plant and aerospace applications because Incoloy 800HT possesses good corrosion and oxidation resistance, while P91 possesses high-temperature strength and creep resistance. The work addresses multi-objective optimization using grey relational analysis (GRA) with a 9CrMoV-N filler material. The experiments were conducted on an L9 orthogonal array. The input parameters are welding current, voltage and speed; the output responses are tensile strength, hardness and toughness, and GRA is used to optimize the input parameters against these multiple outputs. The optimal parameter combination was determined to be A2B1C1, i.e., a welding current of 120 A, a voltage of 16 V and a welding speed of 0.94 mm/s. The mechanical properties of the welds with the best and least grey relational grades were validated against their metallurgical characteristics.
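    The GRA ranking step itself is mechanical: normalize each response, measure the deviation from an ideal reference sequence, convert to grey relational coefficients, and average. A sketch with invented response values (all larger-the-better) follows:

```python
import numpy as np

def grey_relational_grade(R, zeta=0.5):
    """Grey relational grade per alternative.

    R: (alternatives x responses) matrix, all responses larger-the-better.
    zeta: distinguishing coefficient, conventionally 0.5."""
    # 1) normalize each response column to [0, 1]
    Z = (R - R.min(axis=0)) / (R.max(axis=0) - R.min(axis=0))
    # 2) deviation from the ideal (all-ones) reference sequence
    delta = np.abs(1.0 - Z)
    # 3) grey relational coefficient
    xi = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    # 4) grade = mean coefficient across responses
    return xi.mean(axis=1)

# Invented trial results: (tensile strength, hardness, toughness)
R = np.array([[520.0, 210.0, 48.0],
              [545.0, 215.0, 52.0],
              [530.0, 208.0, 50.0]])
grades = grey_relational_grade(R)
best = int(np.argmax(grades))   # alternative closest to the ideal sequence
```

    In a real L9 study the rows would be the nine orthogonal-array trials, and the highest-grade row identifies the optimal parameter combination.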

  20. SEAPODYM-LTL: a parsimonious zooplankton dynamic biomass model

    NASA Astrophysics Data System (ADS)

    Conchon, Anna; Lehodey, Patrick; Gehlen, Marion; Titaud, Olivier; Senina, Inna; Séférian, Roland

    2017-04-01

    Mesozooplankton organisms are of critical importance for the understanding of early life history of most fish stocks, as well as the nutrient cycles in the ocean. Ongoing climate change and the need for improved approaches to the management of living marine resources have driven recent advances in zooplankton modelling. The classical modeling approach tends to describe the whole biogeochemical and plankton cycle with increasing complexity. We propose here a different and parsimonious zooplankton dynamic biomass model (SEAPODYM-LTL) that is cost efficient and can be advantageously coupled with primary production estimated either from satellite-derived ocean color data or biogeochemical models. In addition, the adjoint code of the model has been developed, allowing a robust optimization approach for estimating the few parameters of the model. In this study, we run the first optimization experiments using a global database of climatological zooplankton biomass data, and we make a comparative analysis to assess the importance of resolution and primary production inputs on model fit to observations. We also compare SEAPODYM-LTL outputs to those produced by a more complex biogeochemical model (PISCES) sharing the same physical forcings.

  1. Anthropogenic phosphorus (P) inputs to a river basin and their impacts on P fluxes along its upstream-downstream continuum

    NASA Astrophysics Data System (ADS)

    Zhang, Wangshou; Swaney, Dennis; Hong, Bongghi; Howarth, Robert

    2017-04-01

    Phosphorus (P) originating from anthropogenic sources as a pollutant of surface waters has been an environmental issue for decades because of the well-known role of P in eutrophication. Human activities, such as food production and rapid urbanization, have been linked to increased P inputs, which are often accompanied by corresponding increases in riverine P export. However, uneven distributions of anthropogenic P inputs along watersheds from the headwaters to downstream reaches can result in significantly different contributions to the riverine P fluxes of a receiving water body. So far, there is still very little scientific understanding of anthropogenic P inputs and their impacts on riverine flux in river reaches along the upstream to downstream continuum. Here, we investigated P budgets in a series of nested watersheds draining into Hongze Lake of China, and developed a simple empirical function to describe the relationship between anthropogenic inputs and riverine TP fluxes. The results indicated that an average of 1.1% of anthropogenic P inputs was exported into rivers, with most of the remainder retained in the watershed landscape over the period studied. Fertilizer application was the main contributor of P loading to the lake (55% of total loads), followed by legacy P stock (30%), food and feed P inputs (12%) and non-food P inputs (4%). From 60% to 89% of the riverine TP loads generated from various locations within this basin were ultimately transported into the downstream receiving lake, with an average rate of 1.86 tons P km-1 retained annually in the main stem of the inflowing river. Our results highlight that in-stream processes can significantly buffer the riverine P loading to the downstream receiving lake. An integrated P management strategy considering the influence of anthropogenic inputs and hydrological interactions is required to assess and optimize P management for protecting fresh waters.

  2. 19 CFR 351.523 - Upstream subsidies.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... the input product and the producer of the subject merchandise are affiliated; (B) The price for the subsidized input product is lower than the price that the producer of the subject merchandise otherwise would... government sets the price of the input product so as to guarantee that the benefit provided with respect to...

  3. Design of a 0.13 µm SiGe Limiting Amplifier with 14.6 THz Gain-Bandwidth-Product

    NASA Astrophysics Data System (ADS)

    Park, Sehoon; Du, Xuan-Quang; Grözing, Markus; Berroth, Manfred

    2017-09-01

    This paper presents the design of a limiting amplifier with 1-to-3 fan-out implementation in a 0.13 µm SiGe BiCMOS technology and gives a detailed guideline to determine the circuit parameters of the amplifier for optimum high-frequency performance based on simplified gain estimations. The proposed design uses a Cherry-Hooper topology for bandwidth enhancement and is optimized for maximum group delay flatness to minimize phase distortion of the input signal. With regard to a high integration density and a small chip area, the design employs no passive inductors which might be used to boost the circuit bandwidth with inductive peaking. On a RLC-extracted post-layout simulation level, the limiting amplifier exhibits a gain-bandwidth-product of 14.6 THz with 56.6 dB voltage gain and 21.5 GHz 3 dB bandwidth at a peak-to-peak input voltage of 1.5 mV. The group delay variation within the 3 dB bandwidth is less than 0.5 ps and the power dissipation at a power supply voltage of 3 V including output drivers is 837 mW.
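    The reported gain-bandwidth product can be checked directly from the quoted figures; since 56.6 dB is a voltage gain, the linear factor uses the 20·log10 convention:

```python
gain_db = 56.6                    # reported voltage gain
bw_hz = 21.5e9                    # reported 3 dB bandwidth
gain_lin = 10 ** (gain_db / 20)   # voltage gain in V/V (~676)
gbp = gain_lin * bw_hz            # gain-bandwidth product in Hz
print(f"GBP = {gbp / 1e12:.1f} THz")   # ≈ 14.5 THz
```

    The result agrees with the stated 14.6 THz to within rounding of the quoted gain and bandwidth.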

  4. Hybrid life-cycle environmental and cost inventory of sewage sludge treatment and end-use scenarios: a case study from China.

    PubMed

    Murray, Ashley; Horvath, Arpad; Nelson, Kara L

    2008-05-01

    Sewage sludge management poses environmental, economic, and political challenges for wastewater treatment plants and municipalities around the globe. To facilitate more informed and sustainable decision making, this study used life-cycle inventory (LCI) to expand upon previous process-based LCIs of sewage sludge treatment technologies. Additionally, the study evaluated an array of productive end-use options for treated sewage sludge, such as use as fertilizer or as an input into construction materials, to determine how the sustainability of traditional manufacturing processes changes with sludge as a replacement for other raw inputs. The inclusion of the life cycle of necessary inputs (such as lime) used in sludge treatment significantly impacts the sustainability profiles of different treatment and end-use schemes. Overall, anaerobic digestion is generally the optimal treatment technology, whereas incineration, particularly if coal-fired, is the most environmentally and economically costly. With respect to sludge end use, offsets are greatest for the use of sludge as fertilizer, but all of the productive uses of sludge can improve the sustainability of conventional manufacturing practices. The results are intended to help inform and guide decisions about sludge handling for existing wastewater treatment plants and those that are still in the planning phase in cities around the world. Although additional factors must be considered when selecting a sludge treatment and end-use scheme, this study highlights how a systems approach to planning can contribute significantly to improving overall environmental sustainability.

  5. Workflow Optimization for Tuning Prostheses with High Input Channel

    DTIC Science & Technology

    2017-10-01

    AWARD NUMBER: W81XWH-16-1-0767. TITLE: Workflow Optimization for Tuning Prostheses with High Input Channel. PRINCIPAL INVESTIGATOR: Daniel Merrill. Distribution Unlimited. The report addresses Specific Aim 1 by driving a commercially available two-DoF wrist and a single-DoF hand; the high-level control system will provide analog signals. The views, opinions and/or findings contained in this report are those of the author(s) and should not be construed as an official Department position.

  6. A risk analysis approach applied to field surveillance in utility meters in legal metrology

    NASA Astrophysics Data System (ADS)

    Rodrigues Filho, B. A.; Nonato, N. S.; Carvalho, A. D.

    2018-03-01

    Field surveillance represents the level of control in metrological supervision responsible for checking the conformity of measuring instruments in service. Utility meters represent the majority of measuring instruments produced by notified bodies under self-verification in Brazil. They play a major role in the economy, since electricity, gas and water are the main inputs to industries in their production processes. Thus, to optimize the resources allocated to control these devices, the present study applied a risk analysis in order to identify, among the 11 manufacturers notified for self-verification, the instruments that demand field surveillance.

  7. The use of singular value gradients and optimization techniques to design robust controllers for multiloop systems

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Mukhopadhyay, V.

    1983-01-01

    A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two-input/two-output drone flight control system.
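    The robustness measure described here, the minimum singular value of the return difference matrix I + L(jω) at the plant input, is straightforward to evaluate numerically. The state-space plant and static gain below are invented stand-ins, not the paper's drone model:

```python
import numpy as np

def min_sv_return_difference(A, B, C, D, K, omegas):
    """sigma_min of I + L(jw) with the loop broken at the plant input,
    L = K * G and G(s) = C (sI - A)^-1 B + D."""
    n = A.shape[0]
    m = B.shape[1]
    sig = []
    for w in omegas:
        G = C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D
        L = K @ G
        # singular values come back sorted descending; take the smallest
        sig.append(np.linalg.svd(np.eye(m) + L, compute_uv=False)[-1])
    return np.array(sig)

# Hypothetical stable 2x2 plant with a static feedback gain
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.eye(2); C = np.eye(2); D = np.zeros((2, 2))
K = 0.5 * np.eye(2)
sv = min_sv_return_difference(A, B, C, D, K, np.logspace(-2, 2, 50))
```

    In the paper's framework, a numerical optimizer would then adjust controller design variables to push the worst-case (smallest) of these values upward, using analytic gradients of the singular values.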

  8. Uncertainty in BMP evaluation and optimization for watershed management

    NASA Astrophysics Data System (ADS)

    Chaubey, I.; Cibin, R.; Sudheer, K.; Her, Y.

    2012-12-01

    Use of computer simulation models has increased substantially to support watershed management decisions and to develop strategies for water quality improvements. These models are often used to evaluate the potential benefits of various best management practices (BMPs) for reducing losses of pollutants from source areas into receiving waterbodies. Similarly, use of simulation models in optimizing the selection and placement of best management practices under single (maximization of crop production or minimization of pollutant transport) and multiple objective functions has increased recently. One limitation of the currently available assessment and optimization approaches is that the BMP strategies are considered deterministic. Uncertainties in input data (e.g. measured precipitation, streamflow, sediment, nutrient and pesticide losses, and land use) and model parameters may result in considerable uncertainty in watershed response under various BMP options. We have developed and evaluated options to include uncertainty in BMP evaluation and optimization for watershed management. We have also applied these methods to evaluate uncertainty in ecosystem services from mixed land use watersheds. In this presentation, we will discuss methods to quantify uncertainties in BMP assessment and optimization solutions due to uncertainties in model inputs and parameters. We used a watershed model (Soil and Water Assessment Tool, or SWAT) to simulate the hydrology and water quality in a mixed land use watershed located in the Midwest USA. The SWAT model was also used to represent various BMPs in the watershed needed to improve water quality. SWAT model parameters, land use change parameters, and climate change parameters were considered uncertain. It was observed that model parameters, land use and climate changes resulted in considerable uncertainties in BMP performance in reducing P, N, and sediment loads. In addition, climate change scenarios also affected uncertainties in SWAT-simulated crop yields. Considerable uncertainties in the net cost and the water quality improvements resulted from uncertainties in land use, climate change, and model parameter values.
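    The flavor of such an uncertainty analysis can be sketched with Monte Carlo sampling over a toy export-coefficient surrogate; this is not SWAT, and all distributions below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000

# Toy surrogate for a watershed model: pollutant load = runoff * concentration,
# with a BMP that removes an uncertain fraction of the load.
runoff = rng.lognormal(mean=2.0, sigma=0.3, size=N)    # uncertain driver
conc = rng.normal(50.0, 10.0, size=N).clip(min=1.0)    # uncertain parameter
bmp_eff = rng.uniform(0.3, 0.6, size=N)                # uncertain BMP removal

load_no_bmp = runoff * conc
load_bmp = load_no_bmp * (1.0 - bmp_eff)
reduction = 1.0 - load_bmp / load_no_bmp               # achieved removal, per draw

# Uncertainty band on the post-BMP load instead of a single deterministic value
lo, hi = np.percentile(load_bmp, [5, 95])
```

    The point of the exercise mirrors the abstract: instead of one deterministic BMP performance number, the analysis yields a distribution, and decisions can be made against its spread.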

  9. Variational theory of the tapered impedance transformer

    NASA Astrophysics Data System (ADS)

    Erickson, Robert P.

    2018-02-01

    Superconducting amplifiers are key components of modern quantum information circuits. To minimize information loss and reduce oscillations, a tapered impedance transformer of new design is needed at the input/output for compliance with other 50 Ω components. We show that an optimal tapered transformer of length ℓ, joining the amplifier to the input line, can be constructed using a variational principle applied to the linearized Riccati equation describing the voltage reflection coefficient of the taper. For an incident signal of frequency ωo, the variational solution results in an infinite set of equivalent optimal transformers, each with the same form for the reflection coefficient, each able to eliminate input-line reflections. For the special case of optimal lossless transformers, the group velocity vg is shown to be constant, with characteristic impedance dependent on frequency ωc = πvg/ℓ. While these solutions inhibit input-line reflections only for frequency ωo, a subset of optimal lossless transformers with ωo significantly detuned from ωc does exhibit a wide bandpass. Specifically, by choosing ωo → 0 (ωo → ∞), we obtain a subset of optimal low-pass (high-pass) lossless tapers with bandwidth (0, ˜ ωc) [(˜ωc, ∞)]. From the subset of solutions, we derive both the wide-band low-pass and high-pass transformers, and we discuss the extent to which they can be realized given fabrication constraints. In particular, we demonstrate the superior reflection response of our high-pass transformer when compared to other taper designs. Our results have application to amplifiers, transceivers, and other components sensitive to impedance mismatch.

  10. A policy iteration approach to online optimal control of continuous-time constrained-input systems.

    PubMed

    Modares, Hamidreza; Naghibi Sistani, Mohammad-Bagher; Lewis, Frank L

    2013-09-01

    This paper is an effort towards developing an online learning algorithm to find the optimal control solution for continuous-time (CT) systems subject to input constraints. The proposed method is based on the policy iteration (PI) technique which has recently evolved as a major technique for solving optimal control problems. Although a number of online PI algorithms have been developed for CT systems, none of them take into account the input constraints caused by actuator saturation. In practice, however, ignoring these constraints leads to performance degradation or even system instability. In this paper, to deal with the input constraints, a suitable nonquadratic functional is employed to encode the constraints into the optimization formulation. Then, the proposed PI algorithm is implemented on an actor-critic structure to solve the Hamilton-Jacobi-Bellman (HJB) equation associated with this nonquadratic cost functional in an online fashion. That is, two coupled neural network (NN) approximators, namely an actor and a critic are tuned online and simultaneously for approximating the associated HJB solution and computing the optimal control policy. The critic is used to evaluate the cost associated with the current policy, while the actor is used to find an improved policy based on information provided by the critic. Convergence to a close approximation of the HJB solution as well as stability of the proposed feedback control law are shown. Simulation results of the proposed method on a nonlinear CT system illustrate the effectiveness of the proposed approach. Copyright © 2013 ISA. All rights reserved.
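    The constrained-input continuous-time algorithm requires neural-network approximators, but the underlying policy-iteration loop, evaluate the current policy (the critic's job), then improve it greedily (the actor's job), can be illustrated on a small discrete MDP with made-up dynamics:

```python
import numpy as np

# Tiny 3-state, 2-action MDP: P[a][s, s'] transition probabilities, R[s, a] rewards
P = [np.array([[0.9, 0.1, 0.0],
               [0.0, 0.9, 0.1],
               [0.1, 0.0, 0.9]]),
     np.array([[0.1, 0.9, 0.0],
               [0.0, 0.1, 0.9],
               [0.9, 0.0, 0.1]])]
R = np.array([[0.0, 1.0],
              [0.5, 0.0],
              [1.0, 0.2]])
gamma = 0.9
policy = np.zeros(3, dtype=int)            # arbitrary initial policy

for _ in range(20):
    # policy evaluation (critic): solve (I - gamma * P_pi) V = R_pi exactly
    P_pi = np.array([P[policy[s]][s] for s in range(3)])
    R_pi = R[np.arange(3), policy]
    V = np.linalg.solve(np.eye(3) - gamma * P_pi, R_pi)
    # policy improvement (actor): act greedily with respect to Q
    Q = np.stack([R[:, a] + gamma * P[a] @ V for a in range(2)], axis=1)
    new_policy = Q.argmax(axis=1)
    if np.array_equal(new_policy, policy): # converged to the optimal policy
        break
    policy = new_policy
```

    In the paper, the exact linear solve is replaced by a critic network trained online, the greedy step by an actor network, and the quadratic cost by a nonquadratic functional that encodes the actuator saturation limits.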

  11. Ring rolling process simulation for microstructure optimization

    NASA Astrophysics Data System (ADS)

    Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio

    2017-10-01

    Metal undergoes complicated microstructural evolution during Hot Ring Rolling (HRR), which determines the quality, mechanical properties and life of the ring formed. One of the principal microstructure properties which most influences the structural performance of forged components is the average grain size. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components to be used in aerospace applications. In particular, the influence of process input parameters (feed rate of the mandrel and angular velocity of the driver roll) on microstructural and geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR has been developed in SFTC DEFORM V11, taking into account also the microstructural development of the material used (the nickel superalloy Waspalloy). The Finite Element (FE) model has been used to formulate a proper optimization problem. The optimization procedure has been developed in order to find the combination of process parameters which minimizes the average grain size. The Response Surface Methodology (RSM) has been used to find the relationship between input and output parameters, by using the exact values of the output parameters in the control points of a design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. Then, an optimization procedure based on Genetic Algorithms has been applied. At the end, the minimum value of average grain size with respect to the input parameters has been found.
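    The RSM step can be sketched directly: fit a full quadratic surface to sampled responses, then search it for the minimizer. Here an invented analytic surrogate stands in for the expensive DEFORM runs, and a dense grid search stands in for the genetic algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical design points: (mandrel feed rate, driver roll speed), scaled to [0, 1]
pts = rng.uniform(0, 1, (30, 2))

def fem_surrogate(x):
    """Invented stand-in for the FEM-predicted average grain size (um)."""
    return 20 + 30 * (x[..., 0] - 0.7) ** 2 + 25 * (x[..., 1] - 0.3) ** 2

grain = fem_surrogate(pts)

# Fit a full quadratic response surface: 1, x1, x2, x1^2, x2^2, x1*x2
A = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1],
                     pts[:, 0] ** 2, pts[:, 1] ** 2, pts[:, 0] * pts[:, 1]])
coef, *_ = np.linalg.lstsq(A, grain, rcond=None)

# Minimize the fitted surface on a dense grid (grid search replaces the GA here)
g = np.linspace(0, 1, 201)
X1, X2 = np.meshgrid(g, g)
S = (coef[0] + coef[1] * X1 + coef[2] * X2 + coef[3] * X1 ** 2
     + coef[4] * X2 ** 2 + coef[5] * X1 * X2)
i, j = np.unravel_index(S.argmin(), S.shape)
best = (X1[i, j], X2[i, j])   # recovered minimizer of the response surface
```

    Because the fitted surface is cheap to evaluate, the search cost is negligible compared with the FEM runs that generated the control points, which is the practical appeal of RSM.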

  12. Subsidy or subtraction: how do terrestrial inputs influence consumer production in lakes?

    USGS Publications Warehouse

    Jones, Stuart E.; Solomon, Christopher T.; Weidel, Brian C.

    2012-01-01

    Cross-ecosystem fluxes are ubiquitous in food webs and are generally thought of as subsidies to consumer populations. Yet external or allochthonous inputs may in fact have complex and habitat-specific effects on recipient ecosystems. In lakes, terrestrial inputs of organic carbon contribute to basal resource availability, but can also reduce resource availability via shading effects on phytoplankton and periphyton. Terrestrial inputs might therefore either subsidise or subtract from consumer production. We developed and parameterised a simple model to explore this idea. The model estimates basal resource supply and consumer production given lake-level characteristics including total phosphorus (TP) and dissolved organic carbon (DOC) concentration, and consumer-level characteristics including resource preferences and growth efficiencies. Terrestrial inputs diminished primary production and total basal resource supply at the whole-lake level, except in ultra-oligotrophic systems. However, this system-level generalisation masked complex habitat-specific effects. In the pelagic zone, dissolved and particulate terrestrial carbon inputs were available to zooplankton via several food web pathways. Consequently, zooplankton production usually increased with terrestrial inputs, even as total whole-lake resource availability decreased. In contrast, in the benthic zone the dominant, dissolved portion of the terrestrial carbon load had predominantly negative effects on resource availability via shading of periphyton. Consequently, terrestrial inputs always decreased zoobenthic production except under extreme and unrealistic parameterisations of the model. Appreciating the complex and habitat-specific effects of allochthonous inputs may be essential for resolving the effects of cross-habitat fluxes on consumers in lakes and other food webs.

  13. Comparison greenhouse gas (GHG) emissions and global warming potential (GWP) effect of energy use in different wheat agroecosystems in Iran.

    PubMed

    Yousefi, Mohammad; Mahdavi Damghani, Abdolmajid; Khoramivafa, Mahmud

    2016-04-01

    The aims of this study were to determine the energy requirement and global warming potential (GWP) in low and high input wheat production systems in western Iran. For this purpose, data were collected from 120 wheat farms through questionnaires in face-to-face interviews. Results showed that total energy input and output were 60,000 and 180,000 MJ ha(-1) in high input systems and 14,000 and 56,000 MJ ha(-1) in low input wheat production systems, respectively. The highest shares of total input energy in high input systems were recorded for electricity, N fertilizer, and diesel fuel at 36, 18, and 13 %, respectively, while the highest shares of input energy in low input systems were observed for N fertilizer, diesel fuel, and seed at 32, 31, and 27 %. Energy use efficiency in high input systems (3.03) was lower than that of low input systems (3.94). Total CO2, N2O, and CH4 emissions in high input systems were 1981.25, 31.18, and 1.87 kg ha(-1), respectively. These amounts were 699.88, 0.02, and 0.96 kg ha(-1) in low input systems. In high input wheat production systems, total GWP was 11686.63 kg CO2eq ha(-1) wheat. This amount was 725.89 kg CO2eq ha(-1) in low input systems. The results show that 1 ha of a high input system produces about 17 times the greenhouse effect of a low input system. Thus, high input production systems need efficient and sustainable management to reduce environmental crises such as climate change.

  14. Optimization of Systems with Uncertainty: Initial Developments for Performance, Robustness and Reliability Based Designs

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    This paper presents a study on the optimization of systems with structured uncertainties, whose inputs and outputs can be exhaustively described in the probabilistic sense. By propagating the uncertainty from the input to the output in the space of the probability density functions and the moments, optimization problems that pursue performance, robustness and reliability based designs are studied. By specifying the desired outputs in terms of desired probability density functions and then in terms of meaningful probabilistic indices, we establish a computationally viable framework for solving practical optimization problems. Applications to static optimization and stability control are used to illustrate the relevance of incorporating uncertainty in the early stages of the design. Several examples that admit a full probabilistic description of the output in terms of the design variables and the uncertain inputs are used to elucidate the main features of the generic problem and its solution. Extensions to problems that do not admit closed form solutions are also evaluated. Concrete evidence of the importance of using a consistent probabilistic formulation of the optimization problem and a meaningful probabilistic description of its solution is provided in the examples. In the stability control problem, the analysis shows that standard deterministic approaches lead to designs with a high probability of running into instability. The implementation of such designs can indeed have catastrophic consequences.

  15. SU-F-J-105: Towards a Novel Treatment Planning Pipeline Delivering Pareto- Optimal Plans While Enabling Inter- and Intrafraction Plan Adaptation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kontaxis, C; Bol, G; Lagendijk, J

    2016-06-15

    Purpose: To develop a new IMRT treatment planning methodology suitable for the new generation of MR-linear accelerator machines. The pipeline is able to deliver Pareto-optimal plans and can be utilized for conventional treatments as well as for inter- and intrafraction plan adaptation based on real-time MR data. Methods: A Pareto-optimal plan is generated using the automated multicriterial optimization approach Erasmus-iCycle. The resulting dose distribution is used as input to the second part of the pipeline, an iterative process which generates deliverable segments that target the latest anatomical state and gradually converges to the prescribed dose. This process continues until a certain percentage of the dose has been delivered. Under a conventional treatment, a Segment Weight Optimization (SWO) is then performed to ensure convergence to the prescribed dose. In the case of inter- and intrafraction adaptation, post-processing steps like SWO cannot be employed due to the changing anatomy. This is instead addressed by transferring the missing/excess dose to the input of the subsequent fraction. In this work, the resulting plans were delivered on a Delta4 phantom as a final Quality Assurance test. Results: A conventional static SWO IMRT plan was generated for two prostate cases. The sequencer faithfully reproduced the input dose for all volumes of interest. For the two cases, the mean relative dose difference of the PTV between the ideal input and the sequenced dose was 0.1% and −0.02%, respectively. Both plans were delivered on a Delta4 phantom and passed the clinical Quality Assurance procedures by achieving a 100% pass rate at a 3%/3 mm gamma analysis. Conclusion: We have developed a new sequencing methodology capable of online plan adaptation. In this work, we extended the pipeline to support Pareto-optimal input and clinically validated that it can accurately achieve these ideal distributions, while its flexible design enables inter- and intrafraction plan adaptation. This research is financially supported by Elekta AB, Stockholm, Sweden.

  16. Wireless Sensor Network Optimization: Multi-Objective Paradigm.

    PubMed

    Iqbal, Muhammad; Naeem, Muhammad; Anpalagan, Alagan; Ahmed, Ashfaq; Azam, Muhammad

    2015-07-20

    Optimization problems relating to wireless sensor network planning, design, deployment and operation often give rise to multi-objective optimization formulations where multiple desirable objectives compete with each other and the decision maker has to select one of the tradeoff solutions. These multiple objectives may or may not conflict with each other. Keeping in view the nature of the application, the sensing scenario and the input/output of the problem, the type of optimization problem changes. To address the different natures of optimization problems relating to wireless sensor network design, deployment, operation, planning and placement, there exists a plethora of optimization solution types. We review and analyze different desirable objectives to show whether they conflict with each other, support each other or are design dependent. We also present a generic multi-objective optimization problem relating to wireless sensor networks which consists of input variables, required output, objectives and constraints. A list of constraints is also presented to give an overview of the different constraints which are considered while formulating optimization problems in wireless sensor networks. Keeping in view the multi-faceted coverage of this article relating to multi-objective optimization, it should open up new avenues of research in the area of multi-objective optimization relating to wireless sensor networks.
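    The central object in such formulations, the set of non-dominated tradeoff solutions, can be extracted with a simple filter. The deployment objectives below are invented toy numbers:

```python
import numpy as np

def pareto_front(F):
    """Indices of non-dominated rows of F (all objectives minimized).

    Row i is dominated if some other row is no worse in every objective
    and strictly better in at least one."""
    keep = []
    for i in range(len(F)):
        dominated = np.any(np.all(F <= F[i], axis=1) &
                           np.any(F < F[i], axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Toy WSN tradeoff: (energy consumption, coverage gap) per candidate deployment
F = np.array([[1.0, 5.0],
              [2.0, 3.0],
              [3.0, 4.0],    # dominated by (2.0, 3.0)
              [4.0, 1.0]])
front = pareto_front(F)
```

    The decision maker then picks one point from the front according to application priorities, which is exactly the tradeoff selection the abstract describes.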

  17. Statistics of optimal information flow in ensembles of regulatory motifs

    NASA Astrophysics Data System (ADS)

    Crisanti, Andrea; De Martino, Andrea; Fiorentino, Jonathan

    2018-02-01

    Genetic regulatory circuits universally cope with different sources of noise that limit their ability to coordinate input and output signals. In many cases, optimal regulatory performance can be thought to correspond to configurations of variables and parameters that maximize the mutual information between inputs and outputs. Since the mid-2000s, such optima have been well characterized in several biologically relevant cases. Here we use methods of statistical field theory to calculate the statistics of the maximal mutual information (the "capacity") achievable by tuning the input variable only in an ensemble of regulatory motifs, such that a single controller regulates N targets. Assuming (i) sufficiently large N , (ii) quenched random kinetic parameters, and (iii) small noise affecting the input-output channels, we can accurately reproduce numerical simulations both for the mean capacity and for the whole distribution. Our results provide insight into the inherent variability in effectiveness occurring in regulatory systems with heterogeneous kinetic parameters.
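    Channel capacity of the kind maximized here can be computed for a small discrete channel with the classical Blahut-Arimoto iteration. This is a generic sketch, not the field-theoretic calculation of the paper:

```python
import numpy as np

def blahut_arimoto(W, iters=500):
    """Capacity (bits) of a discrete memoryless channel W[x, y] = p(y|x)."""
    nx = W.shape[0]
    p = np.full(nx, 1.0 / nx)                      # input distribution
    for _ in range(iters):
        q = p @ W                                  # induced output distribution
        # relative entropy D(W_x || q) for each input symbol x
        d = np.sum(W * np.log2(W / q), axis=1, where=W > 0)
        p = p * 2.0 ** d                           # multiplicative update
        p /= p.sum()
    return float(np.sum(p * d))

# Binary symmetric channel with crossover 0.1: C = 1 - H2(0.1) ≈ 0.531 bits
eps = 0.1
W = np.array([[1 - eps, eps],
              [eps, 1 - eps]])
C = blahut_arimoto(W)
```

    In the paper's setting the optimization is over the input distribution of a noisy regulatory channel with quenched random kinetics, and the quantity of interest is the statistics of this capacity across an ensemble of motifs.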

  18. Computer simulation and design of a three degree-of-freedom shoulder module

    NASA Technical Reports Server (NTRS)

    Marco, David; Torfason, L.; Tesar, Delbert

    1989-01-01

    An in-depth kinematic analysis of a three degree of freedom fully-parallel robotic shoulder module is presented. The major goal of the analysis is to determine appropriate link dimensions which will provide a maximized workspace along with desirable input to output velocity and torque amplification. First order kinematic influence coefficients which describe the output velocity properties in terms of actuator motions provide a means to determine suitable geometric dimensions for the device. Through the use of computer simulation, optimal or near optimal link dimensions based on predetermined design criteria are provided for two different structural designs of the mechanism. The first uses three rotational inputs to control the output motion. The second design involves the use of four inputs, actuating any three inputs for a given position of the output link. Alternative actuator placements are examined to determine the most effective approach to control the output motion.

  19. Noniterative computation of infimum in H(infinity) optimisation for plants with invariant zeros on the j(omega)-axis

    NASA Technical Reports Server (NTRS)

    Chen, B. M.; Saber, A.

    1993-01-01

    A simple and noniterative procedure for the computation of the exact value of the infimum in the singular H(infinity)-optimization problem is presented, as a continuation of our earlier work. Our problem formulation is general: we place no restrictions on the finite and infinite zero structures of the system, or on the direct feedthrough terms between the control input and the controlled output variables and between the disturbance input and the measurement output variables. Our method is applicable to a class of singular H(infinity)-optimization problems for which the transfer functions from the control input to the controlled output and from the disturbance input to the measurement output satisfy certain geometric conditions. In particular, the paper extends the result of earlier work by allowing these two transfer functions to have invariant zeros on the j(omega) axis.

  20. Directed Design of Experiments for Validating Probability of Detection Capability of a Testing System

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R. (Inventor)

    2012-01-01

    A method of validating a probability of detection (POD) testing system using directed design of experiments (DOE) includes recording an input data set of observed hit and miss or analog data for sample components as a function of size of a flaw in the components. The method also includes processing the input data set to generate an output data set having an optimal class width, assigning a case number to the output data set, and generating validation instructions based on the assigned case number. An apparatus includes a host machine for receiving the input data set from the testing system and an algorithm for executing DOE to validate the test system. The algorithm applies DOE to the input data set to determine a data set having an optimal class width, assigns a case number to that data set, and generates validation instructions based on the case number.
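    The patent abstract does not specify the POD model behind the case numbers; a conventional hit/miss POD analysis fits a logistic curve against flaw size by maximum likelihood and reads off quantities such as the flaw size detected with 90% probability. A hedged sketch with illustrative names and parameters:

```python
import numpy as np
from scipy.optimize import minimize

def fit_pod(sizes, hits):
    """Max-likelihood logistic fit POD(a) = 1 / (1 + exp(-(b0 + b1*a)))
    to hit/miss data; returns coefficients and the 90%-POD flaw size."""
    def nll(beta):
        z = beta[0] + beta[1] * sizes
        # numerically stable Bernoulli negative log-likelihood
        return np.sum(np.logaddexp(0.0, z) - hits * z)
    res = minimize(nll, x0=np.array([0.0, 1.0]), method="Nelder-Mead")
    b0, b1 = res.x
    a90 = (np.log(0.9 / 0.1) - b0) / b1    # size detected with 90% POD
    return b0, b1, a90
```

    On synthetic hit/miss data generated from a known POD curve, the fitted a90 recovers the true value to within sampling error.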

  1. Bi-Objective Optimal Control Modification Adaptive Control for Systems with Input Uncertainty

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2012-01-01

    This paper presents a new model-reference adaptive control method based on a bi-objective optimal control formulation for systems with input uncertainty. A parallel predictor model is constructed to relate the predictor error to the estimation error of the control effectiveness matrix. In this work, we develop an optimal control modification adaptive control approach that seeks to minimize a bi-objective linear quadratic cost function of both the tracking error norm and predictor error norm simultaneously. The resulting adaptive laws for the parametric uncertainty and control effectiveness uncertainty are dependent on both the tracking error and predictor error, while the adaptive laws for the feedback gain and command feedforward gain are only dependent on the tracking error. The optimal control modification term provides robustness to the adaptive laws naturally from the optimal control framework. Simulations demonstrate the effectiveness of the proposed adaptive control approach.
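    The bi-objective optimal control modification law itself is beyond a short sketch, but the baseline it builds on can be illustrated. Below is a classic scalar model-reference adaptive controller with a gradient (Lyapunov-based) adaptive law; it is a didactic stand-in, NOT the paper's bi-objective method, and all numerical values are illustrative:

```python
import numpy as np

# Scalar plant x' = a*x + u with a unknown to the controller;
# reference model xm' = am*xm + r. Ideal feedback gain is kx* = am - a.
a, am = 2.0, -3.0                 # true plant pole, reference-model pole
gamma, dt, T = 50.0, 1e-3, 10.0   # adaptation gain, Euler step, horizon
x, xm, kx = 0.0, 0.0, 0.0
for step in range(int(T / dt)):
    r = np.sin(step * dt)         # persistently exciting command
    u = kx * x + r
    e = x - xm                    # tracking error w.r.t. reference model
    kx += dt * (-gamma * e * x)   # gradient adaptive law
    x += dt * (a * x + u)
    xm += dt * (am * xm + r)
```

    With a persistently exciting command the tracking error decays and kx approaches am - a = -5; the robust modifications discussed in the paper add damping terms to this law.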

  2. Econutrition and utilization of food-based approaches for nutritional health.

    PubMed

    Blasbalg, Tanya L; Wispelwey, Bram; Deckelbaum, Richard J

    2011-03-01

    Macronutrient and micronutrient deficiencies continue to have a detrimental impact in lower-income countries, with significant costs in morbidity, mortality, and productivity. Food is the primary source of the nutrients needed to sustain life, and it is the essential component that links nutrition, agriculture, and ecology in the econutrition framework. To present evidence and analysis of food-based approaches for improving nutritional and health outcomes in lower-income countries. Review of existing literature. The benefits of food-based approaches may include nutritional improvement, food security, cost-effectiveness, sustainability, and human productivity. Food-based approaches require additional inputs, including nutrition education, gender considerations, and agricultural planning. Although some forms of malnutrition can be addressed via supplements, food-based approaches are optimal to achieve sustainable solutions to multiple nutrient deficiencies.

  3. Improved Light Utilization in Camelina: Center for Enhanced Camelina Oil (CECO)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2012-01-01

    PETRO Project: The Danforth Center will optimize light utilization in Camelina, a drought-resistant, cold-tolerant oilseed crop. The team is modifying how Camelina collects sunlight, engineering its topmost leaves to be lighter in color so sunlight can more easily reflect onto lower parts of the plant. A more uniform distribution of light would improve the efficiency of photosynthesis. Combined with other strategies to produce more oil in the seed, Camelina would yield more oil per plant. The team is also working to allow Camelina to absorb carbon dioxide (CO2) more efficiently, providing more carbon input for oil production. The goal is to improve light utilization and oil production to the point where Camelina produces enough fuel precursors per acre to compete with other fuels.

  4. Gaalas/Gaas Solar Cell Process Study

    NASA Technical Reports Server (NTRS)

    Almgren, D. W.; Csigi, K. I.

    1980-01-01

    Available information on liquid phase, vapor phase (including chemical vapor deposition) and molecular beam epitaxy growth procedures that could be used to fabricate single-crystal heteroface (AlGa)As/GaAs solar cells for space applications is summarized. A comparison of the basic cost elements of the epitaxial growth processes shows that the current infinite-melt LPE process has the lowest cost per cell at an annual production rate of 10,000 cells. The metal organic chemical vapor deposition (MO-CVD) process has the potential for low-cost production of solar cells, but there is currently significant uncertainty in process yield, i.e., the fraction of active material in the input gas stream that ends up in the cell. Additional work is needed to optimize and document the process parameters for the MO-CVD process.

  5. Technical efficiency of teaching hospitals in Iran: the use of Stochastic Frontier Analysis, 1999–2011

    PubMed Central

    Goudarzi, Reza; Pourreza, Abolghasem; Shokoohi, Mostafa; Askari, Roohollah; Mahdavi, Mahdi; Moghri, Javad

    2014-01-01

    Background: Hospitals are highly resource-dependent settings, which spend a large proportion of healthcare financial resources. The analysis of hospital efficiency can provide insight into how scarce resources are used to create health value. This study examines the Technical Efficiency (TE) of 12 teaching hospitals affiliated with Tehran University of Medical Sciences (TUMS) between 1999 and 2011. Methods: The Stochastic Frontier Analysis (SFA) method was applied to estimate the efficiency of TUMS hospitals. A best-fitting production function relating the output to the input parameters was estimated for the hospitals. The numbers of medical doctors, nurses and other personnel, active beds, and outpatient admissions were used as input variables, and the number of inpatient admissions as the output variable. Results: The mean level of TE was 59% (ranging from 22 to 81%). During the study period the efficiency increased from 61 to 71%. Outpatient admissions, other personnel and medical doctors significantly and positively affected production (P < 0.05). Concerning Constant Return to Scale (CRS), an optimal production scale was found, implying that the production of the hospitals was approximately constant. Conclusion: The findings of this study show a remarkable waste of resources in the TUMS hospitals during the decade considered. This warrants that policy-makers and top management in TUMS consider steps to improve the financial management of the university hospitals. PMID:25114947
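    SFA proper estimates a composed-error frontier by maximum likelihood; as a simple, reproducible stand-in, a corrected-OLS (COLS) Cobb-Douglas frontier illustrates how technical-efficiency scores are derived from input/output data. The sketch below uses generic variable names, not the study's hospital dataset:

```python
import numpy as np

def cols_efficiency(X, y):
    """Corrected-OLS Cobb-Douglas frontier: ln y = b0 + b1*ln x + residual.
    The intercept is shifted so the frontier envelopes the data from above;
    returns technical-efficiency scores in (0, 1], 1 = on the frontier."""
    A = np.column_stack([np.ones(len(y)), np.log(X)])
    beta, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
    resid = np.log(y) - A @ beta
    return np.exp(resid - resid.max())
```

    Units lying exactly on the frontier receive a score of 1; less efficient units score proportionally lower.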

  6. Optimization of the transmission of observable expectation values and observable statistics in continuous-variable teleportation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albano Farias, L.; Stephany, J.

    2010-12-15

    We analyze the statistics of observables in continuous-variable (CV) quantum teleportation in the formalism of the characteristic function. We derive expressions for average values of output-state observables, in particular, cumulants which are additive in terms of the input state and the resource of teleportation. Working with a general class of teleportation resources, the squeezed-bell-like states, which may be optimized in a free parameter for better teleportation performance, we discuss the relation between resources optimal for fidelity and those optimal for different observable averages. We obtain the values of the free parameter of the squeezed-bell-like states which optimize the central momenta and cumulants up to fourth order. For the cumulants, the distortion between in and out states due to teleportation depends only on the resource. We obtain optimal parameters Delta(2)^opt and Delta(4)^opt for the second- and fourth-order cumulants, which do not depend on the squeezing of the resource. The second-order central momenta, which are equal to the second-order cumulants, and the photon number average are also optimized by the resource with Delta(2)^opt. We show that the optimal fidelity resource, which has been found previously to depend on the characteristics of the input, approaches for high squeezing the resource that optimizes the second-order momenta. A similar behavior is obtained for the resource that optimizes the photon statistics, which is treated here using the sum of the squared differences in photon probabilities of input versus output states as the distortion measure. This is interpreted naturally to mean that the distortions associated with second-order momenta dominate the behavior of the output state for large squeezing of the resource. Optimal fidelity resources and optimal photon statistics resources are compared, and it is shown that for mixtures of Fock states both resources are equivalent.

  7. Using simulation and data envelopment analysis to evaluate Iraqi regions in producing strategic crops

    NASA Astrophysics Data System (ADS)

    Chaloob, Ibrahim Z.; Ramli, Razamin; Nawawi, Mohd Kamal Mohd

    2014-12-01

    Productivity of the agriculture sector in Iraq has yet to reach an acceptable level. In this paper, we introduce a practical method to help the Iraqi agriculture sector manage resources and increase production to meet modern requirements for good crops. These important resources are identified as water, fertilizer, natural fertilizer, pesticides and labour. The current agricultural patterns in Iraq affect strategic crop cultivation in the country and reduce agricultural production to some life-threatening limits. Data Envelopment Analysis (DEA), a non-parametric tool, is proposed to identify solutions that maximize farmers' net benefit while making optimal use of the five resources. The model also identifies the optimal mix of the resources. For the production of each of the three strategic crops in Iraq, the DEA model is used to find the efficiency of one region among four others in its agriculture sector, the main constraint being the limited number of regions available. Hence, a simulation technique is used to generate additional regions beyond the four main regions adopted; this resolves the restriction of DEA that arises when the number of decision-making units is less than the number of variables (outputs and inputs). The result is expected to show the efficiency of each of the evaluated regions.
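    The efficiency score behind a DEA analysis comes from solving one small linear program per decision-making unit. A sketch of the standard input-oriented CCR envelopment model (a generic formulation, not the authors' exact model) using scipy:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of unit k.
    X: (n_units, n_inputs), Y: (n_units, n_outputs).
    Solves: min theta  s.t.  X'lam <= theta*x_k,  Y'lam >= y_k,  lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                       # variables: [theta, lam]
    A_in = np.hstack([-X[k].reshape(-1, 1), X.T])    # input constraints
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])      # output constraints
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[k]]),
                  bounds=[(None, None)] + [(0.0, None)] * n)
    return res.fun
```

    A unit on the efficient frontier scores 1; a unit that uses twice the input of a peer for the same output scores 0.5.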

  8. Fume generation and content of total chromium and hexavalent chromium in flux-cored arc welding.

    PubMed

    Yoon, Chung Sik; Paik, Nam Won; Kim, Jeong Han

    2003-11-01

    This study was performed to investigate the fume generation rates (FGRs) and the concentrations of total chromium and hexavalent chromium when stainless steel was welded using flux-cored arc welding (FCAW) with CO2 gas. FGRs and concentrations of total chromium and hexavalent chromium were quantified using a method recommended by the American Welding Society, inductively coupled plasma-atomic emission spectroscopy (NIOSH Method 7300) and ion chromatography (modified NIOSH Method 7604), respectively. The amount of total fume generated was significantly related to the level of input power. The ranges of FGR were 189-344, 389-698 and 682-1157 mg/min at low, optimal and high input power, respectively. It was found that the FGRs increased with input power by an exponent of 1.19, and increased with current by an exponent of 1.75. The ranges of total chromium fume generation rate (FGRCr) were 3.83-8.27, 12.75-37.25 and 38.79-76.46 mg/min at low, optimal and high input power, respectively. The ranges of hexavalent chromium fume generation rate (FGRCr6+) were 0.46-2.89, 0.76-6.28 and 1.70-11.21 mg/min at low, optimal and high input power, respectively. Thus, generation of hexavalent chromium, a known carcinogen, increased by factors of 1.9 (1.0-2.7) and 3.7 (2.4-5.0) as the input power increased from low to optimal and from low to high, respectively. As a function of input power, the concentration of total chromium in the fume increased from 1.57-2.65% to 5.45-8.13%, while the concentration of hexavalent chromium ranged from 0.15 to 1.08%. The soluble fraction of hexavalent chromium produced by FCAW was approximately 80-90% of total hexavalent chromium. The concentration of total chromium and the solubility of hexavalent chromium were similar to those reported from other studies of shielded metal arc welding fumes, and the concentration of hexavalent chromium was similar to that obtained for metal inert gas welding fumes.
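    Power-law exponents like the 1.19 (input power) and 1.75 (current) reported above are typically obtained from a least-squares fit in log-log space. A minimal sketch with synthetic data:

```python
import numpy as np

def power_law_exponent(power, fgr):
    """Least-squares slope of log(FGR) vs log(power),
    i.e. the exponent n in FGR = c * power**n."""
    slope, _intercept = np.polyfit(np.log(power), np.log(fgr), 1)
    return slope
```

    Applied to data generated exactly as FGR = 3 * power**1.19, the fit recovers the exponent 1.19.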

  9. A techno-economic model for optimum regeneration of surface mined land

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Manas K.; Sinha, Indra N.

    2006-07-01

    The recent global scenario in the mineral sector may be characterized by rising competitiveness, increasing production costs and a slump in market prices. This has pushed the mineral sector in general, and that in developing countries in particular, to a situation where the industry has a limited capacity to sustain unproductive costs and, more often than not, fails to ensure environmental safeguards during and after mineral extraction. The situation is conspicuous in the Indian coal mining industry, where more than 73% of production comes from surface operations. India has an ambitious power augmentation projection for the coming 10 years, and a phenomenal increase in coal production is proposed from the power grade coalfields in India. One of the most likely fall-outs of land degradation due to mining in these areas would be a significant reduction of agricultural and other important land-uses. Currently, backfilling costs are perceived as prohibitive and abandonment of land is the easy way out. This study attempts to provide mine planners with a mathematical model that distributes generated overburden among defined disposal options while ensuring maximization of backfilled land area at minimum direct and economic costs. Optimization is accomplished by linear programming (LP) for optimum distribution of each year's generated overburden. The previous year's disposal quantities are fed back as one set of inputs to the LP model for generation of the current year's disposal output. Site constants of the LP constraints are calculated from various geo-mining inputs. The resulting values of the economic vectors, which guide the programming statement, determine the optimal overburden distribution among the defined options. The case example (with model test run) indicates that overburden distribution is significantly sensitive to coal seam gradient.
The model has universal applicability to cyclic system (shovel dumper combination) of opencast mining of stratified deposits.
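    A toy version of the yearly LP allocation can be sketched with scipy.optimize.linprog; all volumes, unit costs and capacities below are hypothetical placeholders, not the paper's site constants:

```python
import numpy as np
from scipy.optimize import linprog

# Split one year's overburden among three disposal options:
# backfill, internal dump, external dump (illustrative numbers only).
total = 120.0                        # Mm^3 of overburden generated this year
cost = np.array([1.8, 2.4, 3.1])     # unit direct + economic cost per option
cap = np.array([70.0, 40.0, 60.0])   # remaining capacity of each option

res = linprog(cost,
              A_eq=np.ones((1, 3)), b_eq=[total],   # all overburden placed
              bounds=list(zip([0.0] * 3, cap)))     # capacity limits
```

    With backfill cheapest in combined cost terms, the optimizer fills backfill and the internal dump to capacity before using the external dump, mirroring the "maximize backfilled area at minimum cost" objective.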

  10. Agro-hydrology and multi-temporal high-resolution remote sensing: toward an explicit spatial processes calibration

    NASA Astrophysics Data System (ADS)

    Ferrant, S.; Gascoin, S.; Veloso, A.; Salmon-Monviola, J.; Claverie, M.; Rivalland, V.; Dedieu, G.; Demarez, V.; Ceschia, E.; Probst, J.-L.; Durand, P.; Bustillo, V.

    2014-12-01

    The growing availability of high-resolution satellite image series offers new opportunities in agro-hydrological research and modeling. We investigated the possibilities offered for improving crop-growth dynamic simulation with the distributed agro-hydrological model: topography-based nitrogen transfer and transformation (TNT2). We used a leaf area index (LAI) map series derived from 105 Formosat-2 (F2) images covering the period 2006-2010. The TNT2 model (Beaujouan et al., 2002), calibrated against discharge and in-stream nitrate fluxes for the period 1985-2001, was tested on the 2005-2010 data set (climate, land use, agricultural practices, and discharge and nitrate fluxes at the outlet). Data from the first year (2005) were used to initialize the hydrological model. A priori agricultural practices obtained from an extensive field survey, such as seeding date, crop cultivar, and amount of fertilizer, were used as input variables. Continuous values of LAI as a function of cumulative daily temperature were obtained at the crop-field level by fitting a double logistic equation against discrete satellite-derived LAI. Model predictions of LAI dynamics using the a priori input parameters displayed temporal shifts from those observed LAI profiles that are irregularly distributed in space (between field crops) and time (between years). By resetting the seeding date at the crop-field level, we have developed an optimization method designed to efficiently minimize this temporal shift and better fit the crop growth against both the spatial observations and crop production. This optimization of simulated LAI has a negligible impact on water budgets at the catchment scale (1 mm yr-1 on average) but a noticeable impact on in-stream nitrogen fluxes (around 12%), which is of interest when considering nitrate stream contamination issues and the objectives of TNT2 modeling. 
This study demonstrates the potential contribution of the forthcoming high spatial and temporal resolution products from the Sentinel-2 satellite mission for improving agro-hydrological modeling by constraining the spatial representation of crop productivity.
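    The double-logistic fit of LAI against cumulative daily temperature described above can be sketched with scipy.optimize.curve_fit; the parameter values below are illustrative, not calibrated Formosat-2 retrievals:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, amp, k1, t1, k2, t2):
    """Green-up minus senescence: a classic double-logistic LAI profile."""
    return amp * (1.0 / (1.0 + np.exp(-k1 * (t - t1)))
                  - 1.0 / (1.0 + np.exp(-k2 * (t - t2))))

# t in cumulative degree-days; "lai" stands in for satellite retrievals
t = np.linspace(0.0, 2000.0, 25)
true = double_logistic(t, 5.0, 0.01, 600.0, 0.008, 1500.0)
rng = np.random.default_rng(1)
lai = true + rng.normal(0.0, 0.1, t.size)
popt, _ = curve_fit(double_logistic, t, lai,
                    p0=[4.0, 0.02, 500.0, 0.01, 1400.0], maxfev=20000)
```

    Fitting continuous parameters such as the green-up inflection t1 is what allows, for instance, the seeding date to be re-set per field as the paper describes.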

  11. Agro-hydrology and multi temporal high resolution remote sensing: toward an explicit spatial processes calibration

    NASA Astrophysics Data System (ADS)

    Ferrant, S.; Gascoin, S.; Veloso, A.; Salmon-Monviola, J.; Claverie, M.; Rivalland, V.; Dedieu, G.; Demarez, V.; Ceschia, E.; Probst, J.-L.; Durand, P.; Bustillo, V.

    2014-07-01

    The recent and forthcoming availability of high resolution satellite image series offers new opportunities in agro-hydrological research and modeling. We investigated the perspective offered by improving the crop growth dynamic simulation using the distributed agro-hydrological model, Topography based Nitrogen transfer and Transformation (TNT2), using LAI map series derived from 105 Formosat-2 (F2) images during the period 2006-2010. The TNT2 model (Beaujouan et al., 2002), calibrated with discharge and in-stream nitrate fluxes for the period 1985-2001, was tested on the 2006-2010 dataset (climate, land use, agricultural practices, discharge and nitrate fluxes at the outlet). A priori agricultural practices obtained from an extensive field survey, such as seeding date, crop cultivar, and fertilizer amount, were used as input variables. Continuous values of LAI as a function of cumulative daily temperature were obtained at the crop field level by fitting a double logistic equation against discrete satellite-derived LAI. Model predictions of LAI dynamics with a priori input parameters showed a temporal shift relative to observed LAI profiles that was irregularly distributed in space (between field crops) and time (between years). By re-setting the seeding date at the crop field level, we proposed an optimization method to efficiently minimize this temporal shift and better fit the crop growth against the spatial observations as well as crop production. This optimization of simulated LAI has a negligible impact on the water budget at the catchment scale (1 mm yr-1 on average) but a noticeable impact on in-stream nitrogen fluxes (around 12%), which is of interest considering nitrate stream contamination issues and TNT2 model objectives. This study demonstrates the contribution of forthcoming high spatial and temporal resolution products of the Sentinel-2 satellite mission in improving agro-hydrological modeling by constraining the spatial representation of crop productivity.

  12. EMODnet MedSea Checkpoint for sustainable Blue Growth

    NASA Astrophysics Data System (ADS)

    Moussat, Eric; Pinardi, Nadia; Manzella, Giuseppe; Blanc, Frederique

    2016-04-01

    The EMODNET checkpoint is a wide monitoring system assessment activity aiming to support the sustainable Blue Growth at the scale of the European Sea Basins by: 1) Clarifying the observation landscape of all compartments of the marine environment including Air, Water, Seabed, Biota and Human activities, pointing out to the existing programs, national, European and international 2) Evaluating fitness for use indicators that will show the accessibility and usability of observation and modeling data sets and their roles and synergies based upon selected applications by the European Marine Environment Strategy 3) Prioritizing the needs to optimize the overall monitoring Infrastructure (in situ and satellite data collection and assembling, data management and networking, modeling and forecasting, geo-infrastructure) and release recommendations for evolutions to better meet the application requirements in view of sustainable Blue Growth The assessment is designed for : - Institutional stakeholders for decision making on observation and monitoring systems - Data providers and producers to know how their data collected once for a given purpose could fit other user needs - End-users interested in a regional status and possible uses of existing monitoring data Selected end-user applications are of paramount importance for: (i) the blue economy sector (offshore industries, fisheries); (ii) marine environment variability and change (eutrophication, river inputs and ocean climate change impacts); (iii) emergency management (oil spills); and (iv) preservation of natural resources and biodiversity (Marine Protected Areas). End-user applications generate innovative products based on the existing observation landscape. The fitness for use assessment is made thanks to the comparison of the expected product specifications with the quality of the product derived from the selected data. 
    This involves the development of checkpoint information and indicators based on data quality and metadata standards for geographic information (ISO 19157 and ISO 19115, respectively). The fitness for use of the input datasets is assessed using two categories of criteria to determine how these datasets fit the user requirements that drive users to select one data source rather than another, and to show the performance and gaps of the present monitoring systems: • Data appropriateness: what is made available to the user? • Data availability: how is it made available to the user? All information is stored in a GIS platform and made available through two types of interfaces: - Front-end interfaces for users, to present the input data used by all challenges, the innovative products generated by the challenges and the assessment indicators. - Back-end interfaces for partners, to store the checkpoint descriptors of input data, the specifications used to generate targeted products, and the catalogue information of products with associated checkpoint indicators linked to the input data. The validation of the records is done at three levels: at the technical level (GIS), at the challenge level (use), and at the sea basin level (synthesis of monitoring data adequacy including expert comments), ending with the production of a yearly Data Adequacy Report.

  13. Modeling development of natural multi-sensory integration using neural self-organisation and probabilistic population codes

    NASA Astrophysics Data System (ADS)

    Bauer, Johannes; Dávila-Chacón, Jorge; Wermter, Stefan

    2015-10-01

    Humans and other animals have been shown to perform near-optimally in multi-sensory integration tasks. Probabilistic population codes (PPCs) have been proposed as a mechanism by which optimal integration can be accomplished. Previous approaches have focussed on how neural networks might produce PPCs from sensory input or perform calculations using them, like combining multiple PPCs. Less attention has been given to the question of how the necessary organisation of neurons can arise and how the required knowledge about the input statistics can be learned. In this paper, we propose a model of learning multi-sensory integration based on an unsupervised learning algorithm in which an artificial neural network learns the noise characteristics of each of its sources of input. Our algorithm borrows from the self-organising map the ability to learn latent-variable models of the input and extends it to learning to produce a PPC approximating a probability density function over the latent variable behind its (noisy) input. The neurons in our network are only required to perform simple calculations and we make few assumptions about input noise properties and tuning functions. We report on a neurorobotic experiment in which we apply our algorithm to multi-sensory integration in a humanoid robot to demonstrate its effectiveness and compare it to human multi-sensory integration on the behavioural level. We also show in simulations that our algorithm performs near-optimally under certain plausible conditions, and that it reproduces important aspects of natural multi-sensory integration on the neural level.
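    The self-organising-map mechanism the model extends can be illustrated in one dimension. Below is a minimal SOM that learns to tile a noisy scalar input space; it is a didactic sketch of the borrowed ingredient, not the paper's PPC-producing network, and all constants are illustrative:

```python
import numpy as np

# Minimal 1-D self-organising map: 20 units learn to tile inputs on [0, 1].
rng = np.random.default_rng(0)
w = rng.random(20)                        # random initial preferred values
idx = np.arange(20)
for step in range(5000):
    x = rng.random()                      # noisy scalar "sensory" sample
    bmu = np.argmin(np.abs(w - x))        # best-matching unit
    sigma = max(0.5, 6.0 * np.exp(-step / 1500))   # shrinking neighbourhood
    h = np.exp(-((idx - bmu) ** 2) / (2.0 * sigma ** 2))
    w += 0.05 * h * (x - w)               # pull neighbourhood toward sample
```

    After training, the units' preferred values become topographically ordered and span the input range; the paper's algorithm additionally learns the input noise statistics so the population response approximates a posterior over the latent variable.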

  14. Optimal time-domain technique for pulse width modulation in power electronics

    NASA Astrophysics Data System (ADS)

    Mayergoyz, I.; Tyagi, S.

    2018-05-01

    Optimal time-domain technique for pulse width modulation is presented. It is based on exact and explicit analytical solutions for inverter circuits, obtained for any sequence of input voltage rectangular pulses. Two optimal criteria are discussed and illustrated by numerical examples.
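    The pulse-by-pulse analytical propagation the abstract alludes to can be illustrated for a series RL load: within each rectangular voltage pulse the current has a closed-form exponential solution, so the state can be advanced exactly from pulse to pulse without numerical integration. Component values below are illustrative:

```python
import numpy as np

# Exact response of a series RL load (di/dt = (v - R*i)/L) to a sequence
# of rectangular voltage pulses of width dt: per pulse,
#   i_next = i * exp(-R*dt/L) + (v/R) * (1 - exp(-R*dt/L)).
R, L = 2.0, 0.1                       # ohms, henries (illustrative)
dt = 1e-3                             # pulse width, s
pulses = np.array([100.0] * 500)      # a constant 100 V drive, 500 pulses
decay = np.exp(-R * dt / L)
i = 0.0
for v in pulses:
    i = i * decay + (v / R) * (1.0 - decay)
```

    After 500 pulses (10 time constants) the current has settled to the steady-state value v/R = 50 A; an arbitrary PWM pattern is handled by simply varying v per pulse.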

  15. Integrated modeling approach for optimal management of water, energy and food security nexus

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaodong; Vesselinov, Velimir V.

    2017-03-01

    Water, energy and food (WEF) are inextricably interrelated. Effective planning and management of limited WEF resources to meet current and future socioeconomic demands for sustainable development is challenging. WEF production/delivery may also produce environmental impacts; as a result, greenhouse-gas emission control will impact WEF nexus management as well. Nexus management for WEF security necessitates integrated tools for predictive analysis that are capable of identifying the trade-offs among various sectors, generating cost-effective planning and management strategies and policies. To address these needs, we have developed an integrated model analysis framework and tool called WEFO. WEFO provides a multi-period socioeconomic model for predicting how to satisfy WEF demands based on model inputs representing production costs, socioeconomic demands, and environmental controls. WEFO is applied to quantitatively analyze the interrelationships and trade-offs among system components including energy supply, electricity generation, water supply-demand, food production as well as mitigation of environmental impacts. WEFO is demonstrated to solve a hypothetical nexus management problem consistent with real-world management scenarios. Model parameters are analyzed using global sensitivity analysis and their effects on total system cost are quantified. The obtained results demonstrate how these types of analyses can be helpful for decision-makers and stakeholders to make cost-effective decisions for optimal WEF management.
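    A single-period toy of the WEF allocation problem can be posed as a linear program with coupling terms (producing energy consumes water; producing food consumes water and energy). All coefficients below are hypothetical placeholders, not WEFO's:

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: water supplied (Mm^3), electricity (GWh), food (kt).
c = np.array([0.5, 40.0, 300.0])     # unit production costs (illustrative)
# Net-supply constraints written as A_ub @ x <= -demand (i.e. -net <= -d):
A = np.array([
    [-1.0,  0.6,  1.5],   # water: production minus use by energy and food
    [ 0.0, -1.0,  0.2],   # energy: production minus use by food
    [ 0.0,  0.0, -1.0],   # food
])
d = np.array([100.0, 50.0, 30.0])    # external sector demands
res = linprog(c, A_ub=A, b_ub=-d)    # default bounds keep x >= 0
```

    Because all costs are positive, the optimum produces exactly enough to cover external demand plus internal use: food 30, energy 50 + 0.2*30 = 56, water 100 + 0.6*56 + 1.5*30 = 178.6. Full nexus models like WEFO extend this to many periods and emission constraints.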

  16. Production of biosolid fuels from municipal sewage sludge: Technical and economic optimisation.

    PubMed

    Wzorek, Małgorzata; Tańczuk, Mariusz

    2015-08-01

    The article presents the technical and economic analysis of the production of fuels from municipal sewage sludge. The analysis involved the production of two types of fuel compositions: sewage sludge with sawdust (PBT fuel) and sewage sludge with meat and bone meal (PBM fuel). The technology of the production line of these sewage fuels was proposed and analysed. The main objective of the study is to find the optimal production capacity. The optimisation analysis was performed for the adopted technical and economic parameters under Polish conditions. The objective function was set as the maximum of the net present value index, and the optimisation procedure was carried out for fuel production line input capacities from 0.5 to 3 t h(-1), using a search step of 0.5 t h(-1). On the basis of technical and economic assumptions, economic efficiency indexes of the investment were determined for the case of optimal line productivity. The results of the optimisation analysis show that under appropriate conditions, such as prices of components and prices of produced fuels, the production of fuels from sewage sludge can be profitable. In the case of PBT fuel, the calculated economic indexes show the best profitability for plant capacities above 1.5 t h(-1), while production of PBM fuel is beneficial for a plant at the maximum of the searched capacities, 3.0 t h(-1). Sensitivity analyses carried out during the investigation show that the influence of both technical and economic assumptions on the location of the maximum of the objective function (net present value) is significant. © The Author(s) 2015.
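    The capacity search described (NPV maximized over 0.5-3.0 t h(-1) in 0.5 steps) can be sketched as a simple grid search. Every economic figure below is a hypothetical placeholder, not the study's Polish cost data:

```python
import numpy as np

def npv(capacity, price=95.0, var_cost=60.0, fixed_capex=4e5,
        capex_per_t=8e5, hours=7000, rate=0.08, years=10):
    """Toy NPV of a sludge-fuel line as a function of input capacity (t/h).
    All prices, costs and lifetimes are illustrative assumptions."""
    annual_margin = (price - var_cost) * capacity * hours
    invest = fixed_capex + capex_per_t * capacity
    discount = sum(1.0 / (1.0 + rate) ** k for k in range(1, years + 1))
    return annual_margin * discount - invest

capacities = np.arange(0.5, 3.01, 0.5)           # the searched grid, t/h
best = capacities[np.argmax([npv(c) for c in capacities])]
```

    With these placeholder numbers the margin outweighs the capacity-dependent capex, so NPV rises monotonically and the grid search picks the largest capacity, qualitatively matching the PBM result; different cost assumptions move the optimum inside the grid.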

  17. The grammatical morpheme deficit in moderate hearing impairment.

    PubMed

    McGuckian, Maria; Henry, Alison

    2007-03-01

    Much remains unknown about grammatical morpheme (GM) acquisition by children with moderate hearing impairment (HI) acquiring spoken English. To investigate how moderate HI impacts on the use of GMs in speech and to provide an explanation for the pattern of findings. Elicited and spontaneous speech data were collected from children with moderate HI (n = 10; mean age = 7;4 years) and a control group of typically developing children (n = 10; mean age = 3;2 years) with equivalent mean length of utterance (MLU). The data were analysed to determine the use of ten GMs of English. Comparisons were made between the groups for rates of correct GM production, for types and rates of GM errors, and for order of GM accuracy. The findings revealed significant differences between the HI group and the control group for correct production of five GMs. The differences were not all in the same direction. The HI group produced possessive -s and plural -s significantly less frequently than the controls (this is not simply explained by the perceptual saliency of -s) and produced progressive -ing, articles and irregular past tense significantly more frequently than the controls. Moreover, the order of GM accuracy for the HI group did not correlate with that observed for the control group. Various factors were analysed in an attempt to explain order of GM accuracy for the HI group (i.e. perceptual saliency, syntactic category, semantics and frequency of GMs in input). Frequency of GMs in input was the most successful explanation for the overall pattern of GM accuracy. Interestingly, the order of GM accuracy for the HI group (acquiring spoken English as a first language) was characteristic of that reported for individuals learning English as a second language. An explanation for the findings is drawn from a factor that connects these different groups of language learners, i.e. limited access to spoken English input. 
It is argued that, because of hearing factors, the children with HI are below a threshold for intake of spoken language input (a threshold easily reached by the controls). Thus, the children with HI are more input-dependent at the point in development studied and as such are more sensitive to input frequency effects. The findings suggest that optimizing or indeed increasing auditory input of GMs may have a positive impact on GM development for children with moderate HI.

  18. Mixed oxidizer hybrid propulsion system optimization under uncertainty using applied response surface methodology and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Whitehead, James Joshua

    The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. 
Ternary diagrams, including contour and surface plots, were developed and utilized to aid in visualization. The concept of Expanded-Durov diagrams was also adopted and adapted to this study to aid in visualization of uncertainty bounds. Regions of maximum regression rate and associated uncertainties were determined for each set of case scenarios. Application of response surface methodology coupled with probabilistic-based MCS allowed for flexible and comprehensive interrogation of mixture and operating design space during optimization cases. Analyses were also conducted to assess sensitivity of uncertainty to variations in key elemental uncertainty estimates. The methodology developed during this research provides an innovative optimization tool for future propulsion design efforts.
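The response-surface-plus-Monte-Carlo workflow described above can be sketched in a few lines: assume a quadratic response surface with single- and two-factor interaction terms, disperse the inputs with elemental uncertainty estimates, and search the design space for the operating point with the best mean response. All coefficients, uncertainty levels, and variable names below are illustrative placeholders, not values from the study.

```python
import random
import statistics

def regression_rate(ox_flux, mix_frac):
    """Hypothetical quadratic response surface for fuel regression rate,
    with linear, pure quadratic, and two-factor interaction terms."""
    b = dict(b0=0.5, b1=0.8, b2=0.3, b11=-0.4, b22=-0.6, b12=0.2)
    return (b["b0"] + b["b1"] * ox_flux + b["b2"] * mix_frac
            + b["b11"] * ox_flux ** 2 + b["b22"] * mix_frac ** 2
            + b["b12"] * ox_flux * mix_frac)

def dispersed_rate(ox_flux, mix_frac, sigma=0.02, n=5000, seed=1):
    """Monte Carlo dispersion: perturb the inputs with an elemental
    uncertainty estimate and summarize the resulting rate sample.
    A fixed seed gives common random numbers across design points."""
    rng = random.Random(seed)
    sample = [regression_rate(rng.gauss(ox_flux, sigma),
                              rng.gauss(mix_frac, sigma)) for _ in range(n)]
    return statistics.mean(sample), statistics.pstdev(sample)

# Optimization under uncertainty: grid-search the normalized design
# space and keep the operating point with the best mean rate.
grid = [(x / 20, y / 20) for x in range(21) for y in range(21)]
best = max(grid, key=lambda p: dispersed_rate(*p, n=500)[0])
mean_r, std_r = dispersed_rate(*best)
print(best, round(mean_r, 3), round(std_r, 3))
```

In a real study the surrogate coefficients would come from DOE regression fits and the dispersion from the documented uncertainty budget; the structure of the loop, however, mirrors the approach the abstract outlines.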

  19. Handbook of energy utilization in agriculture. [Collection of available data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pimentel, D.

    1980-01-01

    Available data, published and unpublished, on energy use in agriculture and forestry production are presented. The data specifically focus on the energy-input aspects of crop, livestock, and forest production. Energy values for various agricultural inputs are discussed in the following: Energy Inputs for Nitrogen, Phosphorus, and Potash Fertilizers; Energy Used in the US for Agricultural Liming Materials; Assessing the Fossil Energy Costs of Propagating Agricultural Crops; Energy Requirements for Irrigation; Energy Inputs for the Production, Formulation, Packaging, and Transport of Various Pesticides; Energy Requirements for Various Methods of Crop Drying; Energy Used for Transporting Supplies to the Farm; and Unit Energy Cost of Farm Buildings. Energy inputs and outputs for field crop systems are discussed for barley, corn, oats, rice, rye, sorghum, wheat, soybeans, dry beans, snap beans, peas, safflower, sugarcane in Louisiana, sugar beet, alfalfa, hay, and corn silage. Energy inputs for vegetables are discussed for cabbage, Florida celery, lettuce, potato, pickling cucumbers, cantaloupes, watermelon, peppers, and spinach. Energy inputs and outputs for fruits and tree crops discussed are: Eastern US apples, apricots, cherries, peaches, pears, plums and prunes, grapes in the US, US citrus, banana in selected areas, strawberries in the US, red raspberries, blueberries, cranberries, pecans, walnuts, almonds, and maple production in Vermont. Energy inputs and outputs for livestock production are determined for dairy products, poultry, swine, beef, sheep, and aquaculture. Energy requirements for inshore and offshore fishing crafts (the case of the Northeast fishery) and energy production and consumption in wood harvest are presented.

  20. Global parameterization and validation of a two-leaf light use efficiency model for predicting gross primary production across FLUXNET sites

    NASA Astrophysics Data System (ADS)

    Zhou, Yanlian; Wu, Xiaocui; Ju, Weimin; Chen, Jing M.; Wang, Shaoqiang; Wang, Huimin; Yuan, Wenping; Andrew Black, T.; Jassal, Rachhpal; Ibrom, Andreas; Han, Shijie; Yan, Junhua; Margolis, Hank; Roupsard, Olivier; Li, Yingnian; Zhao, Fenghua; Kiely, Gerard; Starr, Gregory; Pavelka, Marian; Montagnani, Leonardo; Wohlfahrt, Georg; D'Odorico, Petra; Cook, David; Arain, M. Altaf; Bonal, Damien; Beringer, Jason; Blanken, Peter D.; Loubet, Benjamin; Leclerc, Monique Y.; Matteucci, Giorgio; Nagy, Zoltan; Olejnik, Janusz; Paw U, Kyaw Tha; Varlagin, Andrej

    2016-04-01

    Light use efficiency (LUE) models are widely used to simulate gross primary production (GPP). However, the treatment of the plant canopy as a big leaf by these models can introduce large uncertainties in simulated GPP. Recently, a two-leaf light use efficiency (TL-LUE) model was developed to simulate GPP separately for sunlit and shaded leaves and has been shown to outperform the big-leaf MOD17 model at six FLUX sites in China. In this study we investigated the performance of the TL-LUE model for a wider range of biomes. For this we optimized the parameters and tested the TL-LUE model using data from 98 FLUXNET sites which are distributed across the globe. The results showed that the TL-LUE model performed in general better than the MOD17 model in simulating 8 day GPP. Optimized maximum light use efficiency of shaded leaves (ɛmsh) was 2.63 to 4.59 times that of sunlit leaves (ɛmsu). Generally, the relationships of ɛmsh and ɛmsu with ɛmax were well described by linear equations, indicating the existence of general patterns across biomes. GPP simulated by the TL-LUE model was much less sensitive to biases in the photosynthetically active radiation (PAR) input than the MOD17 model. The results of this study suggest that the proposed TL-LUE model has the potential for simulating regional and global GPP of terrestrial ecosystems, and it is more robust with regard to usual biases in input data than existing approaches which neglect the bimodal within-canopy distribution of PAR.
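The two-leaf idea contrasted here with the big-leaf MOD17 model can be illustrated with a minimal sketch: GPP is the sum of sunlit and shaded contributions, each with its own maximum light use efficiency. The parameter values below are illustrative only; the actual TL-LUE model also includes temperature and water scalars and a canopy scheme for partitioning absorbed PAR between leaf classes.

```python
def big_leaf_gpp(eps_max, apar, scalars=1.0):
    """MOD17-style big-leaf model: one efficiency for the whole canopy."""
    return eps_max * apar * scalars

def two_leaf_gpp(eps_su, eps_sh, apar_su, apar_sh, scalars=1.0):
    """TL-LUE-style model: sunlit and shaded leaves treated separately."""
    return (eps_su * apar_su + eps_sh * apar_sh) * scalars

# Shaded leaves use (diffuse) light more efficiently; the paper reports
# optimized eps_msh at 2.63 to 4.59 times eps_msu.
eps_su, eps_sh = 0.5, 1.5        # gC per MJ APAR, illustrative
apar_su, apar_sh = 6.0, 2.0      # MJ/m2/day absorbed PAR, illustrative
print(two_leaf_gpp(eps_su, eps_sh, apar_su, apar_sh))  # 6.0
```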

  1. Global parameterization and validation of a two-leaf light use efficiency model for predicting gross primary production across FLUXNET sites: TL-LUE Parameterization and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Yanlian; Wu, Xiaocui; Ju, Weimin

    2016-04-06

    Light use efficiency (LUE) models are widely used to simulate gross primary production (GPP). However, the treatment of the plant canopy as a big leaf by these models can introduce large uncertainties in simulated GPP. Recently, a two-leaf light use efficiency (TL-LUE) model was developed to simulate GPP separately for sunlit and shaded leaves and has been shown to outperform the big-leaf MOD17 model at 6 FLUX sites in China. In this study we investigated the performance of the TL-LUE model for a wider range of biomes. For this we optimized the parameters and tested the TL-LUE model using data from 98 FLUXNET sites which are distributed across the globe. The results showed that the TL-LUE model performed in general better than the MOD17 model in simulating 8-day GPP. Optimized maximum light use efficiency of shaded leaves (εmsh) was 2.63 to 4.59 times that of sunlit leaves (εmsu). Generally, the relationships of εmsh and εmsu with εmax were well described by linear equations, indicating the existence of general patterns across biomes. GPP simulated by the TL-LUE model was much less sensitive to biases in the photosynthetically active radiation (PAR) input than the MOD17 model. The results of this study suggest that the proposed TL-LUE model has the potential for simulating regional and global GPP of terrestrial ecosystems and it is more robust with regard to usual biases in input data than existing approaches which neglect the bi-modal within-canopy distribution of PAR.

  2. Regional distribution of nitrogen fertilizer use and N-saving potential for improvement of food production and nitrogen use efficiency in China.

    PubMed

    Wang, Xiaobin; Cai, Dianxiong; Hoogmoed, Willem B; Oenema, Oene

    2011-08-30

    An apparently large disparity still exists between developed and developing countries in historical trends of the amounts of nitrogen (N) fertilizers consumed, and the same situation holds true in China. The situation of either N overuse or underuse has become one of the major limiting factors in agricultural production and economic development in China. The issue of food security in N-poor regions has been given the greatest attention internationally. Balanced and appropriate use of N fertilizer for enriching soil fertility is an effective step in preventing soil degradation, ensuring food security, and further contributing to poverty alleviation and rural economic development in the N-poor regions. Based on the China Statistical Yearbook (2007), there could be scope for improvement of N use efficiency (NUE) in N-rich regions by reducing N fertilizer input to an optimal level (≤180 kg N ha(-1)), and also potential for increasing yield in the N-poor regions by further increasing N fertilizer supply (up to 116 kg N ha(-1)). For the N-rich regions, the average estimated potential of N saving and NUE increase could be about 15% and 23%, respectively, while for the N-poor regions the average estimated potential for yield increase could be 21% on a regional scale, when N input is increased by 13%. The study suggests that to achieve the goals of regional yield improvement, it is necessary to readjust and optimize regional distribution of N fertilizer use between the N-poor and N-rich regions in China, in combination with other nutrient management practices. Copyright © 2011 Society of Chemical Industry.

  3. Constraint programming based biomarker optimization.

    PubMed

    Zhou, Manli; Luo, Youxi; Sun, Guoquan; Mai, Guoqin; Zhou, Fengfeng

    2015-01-01

    Efficient and intuitive characterization of biological big data is becoming a major challenge for modern bio-OMIC based scientists. Interactive visualization and exploration of big data has proven to be one of the successful solutions. Most existing feature selection algorithms do not allow interactive input from users during the optimization process of feature selection. This study investigates this question by fixing a few user-input features in the finally selected feature subset and formulating these user-input features as constraints for a programming model. The proposed algorithm, fsCoP (feature selection based on constrained programming), performs similarly to or much better than the existing feature selection algorithms, even with the constraints from both literature and the existing algorithms. An fsCoP biomarker may be intriguing for further wet-lab validation, since it satisfies both the classification optimization function and the biomedical knowledge. fsCoP may also be used for the interactive exploration of bio-OMIC big data by interactively adding user-defined constraints for modeling.
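The constraint idea behind fsCoP can be illustrated with a toy sketch: the user's chosen features are fixed in the subset, and only the remaining slots are optimized against the objective. The scoring function and feature names below are hypothetical stand-ins for a real classification objective, not the paper's algorithm.

```python
import itertools

def score(subset):
    """Toy objective: reward per-feature 'relevance', penalize subset
    size. A real system would use cross-validated classifier accuracy."""
    relevance = {"geneA": 0.9, "geneB": 0.4, "geneC": 0.7,
                 "geneD": 0.2, "geneE": 0.6}
    return sum(relevance[f] for f in subset) - 0.1 * len(subset)

def constrained_select(candidates, fixed, k):
    """Exhaustively pick the best k-feature subset containing `fixed`."""
    fixed = set(fixed)
    free = [f for f in candidates if f not in fixed]
    best = max((fixed | set(extra)
                for extra in itertools.combinations(free, k - len(fixed))),
               key=score)
    return sorted(best)

# The user insists on geneD (e.g. a literature-supported biomarker),
# even though it scores poorly on its own:
print(constrained_select(["geneA", "geneB", "geneC", "geneD", "geneE"],
                         fixed=["geneD"], k=3))
```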

  4. A controls engineering approach for analyzing airplane input-output characteristics

    NASA Technical Reports Server (NTRS)

    Arbuckle, P. Douglas

    1991-01-01

    An engineering approach for analyzing airplane control and output characteristics is presented. State-space matrix equations describing the linear perturbation dynamics are transformed from physical coordinates into scaled coordinates. The scaling is accomplished by applying various transformations to the system to employ prior engineering knowledge of the airplane physics. Two different analysis techniques are then explained. Modal analysis techniques calculate the influence of each system input on each fundamental mode of motion and the distribution of each mode among the system outputs. The optimal steady state response technique computes the blending of steady state control inputs that optimize the steady state response of selected system outputs. Analysis of an example airplane model is presented to demonstrate the described engineering approach.
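The second technique, the optimal steady-state response, rests on a simple identity: for stable linear perturbation dynamics x' = A x + B u, a constant input u drives the state to x_ss = -A⁻¹ B u, and the control blending is then chosen to shape selected outputs of x_ss. A minimal sketch with an illustrative 2x2 system (not an airplane model):

```python
def solve_2x2(A, b):
    """Solve A x = b for a 2x2 matrix by Cramer's rule."""
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    return ((b[0] * a22 - a12 * b[1]) / det,
            (a11 * b[1] - b[0] * a21) / det)

def steady_state(A, B, u):
    """x_ss = -A^{-1} (B u) for a single scalar input u: the state at
    which the dynamics x' = A x + B u come to rest."""
    rhs = (-B[0] * u, -B[1] * u)
    return solve_2x2(A, rhs)

A = ((-1.0, 0.5), (0.0, -2.0))   # stable illustrative dynamics
B = (1.0, 0.4)
x_ss = steady_state(A, B, u=1.0)
print(x_ss)
```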

  5. Effect of feed-related farm characteristics on relative values of genetic traits in dairy cows to reduce greenhouse gas emissions along the chain.

    PubMed

    Van Middelaar, C E; Berentsen, P B M; Dijkstra, J; Van Arendonk, J A M; De Boer, I J M

    2015-07-01

    Breeding has the potential to reduce greenhouse gas (GHG) emissions from dairy farming. Evaluating the effect of a 1-unit change (i.e., 1 genetic standard deviation improvement) in genetic traits on GHG emissions along the chain provides insight into the relative importance of genetic traits to reduce GHG emissions. Relative GHG values of genetic traits, however, might depend on feed-related farm characteristics. The objective of this study was to evaluate the effect of feed-related farm characteristics on GHG values by comparing the values of milk yield and longevity for an efficient farm and a less efficient farm. The less efficient farm did not apply precision feeding and had lower feed production per hectare than the efficient farm. Greenhouse gas values of milk yield and longevity were calculated by using a whole-farm model and 2 different optimization methods. Method 1 optimized farm management before and after a change in genetic trait by maximizing labor income; the effect on GHG emissions (i.e., from production of farm inputs up to the farm gate) was considered a side effect. Method 2 optimized farm management after a change in genetic trait by minimizing GHG emissions per kilogram of milk while maintaining labor income and milk production at least at the level before the change in trait; the effect on labor income was considered a side effect. Based on maximizing labor income (method 1), GHG values of milk yield and longevity were, respectively, 279 and 143 kg of CO2 equivalents (CO2e)/unit change per cow per year on the less efficient farm, and 247 and 210 kg of CO2e/unit change per cow per year on the efficient farm. Based on minimizing GHG emissions (method 2), GHG values of milk yield and longevity were, respectively, 538 and 563 kg of CO2e/unit change per cow per year on the less efficient farm, and 453 and 441 kg of CO2e/unit change per cow per year on the efficient farm.
Sensitivity analysis showed that, for both methods, the absolute effect of a change in genetic trait depends on model inputs, including prices and emission factors. Substantial changes in relative importance between traits due to a change in model inputs occurred only in case of maximizing labor income. We concluded that assumptions regarding feed-related farm characteristics affect the absolute level of GHG values, as well as the relative importance of traits to reduce emissions when using a method based on maximizing labor income. This is because optimizing farm management based on maximizing labor income does not give any incentive for lowering GHG emissions. When using a method based on minimizing GHG emissions, feed-related farm characteristics affected the absolute level of the GHG values, but the relative importance of the traits scarcely changed: at each level of efficiency, milk yield and longevity were equally important. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  6. Optimizing microwave photodetection: input-output theory

    NASA Astrophysics Data System (ADS)

    Schöndorf, M.; Govia, L. C. G.; Vavilov, M. G.; McDermott, R.; Wilhelm, F. K.

    2018-04-01

    High-fidelity microwave photon counting is an important tool for various areas, from background radiation analysis in astronomy to the implementation of circuit quantum electrodynamic architectures for the realization of a scalable quantum information processor. In this work we describe a microwave photon counter coupled to a semi-infinite transmission line. We employ input-output theory to examine a continuously driven transmission line as well as traveling photon wave packets. Using analytic and numerical methods, we calculate the conditions on the system parameters necessary to optimize measurement and achieve high detection efficiency. From this we derive a general matching condition on the different system rates under which the measurement process is optimal.

  7. History matching through dynamic decision-making

    PubMed Central

    Maschio, Célio; Santos, Antonio Alberto; Schiozer, Denis; Rocha, Anderson

    2017-01-01

    History matching is the process of modifying the uncertain attributes of a reservoir model to reproduce the real reservoir performance. It is a classical reservoir engineering problem and plays an important role in reservoir management, since the resulting models are used to support decisions in other tasks such as economic analysis and production strategy. This work introduces a dynamic decision-making optimization framework for history matching problems in which new models are generated based on, and guided by, the dynamic analysis of the data of available solutions. The optimization framework follows a ‘learning-from-data’ approach, and includes two optimizer components that use machine learning techniques, such as unsupervised learning and statistical analysis, to uncover patterns of input attributes that lead to good output responses. These patterns are used to support the decision-making process while generating new, and better, history matched solutions. The proposed framework is applied to a benchmark model (UNISIM-I-H) based on the Namorado field in Brazil. Results show the potential of the dynamic decision-making optimization framework for improving the quality of history matching solutions using a substantially smaller number of simulations when compared with a previous work on the same benchmark. PMID:28582413

  8. Correlation pattern recognition: optimal parameters for quality standards control of chocolate marshmallow candy

    NASA Astrophysics Data System (ADS)

    Flores, Jorge L.; García-Torales, G.; Ponce Ávila, Cristina

    2006-08-01

    This paper describes an in situ image recognition system designed to inspect the quality standards of chocolate pops during their production. The essence of the recognition system is the localization of events (i.e., defects) in the input images that affect the quality standards of the pops. To this end, processing modules based on correlation filtering and image segmentation are employed with the objective of measuring the quality standards. We therefore designed the correlation filter and defined a set of features from the correlation plane. The desired values for these parameters are obtained by exploiting information about objects to be rejected in order to find the optimal discrimination capability of the system. Based on this set of features, the pop can be correctly classified. The efficacy of the system has been tested thoroughly under laboratory conditions using at least 50 images containing 3 different types of possible defects.

  9. Process Design and Techno-economic Analysis for Materials to Treat Produced Waters.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heimer, Brandon Walter; Paap, Scott M; Sasan, Koroush

    Significant quantities of water are produced during enhanced oil recovery, making these “produced water” streams attractive candidates for treatment and reuse. However, high concentrations of dissolved silica raise the propensity for fouling. In this paper, we report the design and economic analysis for a new ion exchange process using calcined hydrotalcite (HTC) to remove silica from water. This process improves upon known technologies by minimizing sludge product, reducing process fouling, and lowering energy use. Process modeling outputs included raw material requirements, energy use, and the minimum water treatment price (MWTP). Monte Carlo simulations quantified the impact of uncertainty and variability in process inputs on MWTP. These analyses showed that cost can be significantly reduced if the HTC materials are optimized. Specifically, R&D improving HTC reusability, silica binding capacity, and raw material price can reduce MWTP by 40%, 13%, and 20%, respectively. Optimizing geographic deployment further improves cost competitiveness.

  10. Simulation based optimized beam velocity in additive manufacturing

    NASA Astrophysics Data System (ADS)

    Vignat, Frédéric; Béraud, Nicolas; Villeneuve, François

    2017-08-01

    Manufacturing good parts with additive technologies relies on melt pool dimensions and temperature, which are controlled by manufacturing strategies often decided on the machine side. Strategies are built on a beam path and a variable energy input. Beam paths are often a mix of contour and hatching strategies filling the contours at each slice. Energy input depends on beam intensity and speed and is determined from simple thermal models that control melt pool dimensions and temperature to ensure porosity-free material. These models take into account variations in the thermal environment, such as overhanging surfaces or back-and-forth hatching paths. However, not all situations are correctly handled, and precision is limited. This paper proposes a new method to determine the energy input from a full build-chamber 3D thermal simulation. Using the results of the simulation, the energy is modified to keep the melt pool temperature in a predetermined range. The paper first presents an experimental method to determine the optimal temperature range. In a second part, the method to optimize the beam speed from the simulation results is presented. Finally, the optimized beam path is tested in the EBM machine and the resulting parts are compared with parts built with an ordinary beam path.

  11. Reinforcement-Learning-Based Robust Controller Design for Continuous-Time Uncertain Nonlinear Systems Subject to Input Constraints.

    PubMed

    Liu, Derong; Yang, Xiong; Wang, Ding; Wei, Qinglai

    2015-07-01

    The design of a stabilizing controller for uncertain nonlinear systems with control constraints is a challenging problem. The constrained input, coupled with the inability to identify the uncertainties accurately, motivates the design of stabilizing controllers based on reinforcement-learning (RL) methods. In this paper, a novel RL-based robust adaptive control algorithm is developed for a class of continuous-time uncertain nonlinear systems subject to input constraints. The robust control problem is converted to a constrained optimal control problem by appropriately selecting value functions for the nominal system. Distinct from the typical actor-critic dual networks employed in RL, only one critic neural network (NN) is constructed to derive the approximate optimal control. Meanwhile, unlike the initial stabilizing control often indispensable in RL, there is no special requirement imposed on the initial control. By utilizing Lyapunov's direct method, the closed-loop optimal control system and the estimated weights of the critic NN are proved to be uniformly ultimately bounded. In addition, the derived approximate optimal control is verified to guarantee that the uncertain nonlinear system is stable in the sense of uniform ultimate boundedness. Two simulation examples are provided to illustrate the effectiveness and applicability of the present approach.

  12. Optimally growing boundary layer disturbances in a convergent nozzle preceded by a circular pipe

    NASA Astrophysics Data System (ADS)

    Uzun, Ali; Davis, Timothy B.; Alvi, Farrukh S.; Hussaini, M. Yousuff

    2017-06-01

    We report the findings from a theoretical analysis of optimally growing disturbances in an initially turbulent boundary layer. The motivation behind this study originates from the desire to generate organized structures in an initially turbulent boundary layer via excitation by disturbances that are tailored to be preferentially amplified. Such optimally growing disturbances are of interest for implementation in an active flow control strategy that is investigated for effective jet noise control. Details of the optimal perturbation theory implemented in this study are discussed. The relevant stability equations are derived using both the standard decomposition and the triple decomposition. The chosen test case geometry contains a convergent nozzle, which generates a Mach 0.9 round jet, preceded by a circular pipe. Optimally growing disturbances are introduced at various stations within the circular pipe section to facilitate disturbance energy amplification upstream of the favorable pressure gradient zone within the convergent nozzle, which has a stabilizing effect on disturbance growth. Effects of temporal frequency, disturbance input and output plane locations as well as separation distance between output and input planes are investigated. The results indicate that optimally growing disturbances appear in the form of longitudinal counter-rotating vortex pairs, whose size can be on the order of several times the input plane mean boundary layer thickness. The azimuthal wavenumber, which represents the number of counter-rotating vortex pairs, is found to generally decrease with increasing separation distance. Compared to the standard decomposition, the triple decomposition analysis generally predicts relatively lower azimuthal wavenumbers and significantly reduced energy amplification ratios for the optimal disturbances.

  13. Surrogates for numerical simulations; optimization of eddy-promoter heat exchangers

    NASA Technical Reports Server (NTRS)

    Patera, Anthony T.

    1993-01-01

    Although the advent of fast and inexpensive parallel computers has rendered numerous previously intractable calculations feasible, many numerical simulations remain too resource-intensive to be directly inserted in engineering optimization efforts. An attractive alternative to direct insertion considers models for computational systems: the expensive simulation is invoked only to construct and validate a simplified input-output model; this simplified input-output model then serves as a simulation surrogate in subsequent engineering optimization studies. A simple 'Bayesian-validated' statistical framework for the construction, validation, and purposive application of static computer simulation surrogates is presented. As an example, dissipation-transport optimization of laminar-flow eddy-promoter heat exchangers is considered: parallel spectral element Navier-Stokes calculations serve to construct and validate surrogates for the flowrate and Nusselt number; these surrogates then represent the originating Navier-Stokes equations in the ensuing design process.

  14. Factors affecting water balance and percolate production for a landfill in operation.

    PubMed

    Poulsen, Tjalfe G; Møoldrup, Per

    2005-02-01

    Percolate production and precipitation data for a full-scale landfill in operation, measured over a 13-year period, were used to evaluate the impact and importance of the hydrological conditions of landfill sections on percolate production rates. Both active (open) and closed landfill sections were included in the evaluation. A simple top cover model requiring a minimum of input data was used to simulate percolate production as a function of precipitation and landfill section hydrology. The results showed that changes over time in the hydrology of individual landfill sections (such as section closure or plantation of trees on top of closed sections) can change total landfill percolate production by more than 100%; thus, percolate production at an active landfill can be very different from percolate production at the same landfill after closure. Furthermore, plantation of willow on top of closed sections can increase the evapotranspiration rate, thereby reducing percolate production rates by up to 47% compared to a grass cover. This process, however, depends upon the availability of water in the top layer, and so the evaporation rate will be less than optimal during the summer when soil-water contents in the top cover are low.
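A top cover model of the kind this abstract mentions can be sketched as a single soil "bucket": precipitation fills it, evapotranspiration empties it, and water above field capacity drains as percolate. The numbers below are illustrative, not the landfill's data; the sketch only shows why a higher-evapotranspiration cover (willow vs. grass) produces less percolate.

```python
def percolate_series(precip, et, capacity=100.0, storage=50.0):
    """Bucket water balance per period (all quantities in mm):
    storage gains precipitation, loses evapotranspiration (limited by
    available water), and spills percolate above field capacity."""
    out = []
    for p, e in zip(precip, et):
        storage = max(storage + p - e, 0.0)
        perc = max(storage - capacity, 0.0)   # excess drains as percolate
        storage -= perc
        out.append(perc)
    return out

rain = [30, 80, 10, 60]          # mm per period, illustrative
grass_et = [20, 20, 20, 20]      # grass cover
willow_et = [35, 35, 35, 35]     # willow raises evapotranspiration

print(sum(percolate_series(rain, grass_et)))   # more percolate
print(sum(percolate_series(rain, willow_et)))  # less percolate
```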

  15. The role of voice input for human-machine communication.

    PubMed Central

    Cohen, P R; Oviatt, S L

    1995-01-01

    Optimism is growing that the near future will witness rapid growth in human-computer interaction using voice. System prototypes have recently been built that demonstrate speaker-independent real-time speech recognition, and understanding of naturally spoken utterances with vocabularies of 1000 to 2000 words, and larger. Already, computer manufacturers are building speech recognition subsystems into their new product lines. However, before this technology can be broadly useful, a substantial knowledge base is needed about human spoken language and performance during computer-based spoken interaction. This paper reviews application areas in which spoken interaction can play a significant role, assesses potential benefits of spoken interaction with machines, and compares voice with other modalities of human-computer interaction. It also discusses information that will be needed to build a firm empirical foundation for the design of future spoken and multimodal interfaces. Finally, it argues for a more systematic and scientific approach to investigating spoken input and performance with future language technology. PMID:7479803

  16. Dynamic PET of human liver inflammation: impact of kinetic modeling with optimization-derived dual-blood input function.

    PubMed

    Wang, Guobao; Corwin, Michael T; Olson, Kristin A; Badawi, Ramsey D; Sarkar, Souvik

    2018-05-30

    The hallmark of nonalcoholic steatohepatitis is hepatocellular inflammation and injury in the setting of hepatic steatosis. Recent work has indicated that dynamic 18F-FDG PET with kinetic modeling has the potential to assess hepatic inflammation noninvasively, whereas static FDG-PET did not show promise. Because the liver has dual blood supplies, kinetic modeling of dynamic liver PET data is challenging in human studies. The objective of this study is to evaluate and identify a dual-input kinetic modeling approach for dynamic FDG-PET of human liver inflammation. Fourteen human patients with nonalcoholic fatty liver disease were included in the study. Each patient underwent a one-hour dynamic FDG-PET/CT scan and had a liver biopsy within six weeks. Three models were tested for kinetic analysis: the traditional two-tissue compartmental model with an image-derived single-blood input function (SBIF), a model with a population-based dual-blood input function (DBIF), and a modified model with an optimization-derived DBIF obtained through a joint estimation framework. The three models were compared using the Akaike information criterion (AIC), F test and a histopathologic inflammation reference. The results showed that the optimization-derived DBIF model improved the fitting of liver time activity curves and achieved lower AIC values and higher F values than the SBIF and population-based DBIF models in all patients. The optimization-derived model significantly increased FDG K1 estimates by 101% and 27% as compared with the traditional SBIF and population-based DBIF. K1 by the optimization-derived model was significantly associated with histopathologic grades of liver inflammation, while the other two models did not provide statistical significance. In conclusion, modeling of DBIF is critical for kinetic analysis of dynamic liver FDG-PET data in human studies. The optimization-derived DBIF model is more appropriate than SBIF and population-based DBIF for dynamic FDG-PET of liver inflammation. 
© 2018 Institute of Physics and Engineering in Medicine.
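    
    The AIC-based model comparison described above can be sketched numerically. The following is a minimal illustration, not the paper's implementation: it fits two hypothetical uptake models to a synthetic time-activity curve (made-up values, not patient data) and compares them with the least-squares form of the Akaike information criterion.

```python
import numpy as np
from scipy.optimize import curve_fit

def aic(rss, n, k):
    # Akaike information criterion for a least-squares fit:
    # AIC = n*ln(RSS/n) + 2k, with k fitted parameters.
    return n * np.log(rss / n) + 2 * k

# Synthetic time-activity curve (hypothetical values, not patient data)
t = np.linspace(0.1, 60, 120)                 # minutes
rng = np.random.default_rng(0)
y = 5.0 * (1 - np.exp(-0.15 * t)) + rng.normal(0, 0.05, t.size)

def model1(t, a):                 # 1-parameter candidate model
    return a * (1 - np.exp(-0.15 * t))

def model2(t, a, b):              # 2-parameter candidate model
    return a * (1 - np.exp(-b * t))

# Fit each candidate and compare by AIC (lower is better)
for model, k in [(model1, 1), (model2, 2)]:
    p, _ = curve_fit(model, t, y, p0=[1.0] * k, maxfev=5000)
    rss = float(np.sum((y - model(t, *p)) ** 2))
    print(model.__name__, round(aic(rss, t.size, k), 1))
```

    The extra-parameter penalty (2k) is what lets AIC reject a richer model, such as a dual-input function, unless its fit improvement justifies the added parameters.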

  17. Multi-mode horn

    NASA Technical Reports Server (NTRS)

    Neilson, Jeffrey M. (Inventor)

    2002-01-01

    A horn has an input aperture and an output aperture, and comprises a conductive inner surface formed by rotating a curve about a central axis. The curve comprises a first arc having an input aperture end and a transition end, and a second arc having a transition end and an output aperture end. When rotated about the central axis, the first arc input aperture end forms an input aperture, and the second arc output aperture end forms an output aperture. The curve is then optimized to provide a mode conversion which maximizes the power transfer of input energy to the Gaussian mode at the output aperture.

  18. Wireless Sensor Network Optimization: Multi-Objective Paradigm

    PubMed Central

    Iqbal, Muhammad; Naeem, Muhammad; Anpalagan, Alagan; Ahmed, Ashfaq; Azam, Muhammad

    2015-01-01

    Optimization problems relating to wireless sensor network planning, design, deployment and operation often give rise to multi-objective formulations in which multiple desirable objectives compete with each other, and the decision maker has to select one of the tradeoff solutions. These objectives may or may not conflict with each other. Depending on the nature of the application, the sensing scenario and the input/output of the problem, the type of optimization problem changes. To address the varied optimization problems relating to wireless sensor network design, deployment, operation, planning and placement, there exists a plethora of optimization solution types. We review and analyze different desirable objectives to show whether they conflict with each other, support each other or are design dependent. We also present a generic multi-objective optimization problem relating to wireless sensor networks which consists of input variables, required output, objectives and constraints. A list of constraints is also presented to give an overview of the different constraints that are considered when formulating optimization problems in wireless sensor networks. Given the multi-faceted coverage of this article, it should open up new avenues of research in the area of multi-objective optimization for wireless sensor networks. PMID:26205271
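    
    The tradeoff solutions mentioned above are the non-dominated (Pareto-optimal) points of the multi-objective problem. A minimal sketch, using hypothetical coverage/energy numbers rather than any formulation from the article:

```python
def pareto_front(points):
    # Return the non-dominated subset of (coverage, energy) points,
    # maximizing coverage while minimizing energy. A point dominates
    # another if it is at least as good in both and strictly better in one.
    front = []
    for i, (c1, e1) in enumerate(points):
        dominated = any(
            (c2 >= c1 and e2 <= e1) and (c2 > c1 or e2 < e1)
            for j, (c2, e2) in enumerate(points) if j != i
        )
        if not dominated:
            front.append((c1, e1))
    return front

# Hypothetical candidate WSN configurations: (coverage %, energy cost)
candidates = [(90, 50), (85, 30), (95, 80), (80, 60)]
front = pareto_front(candidates)
print(front)
```

    The decision maker then picks one point from the front according to the priorities of the deployment; no single configuration is best in both objectives at once.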

  19. The effect of signal variability on the histograms of anthropomorphic channel outputs: factors resulting in non-normally distributed data

    NASA Astrophysics Data System (ADS)

    Elshahaby, Fatma E. A.; Ghaly, Michael; Jha, Abhinav K.; Frey, Eric C.

    2015-03-01

    Model observers are widely used in medical imaging for the optimization and evaluation of instrumentation, acquisition parameters, and image reconstruction and processing methods. The channelized Hotelling observer (CHO) is a commonly used model observer in nuclear medicine and has seen increasing use in other modalities. An anthropomorphic CHO consists of a set of channels that model some aspects of the human visual system and the Hotelling observer, which is the optimal linear discriminant. The optimality of the CHO is based on the assumption that the channel outputs for data with and without the signal present have a multivariate normal distribution with equal class covariance matrices. The channel outputs result from the dot product of channel templates with input images and are thus the sum of a large number of random variables. The central limit theorem is thus often used to justify the assumption that the channel outputs are normally distributed. In this work, we examine this assumption for realistically simulated nuclear medicine images when various types of signal variability are present.
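    
    The channel-output computation described above (dot products of channel templates with input images, followed by the Hotelling discriminant) can be sketched as follows. The templates and image ensembles here are random stand-ins, not the anthropomorphic channels or simulated nuclear medicine images of the study:

```python
import numpy as np

rng = np.random.default_rng(1)
npix, nchan, nimg = 64, 3, 200

# Hypothetical channel templates (stand-ins for frequency-selective channels)
U = rng.normal(size=(npix, nchan))

# Simulated 1-D "image" ensembles: signal-absent and signal-present
signal = np.zeros(npix)
signal[10:20] = 1.0
imgs0 = rng.normal(size=(nimg, npix))
imgs1 = rng.normal(size=(nimg, npix)) + signal

# Channel outputs: dot product of each image with each channel template
v0 = imgs0 @ U        # shape (nimg, nchan)
v1 = imgs1 @ U

# Hotelling template in channel space: w = S^-1 (mean1 - mean0),
# with S the average of the two class covariance matrices
S = 0.5 * (np.cov(v0.T) + np.cov(v1.T))
w = np.linalg.solve(S, v1.mean(0) - v0.mean(0))

# Test statistics and observer SNR
t0, t1 = v0 @ w, v1 @ w
snr = (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t1.var() + t0.var()))
print(round(snr, 2))
```

    The normality assumption examined in the paper concerns the histograms of v0 and v1; when signal variability makes them non-normal, the equal-covariance Hotelling solution above is no longer guaranteed to be optimal.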

  20. Real-time PCR probe optimization using design of experiments approach.

    PubMed

    Wadle, S; Lehnert, M; Rubenwolf, S; Zengerle, R; von Stetten, F

    2016-03-01

    Primer and probe sequence designs are among the most critical input factors in real-time polymerase chain reaction (PCR) assay optimization. In this study, we present the use of a statistical design of experiments (DOE) approach as a general guideline for probe optimization and, more specifically, focus on the design optimization of label-free hydrolysis probes designated as mediator probes (MPs), which are used in reverse transcription MP PCR (RT-MP PCR). The effect of three input factors on assay performance was investigated: the distance between the primer and the mediator probe cleavage site; the dimer stability of the MP and the target sequence (influenza B virus); and the dimer stability of the mediator and the universal reporter (UR). The results indicated that the latter dimer stability had the greatest influence on assay performance, with RT-MP PCR efficiency increased by up to 10% through changes to this input factor. With an optimal design configuration, a detection limit of 3-14 target copies/10 μl reaction could be achieved. This improved detection limit was confirmed for another UR design and for a second target sequence, human metapneumovirus, with 7-11 copies/10 μl reaction detected in an optimum case. The DOE approach for improving oligonucleotide designs for real-time PCR not only produces excellent results but may also reduce the number of experiments that need to be performed, thus reducing costs and experimental times.
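    
    A full factorial DOE over the three input factors named above can be enumerated directly. The two-level labels below are hypothetical placeholders, not the actual sequence designs of the study:

```python
from itertools import product

# Hypothetical two-level settings for the three probe-design factors
factors = {
    "primer_probe_distance": ["short", "long"],
    "mp_target_dimer_dG":    ["weak", "strong"],
    "mediator_UR_dimer_dG":  ["weak", "strong"],
}

# Full factorial design: every combination of factor levels (2^3 = 8 runs)
design = [dict(zip(factors, levels)) for levels in product(*factors.values())]
for run_id, run in enumerate(design, 1):
    print(run_id, run)
```

    In practice a fractional design or response-surface model is fitted to the measured PCR efficiencies of such runs, which is how a DOE approach reduces the number of experiments relative to one-factor-at-a-time optimization.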

  1. A new high dynamic range ROIC with smart light intensity control unit

    NASA Astrophysics Data System (ADS)

    Yazici, Melik; Ceylan, Omer; Shafique, Atia; Abbasi, Shahbaz; Galioglu, Arman; Gurbuz, Yasar

    2017-05-01

    This paper presents a new high dynamic range ROIC with a smart pixel that consists of two pre-amplifiers controlled by a circuit inside the pixel. Each pixel automatically decides which pre-amplifier is used according to the incoming illumination level. Instead of a single pre-amplifier, two input pre-amplifiers, optimized for different signal levels, are placed inside each pixel. A smart circuit mechanism, which selects the best input circuit according to the incoming light level, is also designed for each pixel. In short, an individual pixel has the ability to select the input amplifier circuit that yields the highest SNR for the incoming signal level. A 32 × 32 ROIC prototype chip was designed in 0.18 μm CMOS technology to demonstrate the concept. The prototype is optimized for the NIR and SWIR bands. Instead of a detector, process-variation-optimized current sources are placed inside the ROIC. The chip achieves a minimum input-referred noise of 8.6 e- and a dynamic range of 98.9 dB, the highest dynamic range reported in the literature for analog ROICs in the SWIR band. It operates at room temperature, and power consumption is 2.8 μW per pixel.

  2. Random Predictor Models for Rigorous Uncertainty Quantification: Part 2

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. In contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameters, and thus of the output, are prescribed. As such, they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfy mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would fall within the predicted ranges, is bounded rigorously.

  3. Random Predictor Models for Rigorous Uncertainty Quantification: Part 1

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. In contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, and thus of the predicted output, are prescribed. As such, they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfy mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would fall within the predicted ranges, can be bounded tightly and rigorously.

  4. Optimal Control for Aperiodic Dual-Rate Systems With Time-Varying Delays

    PubMed Central

    Salt, Julián; Guinaldo, María; Chacón, Jesús

    2018-01-01

    In this work, we consider a dual-rate scenario with slow input and fast output. Our objective is the maximization of the decay rate of the system through the suitable choice of the n-input signals between two measures (periodic sampling) and their times of application. The optimization algorithm is extended for time-varying delays in order to make possible its implementation in networked control systems. We provide experimental results in an air levitation system to verify the validity of the algorithm in a real plant. PMID:29747441

  5. Control Systems with Normalized and Covariance Adaptation by Optimal Control Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T. (Inventor); Burken, John J. (Inventor); Hanson, Curtis E. (Inventor)

    2016-01-01

    Disclosed is a novel adaptive control method and system called optimal control modification with normalization and covariance adjustment. The invention specifically addresses current challenges with adaptive control in four areas: 1) persistent excitation, 2) complex nonlinear input-output mapping, 3) large inputs and persistent learning, and 4) the lack of stability analysis tools for certification. The invention has been subjected to many simulations and flight tests. The results substantiate the effectiveness of the invention and demonstrate its technical feasibility for use in modern aircraft flight control systems.

  6. On the use of ANN interconnection weights in optimal structural design

    NASA Technical Reports Server (NTRS)

    Hajela, P.; Szewczyk, Z.

    1992-01-01

    The present paper describes the use of the interconnection weights of a multilayer, feedforward network to extract information pertinent to the mapping space that the network is assumed to represent. In particular, these weights can be used to determine an appropriate network architecture and to assess whether an adequate number of training patterns (input-output pairs) has been used for network training. The weight analysis also provides an approach for assessing the influence of each input parameter on a selected output component. The paper shows the significance of this information in decomposition-driven optimal design.
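    
    One established way to assess input influence from interconnection weights, in the spirit of the weight analysis above, is a Garson-style decomposition. This is a generic sketch with random stand-in weights, not necessarily the method used in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical trained weights of a small feedforward net:
# 4 inputs -> 5 hidden units -> 1 output (biases omitted for brevity)
W_ih = rng.normal(size=(4, 5))
W_ho = rng.normal(size=(5, 1))

# Garson-style importance: share of absolute weight each input
# contributes to the output through every hidden unit
contrib = np.abs(W_ih) * np.abs(W_ho).T        # shape (4, 5)
contrib /= contrib.sum(axis=0)                 # normalize per hidden unit
importance = contrib.sum(axis=1)
importance /= importance.sum()                 # fractions summing to 1
print(np.round(importance, 3))
```

    Inputs with a consistently small importance fraction are candidates for removal, which is one way weight information can drive the decomposition of a design problem.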

  7. Real-time edge-enhanced optical correlator

    NASA Technical Reports Server (NTRS)

    Liu, Tsuen-Hsi (Inventor); Cheng, Li-Jen (Inventor)

    1992-01-01

    Edge enhancement of an input image by four-wave mixing of a first write beam with a second write beam in a photorefractive crystal, GaAs, was achieved for VanderLugt optical correlation with an edge-enhanced reference image by optimizing the power ratio of the second write beam to the first write beam (70:1) and the power ratio of a read beam, which carries the reference image, to the first write beam (100:701). Liquid crystal TV panels are employed as spatial light modulators to change the input and reference images in real time.

  8. Optimal Control for Aperiodic Dual-Rate Systems With Time-Varying Delays.

    PubMed

    Aranda-Escolástico, Ernesto; Salt, Julián; Guinaldo, María; Chacón, Jesús; Dormido, Sebastián

    2018-05-09

    In this work, we consider a dual-rate scenario with slow input and fast output. Our objective is the maximization of the decay rate of the system through the suitable choice of the n-input signals between two measures (periodic sampling) and their times of application. The optimization algorithm is extended for time-varying delays in order to make possible its implementation in networked control systems. We provide experimental results in an air levitation system to verify the validity of the algorithm in a real plant.

  9. Realization of optimized quantum controlled-logic gate based on the orbital angular momentum of light.

    PubMed

    Zeng, Qiang; Li, Tao; Song, Xinbing; Zhang, Xiangdong

    2016-04-18

    We propose and experimentally demonstrate an optimized setup to implement the quantum controlled-NOT operation using polarization and orbital angular momentum qubits. This device is more adaptive to inputs with various polarizations and can work in both the classical and the quantum single-photon regimes. The logic operations performed by such a setup not only possess high stability and a polarization-free character but can also be easily extended to deal with multi-qubit input states. As an example, the experimental implementation of a generalized three-qubit Toffoli gate is presented.

  10. Multivariate data analysis on historical IPV production data for better process understanding and future improvements.

    PubMed

    Thomassen, Yvonne E; van Sprang, Eric N M; van der Pol, Leo A; Bakker, Wilfried A M

    2010-09-01

    Historical manufacturing data can potentially harbor a wealth of information for process optimization and for enhancement of efficiency and robustness. To extract useful information, multivariate data analysis (MVDA) using projection methods is often applied. In this contribution, the results obtained from applying MVDA to data from inactivated polio vaccine (IPV) production runs are described. Data from over 50 batches at two different production scales (700-L and 1,500-L) were available. The explorative analysis performed on single unit operations indicated consistent manufacturing. Known outliers (e.g., rejected batches) were identified using principal component analysis (PCA). The source of operational variation was pinpointed to variation of inputs such as media. Other relevant process parameters were in control and, using these manufacturing data, could not be correlated to product quality attributes. The knowledge of the IPV production process gained, not only from the MVDA but also from digitalizing the available historical data, has proven useful for troubleshooting, for understanding the limitations of the available data, and for identifying opportunities for improvement. 2010 Wiley Periodicals, Inc.
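    
    PCA-based outlier screening of batch records, as used above, can be sketched with a plain SVD. The batch matrix and the 3x-median flagging rule below are illustrative assumptions, not the study's data or criterion:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical batch records: 50 batches x 5 process parameters,
# with one deliberately deviating batch (a stand-in for a rejected batch)
X = rng.normal(size=(50, 5))
X[13] += 6.0

# PCA via SVD on mean-centered data
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T          # project onto first two principal components

# Flag batches whose score distance exceeds 3x the median distance
d = np.linalg.norm(scores, axis=1)
outliers = np.where(d > 3 * np.median(d))[0]
print(outliers)
```

    Real MVDA practice would instead use Hotelling's T² and residual (Q) statistics with proper control limits, but the projection step is the same.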

  11. Simplex-based optimization of numerical and categorical inputs in early bioprocess development: Case studies in HT chromatography.

    PubMed

    Konstantinidis, Spyridon; Titchener-Hooker, Nigel; Velayudhan, Ajoy

    2017-08-01

    Bioprocess development studies often involve the investigation of numerical and categorical inputs via the adoption of Design of Experiments (DoE) techniques. An attractive alternative is the deployment of a grid compatible Simplex variant which has been shown to yield optima rapidly and consistently. In this work, the method is combined with dummy variables and it is deployed in three case studies wherein spaces are comprised of both categorical and numerical inputs, a situation intractable by traditional Simplex methods. The first study employs in silico data and lays out the dummy variable methodology. The latter two employ experimental data from chromatography based studies performed with the filter-plate and miniature column High Throughput (HT) techniques. The solute of interest in the former case study was a monoclonal antibody whereas the latter dealt with the separation of a binary system of model proteins. The implemented approach prevented the stranding of the Simplex method at local optima, due to the arbitrary handling of the categorical inputs, and allowed for the concurrent optimization of numerical and categorical, multilevel and/or dichotomous, inputs. The deployment of the Simplex method, combined with dummy variables, was therefore entirely successful in identifying and characterizing global optima in all three case studies. The Simplex-based method was further shown to be of equivalent efficiency to a DoE-based approach, represented here by D-Optimal designs. Such an approach failed, however, to both capture trends and identify optima, and led to poor operating conditions. It is suggested that the Simplex-variant is suited to development activities involving numerical and categorical inputs in early bioprocess development. © 2017 The Authors. Biotechnology Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Impact of Fishery Policy on Fishery Manufacture Output, Economy and Welfare in Indonesia

    NASA Astrophysics Data System (ADS)

    Firmansyah; Oktavilia, Shanty; Sugiyanto, F. X.; Hamzah, Ibnu N.

    2018-02-01

    The fisheries sector and the fish manufacturing industry are promising sectors for Indonesia owing to their huge potential, which has not yet been exploited optimally. In fact, these sectors can generate a large amount of foreign exchange, and the Government has paid significant attention to their development. This study simulates the impact of fishery policies on the production of the fish manufacturing industry, the national economy and welfare in Indonesia. Employing the Input-Output Analysis approach, the impacts of various government policy scenarios are developed, covering technical fisheries policy as well as infrastructure development policies in the fisheries sector. This study indicates that the policies in the fisheries sector increase the output of the fishery, the production of the fish manufacturing industry, and the sectoral and national outputs, as well as the level of national income.
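    
    Input-Output Analysis of this kind rests on the standard Leontief model, in which total sectoral output x satisfies x = Ax + d for a technical coefficient matrix A and final demand d. A minimal sketch with hypothetical coefficients, not the Indonesian input-output table:

```python
import numpy as np

# Hypothetical 3-sector technical coefficient matrix A (fisheries,
# fish manufacturing, rest of economy); A[i, j] is the input from
# sector i needed per unit of sector j's output.
A = np.array([
    [0.10, 0.30, 0.02],
    [0.05, 0.10, 0.03],
    [0.20, 0.25, 0.30],
])

# Leontief model: total output x solves x = A x + d, i.e. x = (I - A)^-1 d
I = np.eye(3)
d = np.array([100.0, 150.0, 500.0])          # final demand by sector
x = np.linalg.solve(I - A, d)

# Impact of a policy that raises fishery final demand by 10 units:
# the output response exceeds 10 because of inter-sector linkages
dx = np.linalg.solve(I - A, np.array([10.0, 0.0, 0.0]))
print(np.round(x, 1), np.round(dx, 2))
```

    The multiplier effect visible in dx (every sector's output rises, and the fishery's own rise exceeds the demand shock) is what lets such studies attribute economy-wide output and income gains to a sector-specific policy.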

  13. Fixed-target hadron production experiments

    NASA Astrophysics Data System (ADS)

    Popov, Boris A.

    2015-08-01

    Results from fixed-target hadroproduction experiments (HARP, MIPP, NA49 and NA61/SHINE), as well as their implications for cosmic ray and neutrino physics, are reviewed. HARP measurements have been used for predictions of the neutrino beams in the K2K and MiniBooNE/SciBooNE experiments and are also being used to improve predictions of the muon yields in extensive air showers (EAS) and of the atmospheric neutrino fluxes, as well as to help in the optimization of neutrino factory and super-beam designs. Recent measurements released by the NA61/SHINE experiment are of significant importance for a precise prediction of the J-PARC neutrino beam used for the T2K experiment and for the interpretation of EAS data. These hadroproduction experiments also provide a large amount of input for the validation and tuning of hadron production models in Monte Carlo generators.

  14. Improved assessment of gross and net primary productivity of Canada's landmass

    NASA Astrophysics Data System (ADS)

    Gonsamo, Alemu; Chen, Jing M.; Price, David T.; Kurz, Werner A.; Liu, Jane; Boisvenue, Céline; Hember, Robbie A.; Wu, Chaoyang; Chang, Kuo-Hsien

    2013-12-01

    We assess Canada's gross primary productivity (GPP) and net primary productivity (NPP) using the boreal ecosystem productivity simulator (BEPS) at 250 m spatial resolution with improved input parameter and driver fields and improved phenology and nutrient-release parameterization schemes. BEPS is a process-based two-leaf enzyme kinetic terrestrial ecosystem model designed to simulate energy, water, and carbon (C) fluxes using spatial data sets of meteorology, remotely sensed land surface variables, soil properties, and photosynthesis and respiration rate parameters. Two improved key land surface variables, leaf area index (LAI) and land cover type, are derived at 250 m from the Moderate Resolution Imaging Spectroradiometer sensor. For diagnostic error assessment, we use nine forest flux tower sites where all measured C flux, meteorology, and ancillary data sets are available. The errors due to input drivers and parameters are then independently corrected for the Canada-wide GPP and NPP simulations. The optimized LAI, for example, reduced the absolute bias in GPP from 20.7% to 1.1% for hourly BEPS simulations. Following the error diagnostics and corrections, daily GPP and NPP are simulated over Canada at 250 m spatial resolution, the highest-resolution simulation yet for the country or any other comparable region. Total NPP (GPP) for Canada's land area was 1.27 (2.68) Pg C for 2008, with forests contributing 1.02 (2.2) Pg C. Annual comparisons between measured and simulated GPP show that the mean differences are not statistically significant (p > 0.05, paired t test). The main BEPS simulation error sources are from the driver fields.

  15. Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Xu; Tuo, Rui; Jeff Wu, C. F.

    Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs at a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. In simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments.

  16. The DOPEX code: An application of the method of steepest descent to laminated-shield-weight optimization with several constraints

    NASA Technical Reports Server (NTRS)

    Lahti, G. P.

    1972-01-01

    A two- or three-constraint, two-dimensional radiation shield weight optimization procedure and a computer program, DOPEX, are described. The DOPEX code uses the steepest descent method to alter a set of initial (input) thicknesses for a shield configuration to achieve a minimum weight while simultaneously satisfying dose constraints. The code assumes an exponential dose-shield thickness relation with parameters specified by the user. The code also assumes that dose rates in each principal direction depend only on the thicknesses in that direction. Code input instructions, a FORTRAN 4 listing, and a sample problem are given. Typical computer time required to optimize a seven-layer shield is about 0.1 minute on an IBM 7094-2.
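    
    The DOPEX idea (steepest descent on layer thicknesses under an exponential dose-thickness relation) can be sketched as below. All numbers, the penalty formulation and the log-domain constraint handling are assumptions for illustration; the actual code is the FORTRAN 4 program described above:

```python
import numpy as np

# Minimize shield weight sum(rho_i * t_i) subject to an exponential
# dose-thickness relation D(t) = D0 * exp(-sum(mu_i * t_i)) <= Dmax.
# Hypothetical three-layer shield parameters:
rho = np.array([7.8, 1.0, 11.3])     # layer densities (weight per thickness)
mu = np.array([0.5, 0.1, 1.2])       # attenuation parameters
D0, Dmax = 1e6, 1.0

def penalized(t, c=100.0):
    # weight plus a quadratic penalty on the (log-domain) dose violation
    slack = np.log(D0 / Dmax) - mu @ t
    return rho @ t + c * max(0.0, slack) ** 2

def grad(t, h=1e-6):
    # central-difference numerical gradient of the penalized objective
    g = np.zeros_like(t)
    for i in range(t.size):
        e = np.zeros_like(t)
        e[i] = h
        g[i] = (penalized(t + e) - penalized(t - e)) / (2 * h)
    return g

t = np.array([5.0, 5.0, 5.0])        # initial (input) thicknesses
for _ in range(5000):
    # steepest descent step, projected onto non-negative thicknesses
    t = np.maximum(t - 1e-3 * grad(t), 0.0)

print(np.round(t, 2), round(D0 * np.exp(-mu @ t), 3))
```

    The descent drives thickness into the layer with the best weight-per-attenuation ratio while keeping the dose near its limit, which is the qualitative behavior a laminated-shield optimizer should show.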

  17. Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion

    DOE PAGES

    He, Xu; Tuo, Rui; Jeff Wu, C. F.

    2017-01-31

    Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs at a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. In simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments.

  18. Optimal design and uncertainty quantification in blood flow simulations for congenital heart disease

    NASA Astrophysics Data System (ADS)

    Marsden, Alison

    2009-11-01

    Recent work has demonstrated substantial progress in capabilities for patient-specific cardiovascular flow simulations. Recent advances include increasingly complex geometries, physiological flow conditions, and fluid-structure interaction. However, inputs to these simulations, including medical image data, catheter-derived pressures and material properties, can have significant uncertainties associated with them. For simulations to predict clinically useful and reliable output information, it is necessary to quantify the effects of input uncertainties on the outputs of interest. In addition, blood flow simulation tools can now be efficiently coupled to shape optimization algorithms for surgery design applications, and these tools should incorporate uncertainty information. We present a unified framework to systematically and efficiently account for uncertainties in simulations using adaptive stochastic collocation. In addition, we present a framework for derivative-free optimization of cardiovascular geometries and layer these tools to perform optimization under uncertainty. These methods are demonstrated using simulations and surgery optimization to improve hemodynamics in pediatric cardiology applications.

  19. Consideration of plant behaviour in optimal servo-compensator design

    NASA Astrophysics Data System (ADS)

    Moase, W. H.; Manzie, C.

    2016-07-01

    Where the most prevalent optimal servo-compensator formulations penalise the behaviour of an error system, this paper considers the problem of additionally penalising the actual states and inputs of the plant. Doing so has the advantage of enabling the penalty function to better resemble an economic cost. This is especially true of problems where control effort needs to be sensibly allocated across weakly redundant inputs or where one wishes to use penalties to soft-constrain certain states or inputs. It is shown that, although the resulting cost function grows unbounded as its horizon approaches infinity, it is possible to formulate an equivalent optimisation problem with a bounded cost. The resulting optimisation problem is similar to those in earlier studies but has an additional 'correction term' in the cost function, and a set of equality constraints that arise when there are redundant inputs. A numerical approach to solve the resulting optimisation problem is presented, followed by simulations on a micro-macro positioner that illustrate the benefits of the proposed servo-compensator design approach.

  20. Provenance-aware optimization of workload for distributed data production

    NASA Astrophysics Data System (ADS)

    Makatun, Dzmitry; Lauret, Jérôme; Rudová, Hana; Šumbera, Michal

    2017-10-01

    Distributed data processing in High Energy and Nuclear Physics (HENP) is a prominent example of big data analysis. With petabytes of data being processed at tens of computational sites with thousands of CPUs, standard job scheduling approaches either do not address the problem complexity well or are dedicated to only one specific aspect of the problem (CPU, network or storage). We previously developed a new job scheduling approach dedicated to distributed data production, an essential part of data processing in HENP (preprocessing in big data terminology). In this contribution, we discuss load balancing with multiple data sources and data replication, present recent improvements made to our planner, and provide results of simulations which demonstrate the advantage over standard scheduling policies for the new use case. Multiple data sources (provenance) are common in the computing models of many applications, where the data may be copied to several destinations. The initial input data set would hence already be partially replicated to multiple locations, and the task of the scheduler is to maximize the overall computational throughput considering possible data movements and CPU allocation. The studies have shown that our approach can provide a significant gain in overall computational performance in a wide scope of simulations considering realistic sizes of computational Grids and various input data distributions.

  1. Large-area landslide susceptibility with optimized slope-units

    NASA Astrophysics Data System (ADS)

    Alvioli, Massimiliano; Marchesini, Ivan; Reichenbach, Paola; Rossi, Mauro; Ardizzone, Francesca; Fiorucci, Federica; Guzzetti, Fausto

    2017-04-01

    A Slope-Unit (SU) is a type of morphological terrain unit bounded by drainage and divide lines that maximize the within-unit homogeneity and the between-unit heterogeneity across distinct physical and geographical boundaries [1]. Compared to other terrain subdivisions, SU are morphological terrain units well related to the natural (i.e., geological, geomorphological, hydrological) processes that shape and characterize natural slopes. This makes SU easily recognizable in the field or in topographic base maps, and well suited for environmental and geomorphological analysis, in particular for landslide susceptibility (LS) modelling. An optimal subdivision of an area into a set of SU depends on multiple factors: the size and complexity of the study area, the quality and resolution of the available terrain elevation data, the purpose of the terrain subdivision, and the scale and resolution of the phenomena for which SU are delineated. We use the recently developed r.slopeunits software [2,3] for the automatic, parametric delineation of SU within the open source GRASS GIS based on terrain elevation data and a small number of user-defined parameters. The software provides subdivisions consisting of SU with different shapes and sizes, as a function of the input parameters. In this work, we describe a procedure for the optimal selection of the user parameters through the production of a large number of realizations of the LS model. We tested the software and the optimization procedure in a 2,000 km2 area in Umbria, Central Italy. For LS zonation we adopt a logistic regression model implemented in a well-known software package [4,5], using about 50 independent variables. To select the optimal SU partition for LS zonation, we define a metric able to quantify simultaneously: (i) slope-unit internal homogeneity; (ii) slope-unit external heterogeneity; and (iii) landslide susceptibility model performance.
    To this end, we define a comprehensive objective function S as the product of three normalized objective functions addressing points (i)-(iii) independently. We use an intra-segment variance function V, Moran's autocorrelation index I, and the AUCROC function R arising from the application of the logistic regression model. Maximization of the objective function S = f(I,V,R) as a function of the r.slopeunits input parameters provides an objective and reproducible way to select the optimal parameter combination for a proper SU subdivision for LS modelling. We further perform an analysis of the statistical significance of the LS models as a function of the r.slopeunits input parameters, focusing on the degree of coarseness of each subdivision. We find that the logistic regression model, when applied to subdivisions with a large average SU size, has a very poor statistical significance, with only a few (5%, typically lithological) variables being used in the regression due to the large heterogeneity of all variables within each unit, while up to 35% of the variables are used when SU are very small. This behavior was largely expected and provides further evidence that an objective method to select the SU size is highly desirable. [1] Guzzetti, F. et al., Geomorphology 31 (1999) 181-216. [2] Alvioli, M. et al., Geoscientific Model Development 9 (2016) 3975-3991. [3] http://geomorphology.irpi.cnr.it/tools/slope-units [4] Rossi, M. et al., Geomorphology 114 (2010) 129-142. [5] Rossi, M. and Reichenbach, P., Geoscientific Model Development 9 (2016) 3533-3543.
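    
    Selecting a parameter combination by maximizing a product of normalized objectives, in the spirit of S = f(I,V,R) above, can be sketched as follows. The candidate scores are invented, and the assumed preference directions (lower V and I, higher R) are illustrative only; the paper's own normalization may differ:

```python
import numpy as np

# Hypothetical scores for four candidate slope-unit subdivisions, one per
# r.slopeunits parameter combination: intra-segment variance V, Moran's
# autocorrelation index I, and susceptibility-model AUC R.
V = np.array([0.30, 0.20, 0.45, 0.25])
I = np.array([0.60, 0.40, 0.70, 0.35])
R = np.array([0.78, 0.82, 0.74, 0.80])

def norm01(x):
    # min-max normalization to [0, 1]
    return (x - x.min()) / (x.max() - x.min())

# Comprehensive objective S: product of the three normalized objectives,
# flipping V and I so that a higher S is always better (assumed directions)
S = (1 - norm01(V)) * (1 - norm01(I)) * norm01(R)
best = int(np.argmax(S))
print(best, np.round(S, 3))
```

    Using a product rather than a weighted sum means that a candidate scoring zero on any one normalized objective is eliminated outright, which enforces a balance among homogeneity, heterogeneity and model performance.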

  2. The optimal ecological factors and the denitrification population of a denitrifying process for sulfate-reducing bacteria inhibition

    NASA Astrophysics Data System (ADS)

    Li, Chunying

    2018-02-01

    Sulfate-reducing bacteria (SRB) have great negative impacts on oil production in the Daqing Oil Field. A continuous-flow anaerobic baffled reactor (ABR) is applied to investigate the feasibility and optimal ecological factors for the inhibition of SRB by denitrifying bacteria (DNB). The results showed that the SO42- to NO3- concentration ratio (SO42-/NO3-) is the most important ecological factor. The input of NO3- and a lower COD can effectively enhance the inhibition of S2- production. The effective time of sulfate reduction is 6 h. Complete inhibition of SRB is obtained when the influent COD concentration is 600 mg/L, the SO42-/NO3- ratio is 1/1 (600 mg/L each), and N is added simultaneously in the 2# and 5# ABR chambers. By extracting the total DNA of wastewater from the effective chamber, 16S rDNA clones were constructed. Proteobacteria accounted for 84% of the total clones, and the dominant species was Neisseria. Sixteen percent of the total clones were Bacilli of Firmicutes. This indicated that DNB inhibition of SRB is effective and feasible.

  3. Integration of artificial intelligence methods and life cycle assessment to predict energy output and environmental impacts of paddy production.

    PubMed

    Nabavi-Pelesaraei, Ashkan; Rafiee, Shahin; Mohtasebi, Seyed Saeid; Hosseinzadeh-Bandbafha, Homa; Chau, Kwok-Wing

    2018-08-01

    Prediction of agricultural energy output and environmental impacts plays an important role in energy management and conservation of the environment, as it can help us evaluate agricultural energy efficiency, commission crop production systems, and detect and diagnose faults in them. Agricultural energy output and environmental impacts can be readily predicted by artificial intelligence (AI), owing to its ease of use and adaptability in seeking optimal solutions rapidly, as well as its use of historical data to predict future agricultural energy use patterns under constraints. This paper conducts energy output and environmental impact prediction of paddy production in Guilan province, Iran, based on two AI methods: artificial neural networks (ANNs) and an adaptive neuro-fuzzy inference system (ANFIS). The amounts of energy input and output are 51,585.61 MJ kg-1 and 66,112.94 MJ kg-1, respectively, in paddy production. Life Cycle Assessment (LCA) is used to evaluate the environmental impacts of paddy production. Results show that, in paddy production, in-farm emission is a hotspot in the global warming, acidification, and eutrophication impact categories. An ANN model with a 12-6-8-1 structure is selected as the best one for predicting energy output. The correlation coefficient (R) varies from 0.524 to 0.999 in training for energy input and environmental impacts in the ANN models. The ANFIS model is developed based on a hybrid learning algorithm, with R for predicting output energy being 0.860 and, for environmental impacts, varying from 0.944 to 0.997. Results indicate that the multi-level ANFIS is a useful tool for managers in large-scale planning for forecasting energy output and environmental indices of agricultural production systems, owing to its higher computation speed compared with the ANN model, despite the ANN's higher accuracy. Copyright © 2018 Elsevier B.V. All rights reserved.
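The reported 12-6-8-1 network structure (12 inputs, hidden layers of 6 and 8 neurons, one output) can be sketched as a plain forward pass. The weights below are random placeholders rather than the trained model, and the tanh hidden activations and linear output are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes for the 12-6-8-1 network reported as the best energy-output
# predictor: 12 inputs (e.g., fuel, fertilizer, seed, labor energies), two
# hidden layers of 6 and 8 neurons, and a single output (energy output).
sizes = [12, 6, 8, 1]
weights = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """One forward pass with tanh hidden activations and a linear output."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.tanh(x @ W + b)
    return x @ weights[-1] + biases[-1]

batch = rng.random((4, 12))   # four hypothetical farms, 12 input energies each
pred = forward(batch)
print(pred.shape)             # one energy-output estimate per farm
```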

  4. Data-Driven Zero-Sum Neuro-Optimal Control for a Class of Continuous-Time Unknown Nonlinear Systems With Disturbance Using ADP.

    PubMed

    Wei, Qinglai; Song, Ruizhuo; Yan, Pengfei

    2016-02-01

    This paper is concerned with a new data-driven zero-sum neuro-optimal control problem for continuous-time unknown nonlinear systems with disturbance. Based on the input-output data of the nonlinear system, an effective recurrent neural network is introduced to reconstruct the dynamics of the nonlinear system. Considering the system disturbance as a control input, a two-player zero-sum optimal control problem is established. Adaptive dynamic programming (ADP) is developed to obtain the optimal control under the worst case of the disturbance. Three single-layer neural networks, including one critic and two action networks, are employed to approximate the performance index function, the optimal control law, and the disturbance, respectively, thereby facilitating the implementation of the ADP method. Convergence properties of the ADP method are developed to show that the system state converges to a finite neighborhood of the equilibrium. The weight matrices of the critic and the two action networks also converge to finite neighborhoods of their optimal values. Finally, simulation results show the effectiveness of the developed data-driven ADP method.

  5. Aerospace engineering design by systematic decomposition and multilevel optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Giles, G. L.; Barthelemy, J.-F. M.

    1984-01-01

    This paper describes a method for systematic analysis and optimization of large engineering systems, e.g., aircraft, by decomposition of a large task into a set of smaller, self-contained subtasks that can be solved concurrently. The subtasks may be arranged in many hierarchical levels with the assembled system at the top level. Analyses are carried out in each subtask using inputs received from other subtasks, and are followed by optimizations carried out from the bottom up. Each optimization at the lower levels is augmented by analysis of its sensitivity to the inputs received from other subtasks to account for the couplings among the subtasks in a formal manner. The analysis and optimization operations alternate iteratively until they converge to a system design whose performance is maximized with all constraints satisfied. The method, which is still under development, is tentatively validated by test cases in structural applications and an aircraft configuration optimization. It is pointed out that the method is intended to be compatible with the typical engineering organization and the modern technology of distributed computing.

  6. Development and application of computer assisted optimal method for treatment of femoral neck fracture.

    PubMed

    Wang, Monan; Zhang, Kai; Yang, Ning

    2018-04-09

    To help doctors decide on treatment from the standpoint of mechanical analysis, this work built a computer-assisted optimization system for the treatment of femoral neck fracture oriented to clinical application. The system encompassed three parts: a preprocessing module, a finite element mechanical analysis module, and a post-processing module. The preprocessing module included parametric modeling of the bone, the fracture face, and the fixation screws and their positions, as well as input and transmission of model parameters. The finite element mechanical analysis module included grid division, element type setting, material property setting, contact setting, constraint and load setting, analysis method setting, and batch processing. The post-processing module included extraction and display of batch processing results, image generation for batch runs, execution of the optimization program, and display of the optimal result. The system implemented the whole workflow from input of fracture parameters to output of the optimal fixation plan according to a specific patient's real fracture parameters and the optimization rules, demonstrating its effectiveness. The system also has a friendly interface and simple operation, and its functions can be extended quickly by modifying individual modules.

  7. Research on torsional vibration modelling and control of printing cylinder based on particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Wang, Y. M.; Xu, W. C.; Wu, S. Q.; Chai, C. W.; Liu, X.; Wang, S. H.

    2018-03-01

    Torsional oscillation is the dominant vibration form for the impression cylinder of a printing machine (printing cylinder for short), directly restricting increases in printing speed and reducing the quality of the prints. To reduce torsional vibration, an active control method for the printing cylinder is developed. Taking the excitation force and moment from the cylinder gap and the gripper-teeth opening and closing cam mechanism as variable parameters, the authors establish a dynamic mathematical model of torsional vibration for the printing cylinder. The active torsional control method uses a Particle Swarm Optimization (PSO) algorithm to optimize the input parameters of the servo motor. The input torque of the printing cylinder is then optimized and compared with numerical simulation results. The conclusion is that active torsional vibration control based on PSO is an effective method for suppressing torsional vibration of the printing cylinder.
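A minimal PSO loop of the kind used to tune the servo-motor input can be sketched as follows. The cost function here is a stand-in sphere function, not the paper's torsional-vibration model, and all swarm parameters are conventional defaults rather than the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer minimizing f over the box [-5, 5]^dim."""
    x = rng.uniform(-5, 5, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                         # particle velocities
    pbest = x.copy()                             # personal best positions
    pbest_val = np.apply_along_axis(f, 1, x)     # personal best costs
    g = pbest[np.argmin(pbest_val)].copy()       # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia plus cognitive (pbest) and social (gbest) attraction terms.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, -5, 5)
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, f(g)

# Stand-in cost: squared deviation of a 2-parameter torque profile from zero.
best_x, best_val = pso(lambda p: float(np.sum(p ** 2)))
print(best_val)   # converges close to the minimum at the origin
```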

  8. Improving stability margins in discrete-time LQG controllers

    NASA Technical Reports Server (NTRS)

    Oranc, B. Tarik; Phillips, Charles L.

    1987-01-01

    This paper discusses some of the problems encountered in the design of discrete-time stochastic controllers for problems that may adequately be described by the Linear Quadratic Gaussian (LQG) assumptions; namely, the problems of obtaining acceptable relative stability, robustness, and disturbance rejection properties. A dynamic compensator is proposed to replace the optimal full-state-feedback regulator gains at steady state, provided that all states are measurable. The compensator increases the stability margins at the plant input, which may be inadequate in practical applications. Though the optimal regulator has desirable properties, the observer-based controller, as implemented with a Kalman filter in a noisy environment, has inadequate stability margins. The proposed compensator is designed to match the return difference matrix at the plant input to that of the optimal regulator while maintaining the optimality of the state estimates as dictated by the measurement noise characteristics.

  9. Feature Selection and Parameters Optimization of SVM Using Particle Swarm Optimization for Fault Classification in Power Distribution Systems.

    PubMed

    Cho, Ming-Yuan; Hoang, Thi Thom

    2017-01-01

    Fast and accurate fault classification is essential to power system operations. In this paper, in order to classify electrical faults in radial distribution systems, a particle swarm optimization (PSO) based support vector machine (SVM) classifier has been proposed. The proposed PSO based SVM classifier is able to select appropriate input features and optimize SVM parameters to increase classification accuracy. Further, a time-domain reflectometry (TDR) method with a pseudorandom binary sequence (PRBS) stimulus has been used to generate a dataset for purposes of classification. The proposed technique has been tested on a typical radial distribution network to identify ten different types of faults considering 12 given input features generated by using Simulink software and MATLAB Toolbox. The success rate of the SVM classifier is over 97%, which demonstrates the effectiveness and high efficiency of the developed method.

  10. Prediction and optimization of the laccase-mediated synthesis of the antimicrobial compound iodine (I2).

    PubMed

    Schubert, M; Fey, A; Ihssen, J; Civardi, C; Schwarze, F W M R; Mourad, S

    2015-01-10

    An artificial neural network (ANN) and genetic algorithm (GA) were applied to improve the laccase-mediated oxidation of iodide (I(-)) to elemental iodine (I2). Biosynthesis of iodine (I2) was studied with a 5-level-4-factor central composite design (CCD). The generated ANN network was mathematically evaluated by several statistical indices and revealed better results than a classical quadratic response surface (RS) model. Determination of the relative significance of model input parameters, ranking the process parameters in order of importance (pH>laccase>mediator>iodide), was performed by sensitivity analysis. ANN-GA methodology was used to optimize the input space of the neural network model to find optimal settings for the laccase-mediated synthesis of iodine. ANN-GA optimized parameters resulted in a 9.9% increase in the conversion rate. Copyright © 2014 Elsevier B.V. All rights reserved.
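The ANN-GA idea, searching the input space of a fitted model with a genetic algorithm, can be sketched with a toy surrogate. The quadratic surrogate and its optimum at (0.6, 0.8, 0.4, 0.5) are invented placeholders for the trained ANN, and the GA operators are generic choices, not the paper's implementation:

```python
import random

random.seed(2)

# Stand-in surrogate for the trained ANN: predicted iodine conversion as a
# function of normalized (pH, laccase, mediator, iodide) in [0, 1]^4, with a
# hypothetical optimum at (0.6, 0.8, 0.4, 0.5).
OPT = (0.6, 0.8, 0.4, 0.5)
def surrogate(x):
    return 1.0 - sum((xi - oi) ** 2 for xi, oi in zip(x, OPT))

def ga(fitness, dim=4, pop_size=40, gens=60, mut=0.1):
    """Tiny real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
    pop = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            # Two 3-way tournaments pick the parents.
            a, b = (max(random.sample(pop, 3), key=fitness) for _ in range(2))
            # Blend crossover plus Gaussian mutation, clamped to [0, 1].
            child = [min(1.0, max(0.0, 0.5 * (ai + bi) + random.gauss(0.0, mut)))
                     for ai, bi in zip(a, b)]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = ga(surrogate)
print([round(v, 2) for v in best], round(surrogate(best), 3))
```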

  11. Reinforcement learning for adaptive optimal control of unknown continuous-time nonlinear systems with input constraints

    NASA Astrophysics Data System (ADS)

    Yang, Xiong; Liu, Derong; Wang, Ding

    2014-03-01

    In this paper, an adaptive reinforcement-learning-based solution is developed for the infinite-horizon optimal control problem of constrained-input continuous-time nonlinear systems in the presence of nonlinearities with unknown structures. Two different types of neural networks (NNs) are employed to approximate the Hamilton-Jacobi-Bellman equation. That is, a recurrent NN is constructed to identify the unknown dynamical system, and two feedforward NNs are used as the actor and the critic to approximate the optimal control and the optimal cost, respectively. Based on this framework, the action NN and the critic NN are tuned simultaneously, without requiring knowledge of the system drift dynamics. Moreover, by using Lyapunov's direct method, the weights of the action NN and the critic NN are guaranteed to be uniformly ultimately bounded, while keeping the closed-loop system stable. To demonstrate the effectiveness of the present approach, simulation results are provided.

  12. Greenhouse gas impacts of declining hydrocarbon resource quality: Depletion, dynamics, and process emissions

    NASA Astrophysics Data System (ADS)

    Brandt, Adam Robert

    This dissertation explores the environmental and economic impacts of the transition to hydrocarbon substitutes for conventional petroleum (SCPs). First, mathematical models of oil depletion are reviewed, including the Hubbert model, curve-fitting methods, simulation models, and economic models. The benefits and drawbacks of each method are outlined. I discuss the predictive value of the models and our ability to determine if one model type works best. I argue that forecasting oil depletion without also including substitution with SCPs results in unrealistic projections of future energy supply. I next use information-theoretic techniques to test the Hubbert model of oil depletion against five other asymmetric and symmetric curve-fitting models using data from 139 oil-producing regions. I also test the assumptions that production curves are symmetric and that production is more bell-shaped in larger regions. Results show that if symmetry is enforced, Gaussian production curves perform best, while if asymmetry is allowed, asymmetric exponential models prove most useful. I also find strong evidence for asymmetry: production declines are consistently less steep than inclines. In order to understand the impacts of oil depletion on GHG emissions, I developed the Regional Optimization Model for Emissions from Oil Substitutes (ROMEO), an economic optimization model of investment in and production of fuels. Results indicate that incremental emissions (with demand held constant) from SCPs could be 5-20 GtC over the next 50 years. These results are sensitive to the endowment of conventional oil and not sensitive to a carbon tax. If demand can vary, total emissions could decline under a transition because the higher cost of SCPs lessens overall fuel consumption. Lastly, I study the energetic and environmental characteristics of the in situ conversion process (ICP), which utilizes electricity to generate liquid hydrocarbons from oil shale.
I model the energy inputs and outputs of the ICP and use them to calculate its GHG emissions. Energy outputs (as refined liquid fuel) range from 1.2 to 1.6 times the total primary energy inputs. Well-to-tank greenhouse gas emissions range from 30.6 to 37.1 gCeq./MJ of final fuel delivered, 21 to 47% larger than those from conventionally produced petroleum-based fuels.

  13. Full-order optimal compensators for flow control: the multiple inputs case

    NASA Astrophysics Data System (ADS)

    Semeraro, Onofrio; Pralits, Jan O.

    2018-03-01

    Flow control has been the subject of numerous experimental and theoretical works. We analyze full-order, optimal controllers for large dynamical systems in the presence of multiple actuators and sensors. The full-order controllers do not require any preliminary model reduction or low-order approximation: this feature allows us to assess the optimal performance of an actuated flow without relying on any estimation process or further hypothesis on the disturbances. We start from the original technique proposed by Bewley et al. (Meccanica 51(12):2997-3014, 2016. https://doi.org/10.1007/s11012-016-0547-3), the adjoint of the direct-adjoint (ADA) algorithm. The algorithm is iterative and allows bypassing the solution of the algebraic Riccati equation associated with the optimal control problem, typically infeasible for large systems. In this numerical work, we extend the ADA iteration into a more general framework that includes the design of controllers with multiple, coupled inputs and robust controllers (H_{∞} methods). First, we demonstrate our results by showing the analytical equivalence between the full Riccati solutions and the ADA approximations in the multiple inputs case. In the second part of the article, we analyze the performance of the algorithm in terms of convergence of the solution, by comparing it with analogous techniques. We find an excellent scalability with the number of inputs (actuators), making the method a viable way for full-order control design in complex settings. Finally, the applicability of the algorithm to fluid mechanics problems is shown using the linearized Kuramoto-Sivashinsky equation and the Kármán vortex street past a two-dimensional cylinder.

  14. The Development and Use of a Flight Optimization System Model of a C-130E Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Desch, Jeremy D.

    1995-01-01

    The Systems Analysis Branch at NASA Langley Research Center conducts a variety of aircraft design and analysis studies. These studies include the prediction of characteristics of a particular conceptual design, analyses of designs that already exist, and assessments of the impact of technology on current and future aircraft. The FLight OPtimization System (FLOPS) is a tool used for aircraft systems analysis and design. A baseline input model of a Lockheed C-130E was generated for the Flight Optimization System. This FLOPS model can be used to conduct design-trade studies and technology impact assessments. The input model was generated using standard input data such as basic geometries and mission specifications. All of the other data needed to determine the airplane performance are computed internally by FLOPS. The model was then calibrated to reproduce the actual airplane performance from flight test data. This allows a systems analyst to change a specific item of geometry or mission definition in the FLOPS input file and evaluate the resulting change in performance from the output file. The baseline model of the C-130E was used to analyze the effects of implementing upper-wing-surface blowing on the airplane. This involved removing the turboprop engines from the C-130E and replacing them with turbofan engines. An investigation of the improvements in airplane performance with the new engines could then be conducted within the Flight Optimization System. Although a thorough analysis was not completed, the impact of this change on basic mission performance was investigated.

  15. A study of remote sensing as applied to regional and small watersheds. Volume 1: Summary report

    NASA Technical Reports Server (NTRS)

    Ambaruch, R.

    1974-01-01

    The accuracy with which remotely sensed measurements can provide inputs to hydrologic models of watersheds is studied. A series of sensitivity analyses on continuous simulation models of three watersheds determined: (1) optimal values and permissible tolerances of inputs needed to achieve accurate simulation of streamflow from the watersheds; (2) which model inputs can be quantified from remote sensing directly, indirectly, or by inference; and (3) how accurate remotely sensed measurements (from spacecraft or aircraft) must be to provide a basis for quantifying model inputs within permissible tolerances.

  16. Optimization of DSC MRI Echo Times for CBV Measurements Using Error Analysis in a Pilot Study of High-Grade Gliomas.

    PubMed

    Bell, L C; Does, M D; Stokes, A M; Baxter, L C; Schmainda, K M; Dueck, A C; Quarles, C C

    2017-09-01

    The optimal TE must be calculated to minimize the variance in CBV measurements made with DSC MR imaging. Simulations can be used to determine the influence of the TE on CBV, but they may not adequately recapitulate the in vivo heterogeneity of precontrast T2*, contrast agent kinetics, and the biophysical basis of contrast agent-induced T2* changes. The purpose of this study was to combine quantitative multiecho DSC MRI T2* time curves with error analysis in order to compute the optimal TE for a traditional single-echo acquisition. Eleven subjects with high-grade gliomas were scanned at 3T with a dual-echo DSC MR imaging sequence to quantify contrast agent-induced T2* changes in this retrospective study. Optimized TEs were calculated with propagation-of-error analysis for high-grade glial tumors, normal-appearing white matter, and arterial input function estimation. The optimal TE is a weighted average of the T2* values that occur as a contrast agent bolus traverses a voxel. The mean optimal TEs were 30.0 ± 7.4 ms for high-grade glial tumors, 36.3 ± 4.6 ms for normal-appearing white matter, and 11.8 ± 1.4 ms for arterial input function estimation (repeated-measures ANOVA, P < .001). Greater heterogeneity was observed in the optimal TE values for high-grade gliomas, and the differences among the mean values of the 3 ROIs were statistically significant. The optimal TE for arterial input function estimation is much shorter; this finding implies that quantitative DSC MR imaging acquisitions would benefit from multiecho acquisitions. In the case of a single-echo acquisition, the optimal TE prescribed should be 30-35 ms (without a preload) and 20-30 ms (with a standard full-dose preload). © 2017 by American Journal of Neuroradiology.
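The idea that the optimal TE is a weighted average of the T2* values during bolus passage can be illustrated schematically. The gamma-variate bolus shape, the 60 ms baseline T2*, and the choice of the contrast-induced ΔR2* as the weighting are all assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

# Schematic T2* time course (ms) during a contrast bolus: baseline ~60 ms
# dipping toward ~25 ms at the bolus peak, then recovering.
t = np.linspace(0, 60, 121)                 # seconds
bolus = (t / 10) ** 3 * np.exp(-t / 4)      # gamma-variate bolus shape
bolus /= bolus.max()
T2star = 60.0 - 35.0 * bolus                # ms

# Weight each time point by the contrast-induced change in relaxation rate,
# dR2*(t) = 1/T2*(t) - 1/T2*(baseline), so the average emphasizes the bolus.
dR2 = 1.0 / T2star - 1.0 / 60.0
TE_opt = np.sum(dR2 * T2star) / np.sum(dR2)
print(round(float(TE_opt), 1), "ms")        # lands between peak and baseline T2*
```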

  17. Teleportation of squeezing: Optimization using non-Gaussian resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dell'Anno, Fabio; De Siena, Silvio; Illuminati, Fabrizio

    2010-12-15

    We study the continuous-variable quantum teleportation of states, statistical moments of observables, and scale parameters such as squeezing. We investigate the problem both in ideal and imperfect Vaidman-Braunstein-Kimble protocol setups. We show how the teleportation fidelity is maximized and the difference between output and input variances is minimized by using suitably optimized entangled resources. Specifically, we consider the teleportation of coherent squeezed states, exploiting squeezed Bell states as entangled resources. This class of non-Gaussian states, introduced by Illuminati and co-workers [F. Dell'Anno, S. De Siena, L. Albano, and F. Illuminati, Phys. Rev. A 76, 022301 (2007); F. Dell'Anno, S. De Siena, and F. Illuminati, ibid. 81, 012333 (2010)], includes photon-added and photon-subtracted squeezed states as special cases. At variance with the case of entangled Gaussian resources, the use of entangled non-Gaussian squeezed Bell resources allows one to choose different optimization procedures that lead to inequivalent results. Performing two independent optimization procedures, one can either maximize the state teleportation fidelity, or minimize the difference between input and output quadrature variances. The two different procedures are compared depending on the degrees of displacement and squeezing of the input states and on the working conditions in ideal and nonideal setups.

  18. Development of a Two-Stage Microalgae Dewatering Process – A Life Cycle Assessment Approach

    PubMed Central

    Soomro, Rizwan R.; Zeng, Xianhai; Lu, Yinghua; Lin, Lu; Danquah, Michael K.

    2016-01-01

    Even though microalgal biomass leads third-generation biofuel research, significant effort is required to establish an economically viable commercial-scale microalgal biofuel production system. Whilst a significant amount of work has been reported on large-scale cultivation of microalgae using photo-bioreactors and pond systems, research focused on establishing high-performance downstream dewatering operations for large-scale processing under optimal economy is limited. The enormous amount of energy and associated cost required for dewatering large-volume microalgal cultures has been the primary hindrance to the development of the biomass quantity needed for industrial-scale microalgal biofuels production. The extremely dilute nature of large-volume microalgal suspensions and the small size of microalgae cells create significant processing costs during dewatering, raising major concerns about the economic viability of commercial-scale microalgal biofuel production as an alternative to conventional petroleum fuels. This article reports an effective framework to assess the performance of different dewatering technologies as the basis for establishing an effective two-stage dewatering system. Bioflocculation coupled with tangential flow filtration (TFF) emerged as a promising technique, with a total energy input of 0.041 kWh, 0.05 kg CO2 emissions, and a cost of $0.0043 for producing 1 kg of microalgae biomass. A streamlined process for operational analysis of the two-stage microalgae dewatering technique, encompassing energy input, carbon dioxide emission, and process cost, is presented. PMID:26904075
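The reported per-kilogram figures for the bioflocculation + TFF combination scale linearly; assuming simple proportionality, the totals for one tonne of harvested biomass follow directly:

```python
# Reported per-kilogram figures for bioflocculation + tangential flow
# filtration, scaled (assuming linearity) to one tonne of harvested biomass.
per_kg = {"energy_kWh": 0.041, "co2_kg": 0.05, "cost_usd": 0.0043}
per_tonne = {k: round(v * 1000, 3) for k, v in per_kg.items()}
print(per_tonne)   # {'energy_kWh': 41.0, 'co2_kg': 50.0, 'cost_usd': 4.3}
```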

  19. Utilizing a Tower Based System for Optical Sensing of Ecosystem Carbon Fluxes

    NASA Astrophysics Data System (ADS)

    Huemmrich, K. F.; Corp, L. A.; Middleton, E.; Campbell, P. K. E.; Landis, D.; Kustas, W. P.

    2015-12-01

    Optical sampling of spectral reflectance and solar-induced fluorescence provides information on the physiological status of vegetation that can be used to infer stress responses and estimate production. Multiple repeated observations are required to observe the effects of changing environmental conditions on vegetation. This study examines the use of optical signals to determine inputs to a light use efficiency (LUE) model describing the productivity of a cornfield where repeated observations of carbon flux, spectral reflectance, and fluorescence were collected. Data were collected at the Optimizing Production Inputs for Economic and Environmental Enhancement (OPE3) fields (39.03°N, 76.85°W) at the USDA Beltsville Agricultural Research Center. Agricultural Research Service researchers measured CO2 fluxes using eddy covariance methods throughout the growing season. Optical measurements were made from a nearby tower supporting the NASA FUSION sensors. The sensor system consists of two dual-channel, upward- and downward-looking spectrometers used to simultaneously collect high-spectral-resolution measurements of reflected and fluoresced light from vegetation canopies at multiple view angles. Estimates of chlorophyll fluorescence, combined with measures of vegetation pigment content and the Photochemical Reflectance Index (PRI) derived from the spectral reflectance, are compared with CO2 fluxes over diurnal periods for multiple days. The relationships among the different optical measurements indicate that they provide different types of information on the vegetation, and that combinations of these measurements yield better retrievals of CO2 fluxes than any single index alone.

  20. Field Research Facility Data Integration Framework Data Management Plan: Survey Lines Dataset

    DTIC Science & Technology

    2016-08-01

    CHL and its District partners. The beach morphology surveys on which this report focuses provide quantitative measures of the dynamic nature of...topography • volume change. 1.4 Data description: The morphology surveys are conducted over a series of 26 shore-perpendicular profile lines spaced 50...dataset input data and products. Table 1. FRF survey lines dataset input data and products. Input Data FDIF Product Description ASCII LARC survey text

  1. An accelerated training method for back propagation networks

    NASA Technical Reports Server (NTRS)

    Shelton, Robert O. (Inventor)

    1993-01-01

    The principal objective is to provide a training procedure for a feed-forward, back propagation neural network which greatly accelerates the training process. A set of orthogonal singular vectors is determined from the input matrix such that the standard deviations of the projections of the input vectors along these singular vectors, as a set, are substantially maximized, thus providing an optimal means of presenting the input data. Novelty exists in the method of extracting, from the set of input data, a set of features which can serve to represent the input data in a simplified manner, thus greatly reducing the time and expense of training the system.
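The described preprocessing, extracting orthogonal singular vectors that maximize the standard deviations of the input projections, amounts to a singular value decomposition of the centered input matrix. A sketch on synthetic data, where the 99% variance cutoff is an assumed choice rather than part of the described method:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical training inputs: 200 samples of 10 correlated features
# (built from only 3 underlying factors, so the data are low-rank).
X = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 10))

# Singular vectors of the centered input matrix give orthogonal directions
# along which the projected inputs have maximal standard deviation.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Keep the directions carrying 99% of the variance and project the inputs
# onto them -- the simplified representation presented to the network.
k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.99) + 1)
Z = Xc @ Vt[:k].T
print(X.shape, "->", Z.shape)   # the 10 raw features collapse to k <= 3
```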

  2. Optimal input selection for neural machine interfaces predicting multiple non-explicit outputs.

    PubMed

    Krepkovich, Eileen T; Perreault, Eric J

    2008-01-01

    This study implemented a novel algorithm that optimally selects inputs for neural machine interface (NMI) devices intended to control multiple outputs and evaluated its performance on systems lacking explicit output. NMIs often incorporate signals from multiple physiological sources and provide predictions for multidimensional control, leading to multiple-input multiple-output systems. Further, NMIs often are used with subjects who have motor disabilities and thus lack explicit motor outputs. Our algorithm was tested on simulated multiple-input multiple-output systems and on electromyogram and kinematic data collected from healthy subjects performing arm reaches. Effects of output noise in simulated systems indicated that the algorithm could be useful for systems with poor estimates of the output states, as is true for systems lacking explicit motor output. To test efficacy on physiological data, selection was performed using inputs from one subject and outputs from a different subject. Selection was effective for these cases, again indicating that this algorithm will be useful for predictions where there is no motor output, as often is the case for disabled subjects. Further, prediction results generalized for different movement types not used for estimation. These results demonstrate the efficacy of this algorithm for the development of neural machine interfaces.
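The abstract does not give the selection algorithm itself; a generic greedy forward-selection sketch for a multiple-input multiple-output linear map conveys the flavor. All data, dimensions, and the least-squares criterion below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented data: 12 candidate input channels (e.g., EMG), of which only
# channels 1, 5, and 9 actually drive the 2-D output (e.g., limb kinematics).
X = rng.standard_normal((300, 12))
true_idx = [1, 5, 9]
W = np.array([[1.0, -0.5], [0.8, 1.2], [-1.1, 0.7]])
Y = X[:, true_idx] @ W + 0.05 * rng.standard_normal((300, 2))

def fit_error(cols):
    """Mean squared residual of a least-squares map from the chosen inputs to Y."""
    A = X[:, cols]
    B, *_ = np.linalg.lstsq(A, Y, rcond=None)
    return np.mean((A @ B - Y) ** 2)

# Greedy forward selection: repeatedly add the input that most reduces error.
chosen = []
for _ in range(3):
    best = min((c for c in range(12) if c not in chosen),
               key=lambda c: fit_error(chosen + [c]))
    chosen.append(best)
print(sorted(chosen))   # [1, 5, 9]
```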

  3. Earth observing system. Output data products and input requirements, version 2.0. Volume 2: Analysis of IDS input requirements

    NASA Technical Reports Server (NTRS)

    Lu, Yun-Chi; Chang, Hyo Duck; Krupp, Brian; Kumar, Ravindra; Swaroop, Anand

    1992-01-01

    On 18 Jan. 1991, NASA confirmed 29 Inter-Disciplinary Science (IDS) teams, each involving a group of investigators, to conduct interdisciplinary research using data products from Earth Observing System (EOS) instruments. These studies are multi-disciplinary and require output data products from multiple EOS instruments, including both FI and PI instruments. The purpose of this volume is to provide information on output products expected from IDS investigators, required input data, and retrieval algorithms. Also included in this volume is the revised analysis of the 'best' and 'alternative' match data products for IDS input requirements. The original analysis presented in the August 1991 release of the SPSO Report was revised to incorporate the restructuring of the EOS platform. As a result of the reduced EOS payload, some of EOS instruments were deselected and their data products would not be available for IDS research. Information on these data products is also presented.

  4. Crossmodal binding rivalry: A "race" for integration between unequal sensory inputs.

    PubMed

    Kostaki, Maria; Vatakis, Argiro

    2016-10-01

    Exposure to multiple but unequal (in number) sensory inputs often leads to illusory percepts, which may be the product of a conflict between those inputs. To test this conflict, we utilized the classic sound-induced visual fission and fusion illusions under various temporal configurations and presentation timings. This conflict between unequal numbers of sensory inputs (i.e., crossmodal binding rivalry) depends on the binding of the first audiovisual pair and its temporal proximity to the upcoming unisensory stimulus. We therefore expected that tight coupling of the first audiovisual pair would lead to higher rivalry with the upcoming unisensory stimulus and, thus, weaker illusory percepts, whereas loose coupling would lead to lower rivalry and stronger illusory percepts. Our data showed the emergence of two participant groups: those with low discrimination performance and strong illusion reports (particularly for fusion) and those with the exact opposite pattern, extending previous findings on the effect of visual acuity on the strength of the illusion. Most importantly, our data revealed differential illusory strength across temporal configurations for the fission illusion, while for the fusion illusion these effects were noted only for the largest stimulus onset asynchronies tested. These findings suggest that the optimal integration theory for the double-flash illusion should be expanded to take into account the multisensory temporal interactions of the stimuli presented (i.e., temporal sequence and configuration).

  5. CHARMM-GUI Membrane Builder toward realistic biological membrane simulations.

    PubMed

    Wu, Emilia L; Cheng, Xi; Jo, Sunhwan; Rui, Huan; Song, Kevin C; Dávila-Contreras, Eder M; Qi, Yifei; Lee, Jumin; Monje-Galvan, Viviana; Venable, Richard M; Klauda, Jeffery B; Im, Wonpil

    2014-10-15

    CHARMM-GUI Membrane Builder, http://www.charmm-gui.org/input/membrane, is a web-based user interface designed to interactively build all-atom protein/membrane or membrane-only systems for molecular dynamics simulations through an automated optimized process. In this work, we describe the new features and major improvements in Membrane Builder that allow users to robustly build realistic biological membrane systems, including (1) addition of new lipid types, such as phosphoinositides, cardiolipin (CL), sphingolipids, bacterial lipids, and ergosterol, yielding more than 180 lipid types, (2) enhanced building procedure for lipid packing around protein, (3) reliable algorithm to detect lipid tail penetration to ring structures and protein surface, (4) distance-based algorithm for faster initial ion displacement, (5) CHARMM inputs for P21 image transformation, and (6) NAMD equilibration and production inputs. The robustness of these new features is illustrated by building and simulating a membrane model of the polar and septal regions of E. coli membrane, which contains five lipid types: CL lipids with two types of acyl chains and phosphatidylethanolamine lipids with three types of acyl chains. It is our hope that CHARMM-GUI Membrane Builder becomes a useful tool for simulation studies to better understand the structure and dynamics of proteins and lipids in realistic biological membrane environments.

  6. Model-Based Analysis of the Long-Term Effects of Fertilization Management on Cropland Soil Acidification.

    PubMed

    Zeng, Mufan; de Vries, Wim; Bonten, Luc T C; Zhu, Qichao; Hao, Tianxiang; Liu, Xuejun; Xu, Minggang; Shi, Xiaojun; Zhang, Fusuo; Shen, Jianbo

    2017-04-04

    Agricultural soil acidification in China is known to be caused by the over-application of nitrogen (N) fertilizers, but the long-term impacts of different fertilization practices on intensive cropland soil acidification are largely unknown. Here, we further developed the soil acidification model VSD+ for intensive agricultural systems and validated it against observed data from three long-term fertilization experiments in China. The model simulated the changes in soil pH and base saturation over the last 20 years well. The validated model was adopted to quantify the contribution of N and base cation (BC) fluxes to soil acidification. Net NO3- leaching and NH4+ input accounted for 80% of the proton production under N application, whereas one-third of the acid was produced by BC uptake when N was not applied. The simulated long-term (1990-2050) effects of different fertilizations on soil acidification showed that balanced N application combined with manure application avoids reduction of both soil pH and base saturation, while application of calcium nitrate and liming increases these two soil properties. Reducing NH4+ input and NO3- leaching by optimizing N management and increasing BC inputs by manure application thus already seem to be effective approaches to mitigating soil acidification in intensive cropland systems.
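The N-flux bookkeeping behind this kind of proton budget can be sketched in a few lines: in charge equivalents, NH4+ input and NO3- leaching each add acidity, while NH4+ leaching and NO3- input remove it. The flux values below are hypothetical, not the paper's site data.

```python
# Back-of-envelope N-cycle acidity budget in charge equivalents.
# All fluxes are hypothetical, in keq ha-1 yr-1.
nh4_input = 4.0   # fertilizer NH4+ input
nh4_leach = 0.2   # NH4+ leached out unchanged
no3_input = 1.0   # NO3- input (fertilizer/deposition)
no3_leach = 3.5   # NO3- leaching

# Net H+ production from the N cycle: retained/nitrified NH4+ acidifies,
# leached NO3- in excess of NO3- input carries cations away.
acid_load = (nh4_input - nh4_leach) + (no3_leach - no3_input)
print(acid_load)   # net proton production, keq ha-1 yr-1
```

This is only the N-cycle term of a full acidity budget; the model described above additionally tracks base cation uptake and weathering.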

  7. An exergy approach to efficiency evaluation of desalination

    NASA Astrophysics Data System (ADS)

    Ng, Kim Choon; Shahzad, Muhammad Wakil; Son, Hyuk Soo; Hamed, Osman A.

    2017-05-01

    This paper presents an evaluation of process efficiency based on the consumption of primary energy for all types of practical desalination methods available hitherto. The conventional performance ratio has, thus far, been defined with respect to the consumption of derived energy, such as electricity or steam, which is subject to the conversion losses of the power plants and boilers that burn the input primary fuels. As derived energies are usually expressed in units of kWh or Joules, these units cannot accurately differentiate the grade of energy supplied to the processes. In this paper, the specific energy consumption is revisited for the efficacy of all large-scale desalination plants. In today's combined production of electricity and desalinated water, accomplished with the advanced cogeneration concept, the input exergy of fuels is utilized optimally and efficiently in a temperature-cascaded manner. By discerning the exergy destruction successively in the turbines and desalination processes, the relative contribution of primary energy to the processes can be accurately apportioned. Although efficiency is not a law of thermodynamics, a common platform for expressing figures of merit explicit to the efficacy of desalination processes can be developed meaningfully, with thermodynamic rigor, up to the ideal (thermodynamic) limit of seawater desalination, for all scientists and engineers to aspire to.
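The core idea of referring derived energy back to primary fuel can be sketched numerically: electricity is pure exergy, while low-grade steam carries only a Carnot fraction of its heat as exergy, so the two must be converted to a common exergy basis before being referred to primary fuel. All numbers below are illustrative assumptions, not plant data from the paper.

```python
# Put a desalination plant's derived-energy use on a primary-fuel basis.
T0, Ts = 298.0, 343.0            # ambient and steam temperatures, K (assumed)
carnot = 1.0 - T0 / Ts           # exergy content per unit of heat at Ts

elec_kwh = 3.5                   # electricity use, kWh per m3 of water (assumed)
steam_kwh = 60.0                 # steam heat use, kWh per m3 (assumed)
eta_exergetic = 0.45             # assumed primary fuel -> exergy efficiency

exergy_kwh = elec_kwh + steam_kwh * carnot   # electricity counts in full
primary_kwh = exergy_kwh / eta_exergetic     # primary-fuel equivalent, kWh/m3
print(round(primary_kwh, 2))
```

Comparing `primary_kwh` across plants avoids the trap, noted in the abstract, of treating a thermal kWh and an electrical kWh as equivalent.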

  8. Fouling resistance prediction using artificial neural network nonlinear auto-regressive with exogenous input model based on operating conditions and fluid properties correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biyanto, Totok R.

    Fouling in a heat exchanger in a Crude Preheat Train (CPT) refinery is an unsolved problem that reduces plant efficiency and increases fuel consumption and CO2 emissions. The fouling resistance behavior is very complex, and it is difficult to develop a first-principles model that predicts the fouling resistance under different operating conditions and different crude blends. In this paper, an Artificial Neural Network (ANN) MultiLayer Perceptron (MLP) with a Nonlinear Auto-Regressive with eXogenous input (NARX) structure is utilized to build the fouling resistance model of a shell and tube heat exchanger (STHX). The input data of the model are the flow rates and temperatures of the heat exchanger streams, physical properties of the product, and crude blend data. This model serves as a predicting tool for optimizing operating conditions and preventive maintenance of the STHX. The results show that the model can capture the complexity of fouling characteristics in the heat exchanger due to thermodynamic conditions and variations in crude oil properties (blends). The Root Mean Square Errors (RMSE) obtained during the training and validation phases confirm that the model captures the nonlinearity and complexity of the STHX fouling resistance.
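The NARX input structure named above regresses the current fouling resistance on its own past values and on lagged exogenous operating conditions. The sketch below builds that regressor matrix on synthetic data and, as a stand-in for the paper's MLP, fits it with ordinary least squares; the dynamics, lag orders, and signal statistics are all invented for illustration.

```python
import numpy as np

# Synthetic operating-condition series and a fouling-resistance process
# that depends on lagged temperature/flow (hypothetical dynamics).
rng = np.random.default_rng(0)
n = 200
flow = 1.0 + 0.1 * rng.standard_normal(n)    # exogenous input u1
temp = 300 + 5.0 * rng.standard_normal(n)    # exogenous input u2
rf = np.zeros(n)                             # fouling resistance y
for t in range(1, n):
    rf[t] = 0.95 * rf[t - 1] + 0.002 * temp[t - 1] / flow[t - 1]

def narx_matrix(y, u_list, ny=2, nu=2):
    """Stack lagged outputs and lagged exogenous inputs into a regressor matrix."""
    start = max(ny, nu)
    rows = []
    for t in range(start, len(y)):
        row = [y[t - k] for k in range(1, ny + 1)]
        for u in u_list:
            row += [u[t - k] for k in range(1, nu + 1)]
        rows.append(row)
    return np.array(rows), y[start:]

X, target = narx_matrix(rf, [flow, temp])
A = np.column_stack([X, np.ones(len(X))])            # add an intercept
coef, *_ = np.linalg.lstsq(A, target, rcond=None)    # linear stand-in for the MLP
rmse = float(np.sqrt(np.mean((A @ coef - target) ** 2)))
print(rmse)
```

The same `narx_matrix` output would feed an MLP one-for-one; only the regression step differs from this linear sketch.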

  9. Interdicting an Adversary’s Economy Viewed As a Trade Sanction Inoperability Input Output Model

    DTIC Science & Technology

    2017-03-01

    set of sectors. The design of an economic sanction, in the context of this thesis, is the selection of the sector or set of sectors to sanction...We propose two optimization models. The first, the Trade Sanction Inoperability Input-output Model (TS-IIM), selects the sector or set of sectors that...Interdependency analysis: Extensions to demand reduction inoperability input-output modeling and portfolio selection . Unpublished doctoral dissertation

  10. Adaptation of SUBSTOR for controlled-environment potato production with elevated carbon dioxide

    NASA Technical Reports Server (NTRS)

    Fleisher, D. H.; Cavazzoni, J.; Giacomelli, G. A.; Ting, K. C.; Janes, H. W. (Principal Investigator)

    2003-01-01

    The SUBSTOR crop growth model was adapted for controlled-environment hydroponic production of potato (Solanum tuberosum L. cv. Norland) under elevated atmospheric carbon dioxide concentration. Adaptations included adjustment of input files to account for cultural differences between the field and controlled environments, calibration of genetic coefficients, and adjustment of crop parameters including radiation use efficiency. Source code modifications were also performed to account for the absorption of light reflected from the surface below the crop canopy, an increased leaf senescence rate, a carbon (mass) balance to the model, and to modify the response of crop growth rate to elevated atmospheric carbon dioxide concentration. Adaptations were primarily based on growth and phenological data obtained from growth chamber experiments at Rutgers University (New Brunswick, N.J.) and from the modeling literature. Modified-SUBSTOR predictions were compared with data from Kennedy Space Center's Biomass Production Chamber for verification. Results show that, with further development, modified-SUBSTOR will be a useful tool for analysis and optimization of potato growth in controlled environments.

  11. Techno-economic analysis of ethanol production from sugarcane bagasse using a Liquefaction plus Simultaneous Saccharification and co-Fermentation process.

    PubMed

    Gubicza, Krisztina; Nieves, Ismael U; Sagues, William J; Barta, Zsolt; Shanmugam, K T; Ingram, Lonnie O

    2016-05-01

    A techno-economic analysis was conducted for a simplified lignocellulosic ethanol production process developed and proven by the University of Florida at laboratory, pilot, and demonstration scales. Data obtained from all three scales of development were used with Aspen Plus to create models for an experimentally-proven base-case and 5 hypothetical scenarios. The model input parameters that differed among the hypothetical scenarios were fermentation time, enzyme loading, enzymatic conversion, solids loading, and overall process yield. The minimum ethanol selling price (MESP) varied between 50.38 and 62.72 US cents/L. The feedstock and the capital cost were the main contributors to the production cost, comprising between 23-28% and 40-49% of the MESP, respectively. A sensitivity analysis showed that overall ethanol yield had the greatest effect on the MESP. These findings suggest that future efforts to increase the economic feasibility of a cellulosic ethanol process should focus on optimization for highest ethanol yield.
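The MESP arithmetic is simple to illustrate: the minimum selling price is the total annualized cost divided by annual ethanol output, so each cost item's share of the MESP falls directly out of the cost breakdown. The plant numbers below are hypothetical placeholders, not the paper's Aspen Plus inputs; they are chosen only so the shares land near the 23-28% feedstock and 40-49% capital ranges reported above.

```python
# Hypothetical annualized plant costs (USD/yr) and ethanol output (L/yr).
annual_capital_charge = 20.0e6
feedstock_cost = 12.0e6
other_operating_cost = 16.0e6
ethanol_output_L = 90.0e6

total = annual_capital_charge + feedstock_cost + other_operating_cost
mesp_cents_per_L = 100.0 * total / ethanol_output_L   # MESP in US cents/L
capital_share = annual_capital_charge / total
feedstock_share = feedstock_cost / total
print(round(mesp_cents_per_L, 2), round(capital_share, 2), round(feedstock_share, 2))
```

The same structure explains the sensitivity result: overall yield enters the denominator, so it moves the MESP more than any single cost item in the numerator.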

  12. Functional Differences between Statistical Learning with and without Explicit Training

    ERIC Educational Resources Information Center

    Batterink, Laura J.; Reber, Paul J.; Paller, Ken A.

    2015-01-01

    Humans are capable of rapidly extracting regularities from environmental input, a process known as statistical learning. This type of learning typically occurs automatically, through passive exposure to environmental input. The presumed function of statistical learning is to optimize processing, allowing the brain to more accurately predict and…

  13. The optimal input optical pulse shape for the self-phase modulation based chirp generator

    NASA Astrophysics Data System (ADS)

    Zachinyaev, Yuriy; Rumyantsev, Konstantin

    2018-04-01

    This work aims to obtain the optimal shape of the input optical pulse for the proper functioning of a self-phase-modulation-based chirp generator, allowing high values of chirp frequency deviation to be achieved. During the research, the structure of a device based on the self-phase modulation effect was analyzed. The influence of the input optical pulse shape of the transmitting optical module on the chirp frequency deviation was studied. The relationship between the frequency deviation of the generated chirp and the frequency linearity was also estimated for three implementations of the pulse shape. The results of this research contribute to the theory of radio processors based on fiber-optic structures and can be used in radar, secure communications, geolocation, and tomography.

  14. Inspection logistics planning for multi-stage production systems with applications to semiconductor fabrication lines

    NASA Astrophysics Data System (ADS)

    Chen, Kyle Dakai

    Since the market for semiconductor products has become more lucrative and competitive, research into improving yields for semiconductor fabrication lines has lately received a tremendous amount of attention. One of the most critical tasks in achieving such yield improvements is to plan the in-line inspection sampling efficiently so that any potential yield problems can be detected early and eliminated quickly. We formulate a multi-stage inspection planning model based on configurations in actual semiconductor fabrication lines, specifically taking into account both the capacity constraint and the congestion effects at the inspection station. We propose a new mixed First-Come-First-Serve (FCFS) and Last-Come-First-Serve (LCFS) discipline for serving the inspection samples to expedite the detection of potential yield problems. Employing this mixed FCFS and LCFS discipline, we derive approximate expressions for the queueing delays in yield problem detection time and develop near-optimal algorithms to obtain the inspection logistics planning policies. We also investigate the queueing performance with this mixed type of service discipline under different assumptions and configurations. In addition, we conduct numerical tests and generate managerial insights based on input data from actual semiconductor fabrication lines. To the best of our knowledge, this research is novel in developing, for the first time in the literature, near-optimal results for inspection logistics planning in multi-stage production systems with congestion effects explicitly considered.

  15. Integrated Modeling Approach for Optimal Management of Water, Energy and Food Security Nexus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaodong; Vesselinov, Velimir Valentinov

    We report that water, energy and food (WEF) are inextricably interrelated. Effective planning and management of limited WEF resources to meet current and future socioeconomic demands for sustainable development is challenging. WEF production/delivery may also produce environmental impacts; as a result, greenhouse-gas emission control will impact WEF nexus management as well. Nexus management for WEF security necessitates integrated tools for predictive analysis that are capable of identifying the tradeoffs among various sectors and generating cost-effective planning and management strategies and policies. To address these needs, we have developed an integrated model analysis framework and tool called WEFO. WEFO provides a multi-period socioeconomic model for predicting how to satisfy WEF demands based on model inputs representing production costs, socioeconomic demands, and environmental controls. WEFO is applied to quantitatively analyze the interrelationships and trade-offs among system components, including energy supply, electricity generation, water supply-demand, and food production, as well as mitigation of environmental impacts. WEFO is demonstrated on a hypothetical nexus management problem consistent with real-world management scenarios. Model parameters are analyzed using global sensitivity analysis and their effects on total system cost are quantified. Lastly, the obtained results demonstrate how these types of analyses can help decision-makers and stakeholders make cost-effective decisions for optimal WEF management.

  16. Integrated Modeling Approach for Optimal Management of Water, Energy and Food Security Nexus

    DOE PAGES

    Zhang, Xiaodong; Vesselinov, Velimir Valentinov

    2016-12-28

    We report that water, energy and food (WEF) are inextricably interrelated. Effective planning and management of limited WEF resources to meet current and future socioeconomic demands for sustainable development is challenging. WEF production/delivery may also produce environmental impacts; as a result, greenhouse-gas emission control will impact WEF nexus management as well. Nexus management for WEF security necessitates integrated tools for predictive analysis that are capable of identifying the tradeoffs among various sectors and generating cost-effective planning and management strategies and policies. To address these needs, we have developed an integrated model analysis framework and tool called WEFO. WEFO provides a multi-period socioeconomic model for predicting how to satisfy WEF demands based on model inputs representing production costs, socioeconomic demands, and environmental controls. WEFO is applied to quantitatively analyze the interrelationships and trade-offs among system components, including energy supply, electricity generation, water supply-demand, and food production, as well as mitigation of environmental impacts. WEFO is demonstrated on a hypothetical nexus management problem consistent with real-world management scenarios. Model parameters are analyzed using global sensitivity analysis and their effects on total system cost are quantified. Lastly, the obtained results demonstrate how these types of analyses can help decision-makers and stakeholders make cost-effective decisions for optimal WEF management.

  17. RBT-GA: a novel metaheuristic for solving the Multiple Sequence Alignment problem.

    PubMed

    Taheri, Javid; Zomaya, Albert Y

    2009-07-07

    Multiple Sequence Alignment (MSA) has always been an active area of research in Bioinformatics. MSA is mainly focused on discovering biologically meaningful relationships among different sequences or proteins in order to investigate their underlying main characteristics/functions. This information is also used to generate phylogenetic trees. This paper presents a novel approach, namely RBT-GA, to solve the MSA problem using a hybrid solution methodology combining the Rubber Band Technique (RBT) and the Genetic Algorithm (GA) metaheuristic. RBT is inspired by the behavior of an elastic Rubber Band (RB) on a plate with several poles, which is analogous to locations in the input sequences that could potentially be biologically related. A GA attempts to mimic the evolutionary processes of life in order to locate optimal solutions in an often very complex landscape. RBT-GA is a population-based optimization algorithm designed to find the optimal alignment for a set of input protein sequences. In this technique, each candidate alignment is modeled as a chromosome consisting of several poles in the RBT framework. These poles resemble locations in the input sequences that are most likely to be correlated and/or biologically related. A GA-based optimization process improves these chromosomes gradually, yielding a set of mostly optimal answers for the MSA problem. RBT-GA was tested with one of the well-known benchmark suites (BAliBASE 2.0) in this area. The obtained results show the superiority of the proposed technique, even in the case of formidable sequences.

  18. Using Optimization to Improve Test Planning

    DTIC Science & Technology

    2017-09-01

    friendly and to display the output differently, the test and evaluation test schedule optimization model would be a good tool for the test and... evaluation schedulers. 14. SUBJECT TERMS schedule optimization, test planning 15. NUMBER OF PAGES 223 16. PRICE CODE 17. SECURITY CLASSIFICATION OF...make the input more user-friendly and to display the output differently, the test and evaluation test schedule optimization model would be a good tool

  19. Optimization of porthole die geometrical variables by Taguchi method

    NASA Astrophysics Data System (ADS)

    Gagliardi, F.; Ciancio, C.; Ambrogio, G.; Filice, L.

    2017-10-01

    Porthole die extrusion is commonly used to manufacture hollow profiles made of lightweight alloys for numerous industrial applications. The reliability of extruded parts is strongly affected by the quality of the longitudinal and transverse seam welds. Accordingly, the die geometry must be designed correctly and the process parameters selected properly to achieve the desired product quality. In this study, 3D numerical simulations have been created and run to investigate the role of various geometrical variables on the punch load and the maximum pressure inside the welding chamber. These outputs are important because they determine, respectively, the required capacity of the extrusion press and the quality of the weld lines. The Taguchi technique has been used to reduce the number of numerical simulations needed to consider the influence of twelve different geometric variables. Moreover, analysis of variance (ANOVA) has been applied to quantify the effect of each input parameter on the two responses individually. The methodology has then been used to determine the process configurations that individually optimize the two investigated outputs. Finally, the responses at the optimized parameters have been verified through finite element simulations, which closely matched the predicted values. This study shows the feasibility of the Taguchi technique for performance prediction and optimization, and therefore for improving the design of a porthole extrusion process.
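The Taguchi bookkeeping can be shown on a toy case: run an orthogonal array of experiments, then for each factor compare the mean response at each level and keep the better level. The sketch below uses a hypothetical three-factor, two-level full-factorial array and invented punch loads; the paper screens twelve factors with FE-simulated responses, so this only illustrates the main-effects step.

```python
import numpy as np

# Hypothetical L8-style design: columns are factors A, B, C at levels 0/1.
runs = np.array([
    [0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1],
    [1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]])
load = np.array([100., 104., 110., 113., 90., 95., 99., 103.])  # kN, invented

best_level = {}
for j, name in enumerate("ABC"):
    # Main effect of each factor: mean response at each of its levels.
    means = [load[runs[:, j] == lvl].mean() for lvl in (0, 1)]
    best_level[name] = int(np.argmin(means))   # punch load: smaller is better
    print(name, means, "-> level", best_level[name])
```

ANOVA would then partition the response variance among these factors to judge which main effects are significant.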

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartkiewicz, Karol; Miranowicz, Adam

    We find an optimal quantum cloning machine, which clones qubits of arbitrary symmetrical distribution around the Bloch vector with the highest fidelity. The process is referred to as phase-independent cloning, in contrast to standard phase-covariant cloning, for which an input qubit state is a priori better known. We assume that the information about the input state is encoded in an arbitrary axisymmetric distribution (phase function) on the Bloch sphere of the cloned qubits. We find analytical expressions describing the optimal cloning transformation and the fidelity of the clones. As an illustration, we analyze cloning of a qubit state described by the von Mises-Fisher and Brosseau distributions. Moreover, we show that the optimal phase-independent cloning machine can be implemented by modifying the mirror phase-covariant cloning machine, for which quantum circuits are known.

  1. Optimization of input parameters of acoustic-transfection for the intracellular delivery of macromolecules using FRET-based biosensors

    NASA Astrophysics Data System (ADS)

    Yoon, Sangpil; Wang, Yingxiao; Shung, K. K.

    2016-03-01

    An acoustic-transfection technique has been developed for the first time by integrating a high-frequency ultrasonic transducer with a fluorescence microscope. High-frequency ultrasound with a center frequency above 150 MHz can focus an acoustic field into a confined area with a diameter of 10 μm or less. This focusing capability was used to perturb the lipid bilayer of the cell membrane to induce intracellular delivery of macromolecules. Single-cell-level imaging was performed to investigate the behavior of a targeted single cell after acoustic-transfection. A FRET-based Ca2+ biosensor was used to monitor the intracellular concentration of Ca2+ after acoustic-transfection, and the fluorescence intensity of propidium iodide (PI) was used to observe the influx of PI molecules. We varied the peak-to-peak voltage and pulse duration to optimize the input parameters of the acoustic pulse. Input parameters that induce strong perturbations of the cell membrane were found, and size-dependent intracellular delivery of macromolecules was explored. To increase the amount of delivered molecules, we applied several acoustic pulses, and the PI fluorescence intensity increased stepwise. Finally, the optimized input parameters were used to deliver the pMax-E2F1 plasmid into HeLa cells, and GFP expression was confirmed 24 hours after intracellular delivery.

  2. U.S. EPA CSO CAPSTONE REPORT: CONTROL SYSTEM OPTIMIZATION

    EPA Science Inventory

    An optimized combined sewer overflow (CSO) requires a storage treatment system because storm flow in the combined sewer system is intermittent and highly variable in both pollutant concentration and flow rate. Storage and treatment alternatives are strongly influenced by input...

  3. Evolving a Method to Capture Science Stakeholder Inputs to Optimize Instrument, Payload, and Program Design

    NASA Astrophysics Data System (ADS)

    Clark, P. E.; Rilee, M. L.; Curtis, S. A.; Bailin, S.

    2012-03-01

    We are developing Frontier, a highly adaptable, stably reconfigurable, web-accessible intelligent decision engine capable of optimizing design as well as the simulating operation of complex systems in response to evolving needs and environment.

  4. 76 FR 51879 - Definition of Solid Waste Disposal Facilities for Tax-Exempt Bond Purposes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-19

    ... input for processing in some stage of a manufacturing or production process to produce a different end... and sold on the market as a material for input into manufacturing or production processes. The... production of any agricultural, commercial, consumer, or industrial product, provided that material qualified...

  5. Energy balance and emissions associated with biochar sequestration and pyrolysis bioenergy production.

    PubMed

    Gaunt, John L; Lehmann, Johannes

    2008-06-01

    The implications for greenhouse gas emissions of optimizing a slow pyrolysis-based bioenergy system for biochar and energy production, rather than solely for energy production, were assessed. Scenarios for feedstock production were examined using a life-cycle approach. We considered both purpose-grown bioenergy crops (BEC) and crop wastes (CW) as feedstocks. The BEC scenarios involved a change from growing winter wheat to purpose-grown miscanthus, switchgrass, and corn as bioenergy crops. The CW scenarios consider both corn stover and winter wheat straw as feedstocks. Our findings show that the avoided emissions are 2 to 5 times greater when biochar is applied to agricultural land (2-19 Mg CO2 ha(-1) y(-1)) than when it is used solely for fossil energy offsets. 41-64% of these emission reductions are related to the retention of C in biochar, the rest to offsetting fossil fuel use for energy, fertilizer savings, and avoided soil emissions other than CO2. Despite a reduction in energy output of approximately 30% when the slow pyrolysis technology is optimized to produce biochar for land application, the energy produced per unit energy input, at 2-7 MJ/MJ, is greater than that of comparable technologies such as ethanol from corn. The C emissions per MWh of electricity production, 91-360 kg CO2 MWh(-1) before accounting for the C offset due to the use of biochar, are considerably below the life-cycle emissions associated with fossil fuel use for electricity generation (600-900 kg CO2 MWh(-1)). Low-temperature slow pyrolysis offers an energetically efficient strategy for bioenergy production, and the land application of biochar reduces greenhouse gas emissions to a greater extent than when the biochar is used to offset fossil fuel emissions.

  6. Evolutionary optimization of radial basis function classifiers for data mining applications.

    PubMed

    Buchtala, Oliver; Klimek, Manuel; Sick, Bernhard

    2005-10-01

    In many data mining applications that address classification problems, feature and model selection are considered as key tasks. That is, appropriate input features of the classifier must be selected from a given (and often large) set of possible features and structure parameters of the classifier must be adapted with respect to these features and a given data set. This paper describes an evolutionary algorithm (EA) that performs feature and model selection simultaneously for radial basis function (RBF) classifiers. In order to reduce the optimization effort, various techniques are integrated that accelerate and improve the EA significantly: hybrid training of RBF networks, lazy evaluation, consideration of soft constraints by means of penalty terms, and temperature-based adaptive control of the EA. The feasibility and the benefits of the approach are demonstrated by means of four data mining problems: intrusion detection in computer networks, biometric signature verification, customer acquisition with direct marketing methods, and optimization of chemical production processes. It is shown that, compared to earlier EA-based RBF optimization techniques, the runtime is reduced by up to 99% while error rates are lowered by up to 86%, depending on the application. The algorithm is independent of specific applications so that many ideas and solutions can be transferred to other classifier paradigms.

  7. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    PubMed

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] vs. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets.
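The frontier definition used in this abstract translates directly into code: an input set is on the Pareto frontier if no other set is at least as good on every calibration target and strictly better on at least one. The goodness-of-fit errors below (lower is better, one tuple per candidate input set, two targets each) are hypothetical illustrations.

```python
# Identify Pareto-optimal input sets from per-target GOF errors.
def pareto_frontier(errors):
    """Return indices of error tuples not dominated by any other tuple."""
    front = []
    for i, e in enumerate(errors):
        dominated = any(
            all(o[k] <= e[k] for k in range(len(e))) and
            any(o[k] < e[k] for k in range(len(e)))
            for j, o in enumerate(errors) if j != i)
        if not dominated:
            front.append(i)
    return front

errors = [(0.10, 0.80), (0.20, 0.20), (0.80, 0.10), (0.50, 0.50)]
front = pareto_frontier(errors)
print(front)   # → [0, 1, 2]
```

Note that no weights appear anywhere: set 3 is excluded because set 1 beats it on both targets, while sets 0, 1, and 2 each trade one target off against the other, which is exactly the choice a weighted-sum GOF would force prematurely.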

  8. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach

    PubMed Central

    Enns, Eva A.; Cipriano, Lauren E.; Simons, Cyrena T.; Kong, Chung Yin

    2014-01-01

    Background To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single “goodness-of-fit” (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. Methods We demonstrate the Pareto frontier approach in the calibration of two models: a simple, illustrative Markov model and a previously-published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to two possible weighted-sum GOF scoring systems, and compare the health economic conclusions arising from these different definitions of best-fitting. Results For the simple model, outcomes evaluated over the best-fitting input sets according to the two weighted-sum GOF schemes were virtually non-overlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95%CI: 72,500 – 87,600] vs. $139,700 [95%CI: 79,900 - 182,800] per QALY gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95%CI: 64,900 – 156,200] per QALY gained). The TAVR model yielded similar results. Conclusions Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. PMID:24799456

  9. Comparison of liquid hot water and alkaline pretreatments of giant reed for improved enzymatic digestibility and biogas energy production.

    PubMed

    Jiang, Danping; Ge, Xumeng; Zhang, Quanguo; Li, Yebo

    2016-09-01

    Liquid hot water (LHW) and alkaline pretreatments of giant reed biomass were compared in terms of digestibility, methane production, and cost-benefit efficiency for electricity generation via anaerobic digestion with a combined heat and power system. Compared to LHW pretreatment, alkaline pretreatment retained more of the dry matter in giant reed biomass solids due to less severe conditions. Under their optimal conditions, LHW pretreatment (190 °C, 15 min) and alkaline pretreatment (20 g/L of NaOH, 24 h) improved glucose yield from giant reed by more than 2-fold, while only the alkaline pretreatment significantly (p < 0.05) increased cumulative methane yield (by 63%) over that of untreated biomass (217 L/kg VS). LHW pretreatment obtained negative net electrical energy production due to high energy input. Alkaline pretreatment achieved 27% higher net electrical energy production than that of non-pretreatment (3859 kJ/kg initial total solids), but alkaline liquor reuse is needed for improved net benefit. Copyright © 2016 Elsevier Ltd. All rights reserved.
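
    The cost-benefit comparison above turns on net electrical energy: electricity recovered from the methane via the combined heat and power (CHP) unit minus the energy spent on pretreatment. A minimal sketch of that accounting, with an assumed conversion efficiency and illustrative input values rather than the study's actual figures:

```python
# Illustrative net-energy accounting for pretreatment + anaerobic digestion
# with CHP, in the spirit of the comparison above. All numbers, including the
# 35% electrical conversion efficiency, are assumptions, not study values.

LHV_CH4_KJ_PER_L = 35.8   # lower heating value of methane, ~35.8 kJ/L
ELEC_EFF = 0.35           # assumed CHP electrical conversion efficiency

def net_electrical_energy(methane_l_per_kg, pretreat_input_kj_per_kg):
    """Net electrical energy (kJ per kg feedstock): CHP output minus pretreatment input."""
    gross = methane_l_per_kg * LHV_CH4_KJ_PER_L * ELEC_EFF
    return gross - pretreat_input_kj_per_kg

# A high-temperature pretreatment with a large heat input can go net-negative
# even when it raises methane yield, which is the qualitative result reported.
alkaline = net_electrical_energy(methane_l_per_kg=354, pretreat_input_kj_per_kg=500)
lhw = net_electrical_energy(methane_l_per_kg=250, pretreat_input_kj_per_kg=4000)
```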

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, J.E.; Weathers, P.J.; McConville, F.X.

    Apple pomace (the pulp residue from pressing apple juice) is an abundant waste product and presents an expensive disposal problem. A typical (50,000 gal. juice/day) apple juice company in central Massachusetts produces 100 tons of pomace per day. Some of it is used as pig feed, but it is poor quality feed because of its low protein content. Most of the pomace is hauled away (at a cost of $4/ton) and landfilled (at a cost of $10/ton). If 5% (w/w) conversion of pomace to ethanol could be achieved, the need for this company to purchase No. 6 fuel oil (1000 gal/day) for cooking during processing would be eliminated. Our approach was to saccharify the pomace enzymatically, and then to carry out a yeast fermentation on the hydrolysate. We chose to use enzymatic hydrolysis instead of dilute acid hydrolysis in order to minimize pH control problems both in the fermentation phase and in the residue. The only chemical studies have concerned small subfractions of apple material: for example, cell walls have been analyzed, but they constitute only 1 to 2% of the fresh weight of the apple (about 15 to 30% of the pomace fraction). Therefore, our major problems were: (1) to optimize hydrolysis by enzyme mixtures, using weight loss and ultimate ethanol production as optimization criteria; (2) to optimize ethanol production from the hydrolysate by judicious choice of yeast strains and fermentation conditions; and (3) to achieve these optimizations consistent with minimum processing cost and energy input. We have obtained up to 5.1% (w/w) of ethanol without saccharification. We show here that hydrolysis with high levels of enzyme can enhance ethanol yield by up to 27%, to a maximum level of 6% (w/w); however, enzyme treatment may be cost-effective only at low levels, for improvement of residue compaction. 3 figures, 4 tables.

  11. Degradation of ciprofloxacin antibiotic by Homogeneous Fenton oxidation: Hybrid AHP-PROMETHEE method, optimization, biodegradability improvement and identification of oxidized by-products.

    PubMed

    Salari, Marjan; Rakhshandehroo, Gholam Reza; Nikoo, Mohammad Reza

    2018-09-01

    The main purpose of this experimental study was to optimize Homogeneous Fenton oxidation (HFO) and to identify oxidized by-products from degradation of Ciprofloxacin (CIP) using a hybrid AHP-PROMETHEE method, Response Surface Methodology (RSM), and High Performance Liquid Chromatography coupled with Mass Spectrometry (HPLC-MS). In the first step, the performances of two catalysts (FeSO4·7H2O and FeCl2·4H2O) were assessed with the hybrid AHP-PROMETHEE decision-making method. Then, RSM was utilized to examine and optimize the influence of different variables, including initial CIP concentration, Fe2+ concentration, [H2O2]/[Fe2+] mole ratio, and initial pH as independent variables, on CIP removal, COD removal, and sludge-to-iron ratio (SIR) as the response functions in a reaction time of 25 min. Weights of the mentioned responses as well as cost criteria were determined by the AHP model based on pairwise comparison and then used as inputs to the PROMETHEE method to develop the hybrid AHP-PROMETHEE. Based on the net flow results of this hybrid model, FeCl2·4H2O was more efficient because of its lower environmental stability as well as lower SIR production. Optimization of experiments using Central Composite Design (CCD) under RSM was then performed with the FeCl2·4H2O catalyst. Biodegradability of wastewater was determined in terms of the BOD5/COD ratio, showing that the HFO process is able to improve wastewater biodegradability from zero to 0.42. Finally, the main intermediates of degradation and degradation pathways of CIP were investigated with HPLC-MS. Major degradation pathways involving hydroxylation of both piperazine and quinolonic rings, oxidation and cleavage of the piperazine ring, and defluorination (OH/F substitution) were suggested. Copyright © 2018 Elsevier Ltd. All rights reserved.
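
    The AHP step described above derives criterion weights from a pairwise comparison matrix. Below is a minimal sketch using the common geometric-mean approximation of the principal eigenvector; the comparison matrix is invented for illustration and is not the matrix used in the study:

```python
import math

# Minimal AHP weight calculation via the geometric-mean approximation of the
# principal eigenvector. The 3x3 pairwise comparison matrix (CIP removal vs.
# COD removal vs. SIR) is hypothetical, not taken from the study.
pairwise = [
    [1.0, 3.0, 5.0],   # CIP removal judged 3x as important as COD, 5x as SIR
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
]

def ahp_weights(matrix):
    """Row geometric means, normalized to sum to 1."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

weights = ahp_weights(pairwise)  # ordered by importance: CIP > COD > SIR
```

    Such weights can then feed the PROMETHEE preference aggregation, as the hybrid method in the study does.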

  12. Managing Input during Assistive Technology Product Design

    ERIC Educational Resources Information Center

    Choi, Young Mi

    2011-01-01

    Many different sources of input are available to assistive technology innovators during the course of designing products. However, there is little information on which ones may be most effective or how they may be efficiently utilized within the design process. The aim of this project was to compare how three types of input--from simulation tools,…

  13. The Effects of Input-Enhanced Instruction on Iranian EFL Learners' Production of Appropriate and Accurate Suggestions

    ERIC Educational Resources Information Center

    Ghavamnia, M.; Eslami-Rasekh, A.; Vahid Dastjerdi, H.

    2018-01-01

    This study investigates the relative effectiveness of four types of input-enhanced instruction on the development of Iranian EFL learners' production of pragmatically appropriate and grammatically accurate suggestions. Over a 16-week course, input delivered through video clips was enhanced differently in four intact classes: (1) metapragmatic…

  14. Quantifying methane emissions from natural gas production in north-eastern Pennsylvania

    NASA Astrophysics Data System (ADS)

    Barkley, Zachary R.; Lauvaux, Thomas; Davis, Kenneth J.; Deng, Aijun; Miles, Natasha L.; Richardson, Scott J.; Cao, Yanni; Sweeney, Colm; Karion, Anna; Smith, MacKenzie; Kort, Eric A.; Schwietzke, Stefan; Murphy, Thomas; Cervone, Guido; Martins, Douglas; Maasakkers, Joannes D.

    2017-11-01

    Natural gas infrastructure releases methane (CH4), a potent greenhouse gas, into the atmosphere. The estimated emission rate associated with the production and transportation of natural gas is uncertain, hindering our understanding of its greenhouse footprint. This study presents a new application of inverse methodology for estimating regional emission rates from natural gas production and gathering facilities in north-eastern Pennsylvania. An inventory of CH4 emissions was compiled for major sources in Pennsylvania. This inventory served as input emission data for the Weather Research and Forecasting model with chemistry enabled (WRF-Chem), and atmospheric CH4 mole fraction fields were generated at 3 km resolution. Simulated atmospheric CH4 enhancements from WRF-Chem were compared to observations obtained from a 3-week flight campaign in May 2015. Modelled enhancements from sources not associated with upstream natural gas processes were assumed constant and known and were therefore removed from the optimization procedure, creating a set of observed enhancements from natural gas only. Simulated emission rates from unconventional production were then adjusted to minimize the mismatch between aircraft observations and model-simulated mole fractions for 10 flights. To evaluate the method, an aircraft mass balance calculation was performed for four flights where conditions permitted its use. Using the model optimization approach, the weighted mean emission rate from unconventional natural gas production and gathering facilities in north-eastern Pennsylvania is found to be 0.36% of total gas production, with a 2σ confidence interval between 0.27 and 0.45% of production. Similarly, the mean emission estimate using the aircraft mass balance approach is calculated to be 0.40% of regional natural gas production, with a 2σ confidence interval between 0.08 and 0.72% of production. These emission rates as a percent of production are lower than rates found in any other basin using a top-down methodology, and may be indicative of characteristics of the basin that make sources from the north-eastern Marcellus region unique.

  15. A Framework for Preliminary Design of Aircraft Structures Based on Process Information. Part 1

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    1998-01-01

    This report discusses the general framework and development of a computational tool for preliminary design of aircraft structures based on process information. The described methodology is suitable for multidisciplinary design optimization (MDO) activities associated with integrated product and process development (IPPD). The framework consists of three parts: (1) product and process definitions; (2) engineering synthesis; and (3) optimization. The product and process definitions are part of the input information provided by the design team. The backbone of the system is its ability to analyze a given structural design for performance as well as manufacturability and cost assessment. The system uses a database on material systems and manufacturing processes. Based on the identified set of design variables and an objective function, the system is capable of performing optimization subject to manufacturability, cost, and performance constraints. The accuracy of the manufacturability measures and cost models discussed here depends largely on the available data on specific methods of manufacture and assembly and the associated labor requirements. As such, our focus in this research has been on the methodology itself and not so much on its accurate implementation in an industrial setting. A three-tier approach is presented for an IPPD-MDO based design of aircraft structures. The variable-complexity cost estimation methodology and an approach for integrating manufacturing cost assessment into the design process are also discussed. This report is presented in two parts. In the first part, the design methodology is presented, and the computational design tool is described. In the second part, a prototype model of the preliminary design Tool for Aircraft Structures based on Process Information (TASPI) is described. Part two also contains an example problem that applies the methodology described here for evaluation of six different design concepts for a wing spar.

  16. Power matching between plasma generation and electrostatic acceleration in helicon electrostatic thruster

    NASA Astrophysics Data System (ADS)

    Ichihara, D.; Nakagawa, Y.; Uchigashima, A.; Iwakawa, A.; Sasoh, A.; Yamazaki, T.

    2017-10-01

    The effects of radio-frequency (RF) power on ion generation and electrostatic acceleration in a helicon electrostatic thruster were investigated at a constant discharge voltage of 300 V, using argon as the working gas at a flow rate of either 0.5 Aeq (Ampere equivalent) or 1.0 Aeq. An RF power that was even smaller than the direct-current (DC) discharge power enhanced the ionization of the working gas, thereby increasing both the ion beam current and energy. However, an excessively high RF power input resulted in their saturation, leading to an unfavorable increase in ionization cost accompanied by doubly charged ion production. From the tradeoff between ion production by the RF power and electrostatic acceleration by the DC discharge power, the thrust efficiency has a maximum value at an optimal RF-to-DC discharge power ratio of 0.6 to 1.0.

  17. Combining high biodiversity with high yields in tropical agroforests.

    PubMed

    Clough, Yann; Barkmann, Jan; Juhrbandt, Jana; Kessler, Michael; Wanger, Thomas Cherico; Anshary, Alam; Buchori, Damayanti; Cicuzza, Daniele; Darras, Kevin; Putra, Dadang Dwi; Erasmi, Stefan; Pitopang, Ramadhanil; Schmidt, Carsten; Schulze, Christian H; Seidel, Dominik; Steffan-Dewenter, Ingolf; Stenchly, Kathrin; Vidal, Stefan; Weist, Maria; Wielgoss, Arno Christian; Tscharntke, Teja

    2011-05-17

    Local and landscape-scale agricultural intensification is a major driver of global biodiversity loss. Controversially discussed solutions include wildlife-friendly farming or combining high-intensity farming with land-sparing for nature. Here, we integrate biodiversity and crop productivity data for smallholder cacao in Indonesia to exemplify for tropical agroforests that there is little relationship between yield and biodiversity under current management, opening substantial opportunities for wildlife-friendly management. Species richness of trees, fungi, invertebrates, and vertebrates did not decrease with yield. Moderate shade, adequate labor, and input level can be combined with a complex habitat structure to provide high biodiversity as well as high yields. Although livelihood impacts are held up as a major obstacle for wildlife-friendly farming in the tropics, our results suggest that in some situations, agroforests can be designed to optimize both biodiversity and crop production benefits without adding pressure to convert natural habitat to farmland.

  18. Combining high biodiversity with high yields in tropical agroforests

    PubMed Central

    Clough, Yann; Barkmann, Jan; Juhrbandt, Jana; Kessler, Michael; Wanger, Thomas Cherico; Anshary, Alam; Buchori, Damayanti; Cicuzza, Daniele; Darras, Kevin; Putra, Dadang Dwi; Erasmi, Stefan; Pitopang, Ramadhanil; Schmidt, Carsten; Schulze, Christian H.; Seidel, Dominik; Steffan-Dewenter, Ingolf; Stenchly, Kathrin; Vidal, Stefan; Weist, Maria; Wielgoss, Arno Christian; Tscharntke, Teja

    2011-01-01

    Local and landscape-scale agricultural intensification is a major driver of global biodiversity loss. Controversially discussed solutions include wildlife-friendly farming or combining high-intensity farming with land-sparing for nature. Here, we integrate biodiversity and crop productivity data for smallholder cacao in Indonesia to exemplify for tropical agroforests that there is little relationship between yield and biodiversity under current management, opening substantial opportunities for wildlife-friendly management. Species richness of trees, fungi, invertebrates, and vertebrates did not decrease with yield. Moderate shade, adequate labor, and input level can be combined with a complex habitat structure to provide high biodiversity as well as high yields. Although livelihood impacts are held up as a major obstacle for wildlife-friendly farming in the tropics, our results suggest that in some situations, agroforests can be designed to optimize both biodiversity and crop production benefits without adding pressure to convert natural habitat to farmland. PMID:21536873

  19. Topology and boundary shape optimization as an integrated design tool

    NASA Technical Reports Server (NTRS)

    Bendsoe, Martin Philip; Rodrigues, Helder Carrico

    1990-01-01

    The optimal topology of a two dimensional linear elastic body can be computed by regarding the body as a domain of the plane with a high density of material. Such an optimal topology can then be used as the basis for a shape optimization method that computes the optimal form of the boundary curves of the body. This results in an efficient and reliable design tool, which can be implemented via common FEM mesh generator and CAD type input-output facilities.

  20. Efficient dynamic optimization of logic programs

    NASA Technical Reports Server (NTRS)

    Laird, Phil

    1992-01-01

    A summary is given of the dynamic optimization approach to speed up learning for logic programs. The problem is to restructure a recursive program into an equivalent program whose expected performance is optimal for an unknown but fixed population of problem instances. We define the term 'optimal' relative to the source of input instances and sketch an algorithm that can come within a logarithmic factor of optimal with high probability. Finally, we show that finding high-utility unfolding operations (such as EBG) can be reduced to clause reordering.
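
    The clause-reordering target mentioned at the end has a classical cost model: if clauses are tried in order until one succeeds, sorting them by cost divided by success probability minimizes expected cost. A small illustrative sketch under that simplified model (the probabilities and costs are invented, and the model is a textbook abstraction, not the paper's full algorithm):

```python
# Hedged sketch of clause reordering: clauses are tried in sequence until one
# succeeds, and the classic sequencing rule orders them by cost/p_success.
# All numbers are invented for illustration.

def expected_cost(clauses):
    """Expected work when clauses, given as (cost, p_success) pairs, are tried in order."""
    total, p_reach = 0.0, 1.0
    for cost, p in clauses:
        total += p_reach * cost       # pay this clause's cost if we reach it
        p_reach *= (1.0 - p)          # probability we fall through to the next
    return total

clauses = [(5.0, 0.9), (1.0, 0.2), (2.0, 0.6)]
best = sorted(clauses, key=lambda cp: cp[0] / cp[1])  # cheap-and-likely first
improvement = expected_cost(clauses) - expected_cost(best)
```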

  1. INFANT HEALTH PRODUCTION FUNCTIONS: WHAT A DIFFERENCE THE DATA MAKE

    PubMed Central

    Reichman, Nancy E.; Corman, Hope; Noonan, Kelly; Dave, Dhaval

    2008-01-01

    We examine the extent to which infant health production functions are sensitive to model specification and measurement error. We focus on the importance of typically unobserved but theoretically important variables (TUVs), other non-standard covariates (NSCs), input reporting, and characterization of infant health. The TUVs represent wantedness, taste for risky behavior, and maternal health endowment. The NSCs include father characteristics. We estimate the effects of prenatal drug use, prenatal cigarette smoking, and first-trimester prenatal care on birth weight, low birth weight, and a measure of abnormal infant health conditions. We compare estimates using self-reported inputs versus input measures that combine information from medical records and self-reports. We find that TUVs and NSCs are significantly associated with both inputs and outcomes, but that excluding them from infant health production functions does not appreciably affect the input estimates. However, using self-reported inputs leads to overestimated effects of inputs, particularly prenatal care, on outcomes, and using a direct measure of infant health does not always yield input estimates similar to those obtained with birth weight outcomes. The findings have implications for research, data collection, and public health policy. PMID:18792077

  2. Estimated anthropogenic nitrogen and phosphorus inputs to the land surface of the conterminous United States--1992, 1997, and 2002

    USGS Publications Warehouse

    Sprague, Lori A.; Gronberg, Jo Ann M.

    2013-01-01

    Anthropogenic inputs of nitrogen and phosphorus to each county in the conterminous United States and to the watersheds of 495 surface-water sites studied as part of the U.S. Geological Survey National Water-Quality Assessment Program were quantified for the years 1992, 1997, and 2002. Estimates of inputs of nitrogen and phosphorus from biological fixation by crops (for nitrogen only), human consumption, crop production for human consumption, animal production for human consumption, animal consumption, and crop production for animal consumption for each county are provided in a tabular dataset. These county-level estimates were allocated to the watersheds of the surface-water sites to estimate watershed-level inputs from the same sources; these estimates also are provided in a tabular dataset, together with calculated estimates of net import of food and net import of feed and previously published estimates of inputs from atmospheric deposition, fertilizer, and recoverable manure. The previously published inputs are provided for each watershed so that final estimates of total anthropogenic nutrient inputs could be calculated. Estimates of total anthropogenic inputs are presented together with previously published estimates of riverine loads of total nitrogen and total phosphorus for reference.

  3. Atmospheric mercury footprints of nations.

    PubMed

    Liang, Sai; Wang, Yafei; Cinnirella, Sergio; Pirrone, Nicola

    2015-03-17

    The Minamata Convention was established to protect humans and the natural environment from the adverse effects of mercury emissions. A cogent assessment of mercury emissions is required to help implement the Minamata Convention. Here, we use an environmentally extended multi-regional input-output model to calculate atmospheric mercury footprints of nations based on upstream production (meaning direct emissions from the production activities of a nation), downstream production (meaning both direct and indirect emissions caused by the production activities of a nation), and consumption (meaning both direct and indirect emissions caused by final consumption of goods and services in a nation). Results show that nations function differently within global supply chains. Developed nations usually have larger consumption-based emissions than up- and downstream production-based emissions. India, South Korea, and Taiwan have larger downstream production-based emissions than their upstream production- and consumption-based emissions. Developed nations (e.g., United States, Japan, and Germany) are in part responsible for mercury emissions of developing nations (e.g., China, India, and Indonesia). Our findings indicate that global mercury abatement should focus on multiple stages of global supply chains. We propose three initiatives for global mercury abatement, comprising the establishment of mercury control technologies of upstream producers, productivity improvement of downstream producers, and behavior optimization of final consumers.
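
    The production- versus consumption-based accounting described above rests on the Leontief input-output framework: total output x = (I - A)^(-1) y for final demand y, with footprints obtained by weighting output by direct emission intensities. A toy two-sector sketch follows; all coefficients are invented, and real MRIO tables span thousands of region-sectors:

```python
# Sketch of a consumption-based emission footprint in a two-sector
# input-output model: footprint = intensities * Leontief inverse * demand.
# The technical-coefficient matrix and all numbers are illustrative only.

A = [[0.1, 0.2],         # technical coefficients: input per unit of output
     [0.3, 0.1]]
intensity = [2.0, 0.5]   # direct emissions per unit of output in each sector
demand = [100.0, 50.0]   # final demand vector y

# Leontief inverse L = (I - A)^(-1), closed form for the 2x2 case.
a, b, c, d = 1 - A[0][0], -A[0][1], -A[1][0], 1 - A[1][1]
det = a * d - b * c
L = [[d / det, -b / det], [-c / det, a / det]]

# Total output x = L y, then emissions attributed to final consumption.
x = [sum(L[i][j] * demand[j] for j in range(2)) for i in range(2)]
footprint = sum(intensity[i] * x[i] for i in range(2))
```

    Because the Leontief inverse propagates demand through the whole supply chain, the footprint captures indirect emissions embodied in imports, which is what lets the analysis attribute developing-country emissions to developed-country consumption.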

  4. Prioritizing Crop Management to Increase Nitrogen Use Efficiency in Australian Sugarcane Crops.

    PubMed

    Thorburn, Peter J; Biggs, Jody S; Palmer, Jeda; Meier, Elizabeth A; Verburg, Kirsten; Skocaj, Danielle M

    2017-01-01

    Sugarcane production relies on the application of large amounts of nitrogen (N) fertilizer. However, application of N in excess of crop needs can lead to loss of N to the environment, which can negatively impact ecosystems. This is of particular concern in Australia, where the majority of sugarcane is grown within catchments that drain directly into the World Heritage listed Great Barrier Reef Marine Park. Multiple factors that impact crop yield and N inputs of sugarcane production systems can affect N use efficiency (NUE), yet the efficacy of many of these factors has not been examined in detail. We undertook an extensive simulation analysis of NUE in Australian sugarcane production systems to investigate (1) the impacts of climate on factors determining NUE, (2) the range and drivers of NUE, and (3) regional variation in sugarcane N requirements. We found that the interactions between climate, soils, and management produced a wide range of simulated NUE, ranging from ∼0.3 Mg cane (kg N)-1, where yields were low (i.e., <50 Mg ha-1) and N inputs were high, to >5 Mg cane (kg N)-1 in plant crops where yields were high and N inputs low. Of the management practices simulated (N fertilizer rate, timing, and splitting; fallow management; tillage intensity; and in-field traffic management), the only practice that significantly influenced NUE in ratoon crops was N fertilizer application rate. N rate also influenced NUE in plant crops, together with the management of the preceding fallow. In addition, there is regional variation in N fertilizer requirement that could make N fertilizer recommendations more specific. While our results show that complex interrelationships exist between climate, crop growth, N fertilizer rates, and N losses to the environment, they highlight the priority that should be placed on optimizing N application rate and fallow management to improve NUE in Australian sugarcane production systems. New initiatives in seasonal climate forecasting, decision support systems, and enhanced efficiency fertilizers have potential for making N fertilizer management more site specific, an action that should facilitate increased NUE.
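
    The NUE range quoted above follows from the simple partial-factor-productivity definition, cane yield divided by N applied. A minimal sketch with illustrative yield and N-rate combinations chosen to reproduce the reported extremes (these are not actual simulation outputs):

```python
# Partial-factor-productivity form of NUE: cane yield divided by N fertilizer
# applied. The yield and N-rate values below are hypothetical, picked only to
# illustrate how the ~0.3 to >5 Mg cane (kg N)-1 range can arise.

def nue(yield_mg_per_ha, n_rate_kg_per_ha):
    """NUE in Mg cane per kg N applied."""
    return yield_mg_per_ha / n_rate_kg_per_ha

low = nue(yield_mg_per_ha=45, n_rate_kg_per_ha=160)   # low yield, high N input
high = nue(yield_mg_per_ha=120, n_rate_kg_per_ha=20)  # high-yield plant crop, low N
```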

  5. Prioritizing Crop Management to Increase Nitrogen Use Efficiency in Australian Sugarcane Crops

    PubMed Central

    Thorburn, Peter J.; Biggs, Jody S.; Palmer, Jeda; Meier, Elizabeth A.; Verburg, Kirsten; Skocaj, Danielle M.

    2017-01-01

    Sugarcane production relies on the application of large amounts of nitrogen (N) fertilizer. However, application of N in excess of crop needs can lead to loss of N to the environment, which can negatively impact ecosystems. This is of particular concern in Australia, where the majority of sugarcane is grown within catchments that drain directly into the World Heritage listed Great Barrier Reef Marine Park. Multiple factors that impact crop yield and N inputs of sugarcane production systems can affect N use efficiency (NUE), yet the efficacy of many of these factors has not been examined in detail. We undertook an extensive simulation analysis of NUE in Australian sugarcane production systems to investigate (1) the impacts of climate on factors determining NUE, (2) the range and drivers of NUE, and (3) regional variation in sugarcane N requirements. We found that the interactions between climate, soils, and management produced a wide range of simulated NUE, ranging from ∼0.3 Mg cane (kg N)-1, where yields were low (i.e., <50 Mg ha-1) and N inputs were high, to >5 Mg cane (kg N)-1 in plant crops where yields were high and N inputs low. Of the management practices simulated (N fertilizer rate, timing, and splitting; fallow management; tillage intensity; and in-field traffic management), the only practice that significantly influenced NUE in ratoon crops was N fertilizer application rate. N rate also influenced NUE in plant crops, together with the management of the preceding fallow. In addition, there is regional variation in N fertilizer requirement that could make N fertilizer recommendations more specific. While our results show that complex interrelationships exist between climate, crop growth, N fertilizer rates, and N losses to the environment, they highlight the priority that should be placed on optimizing N application rate and fallow management to improve NUE in Australian sugarcane production systems. New initiatives in seasonal climate forecasting, decision support systems, and enhanced efficiency fertilizers have potential for making N fertilizer management more site specific, an action that should facilitate increased NUE. PMID:28928756

  6. Optimal wavelength-space crossbar switches for supercomputer optical interconnects.

    PubMed

    Roudas, Ioannis; Hemenway, B Roe; Grzybowski, Richard R; Karinou, Fotini

    2012-08-27

    We propose a most economical design of the Optical Shared MemOry Supercomputer Interconnect System (OSMOSIS) all-optical, wavelength-space crossbar switch fabric. It is shown, by analysis and simulation, that the total number of on-off gates required for the proposed N × N switch fabric can scale asymptotically as N ln N if the number of input/output ports N can be factored into a product of small primes. This is of the same order of magnitude as Shannon's lower bound for switch complexity, according to which the minimum number of two-state switches required for the construction of an N × N permutation switch is log2(N!).
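
    The scaling claim can be checked numerically: by Stirling's approximation, log2(N!) grows as N·log2(N), so an N ln N gate count differs from Shannon's bound only by a bounded factor. A quick sketch (the specific port counts are arbitrary):

```python
import math

# Numerical comparison of an N*ln(N) gate-count scaling with Shannon's lower
# bound log2(N!) for an N x N permutation switch. Port counts are arbitrary;
# the point is that the two expressions grow at the same order of magnitude.

def shannon_lower_bound(n):
    """log2(n!) two-state switches, computed via lgamma to avoid huge factorials."""
    return math.lgamma(n + 1) / math.log(2)

ratios = [shannon_lower_bound(n) / (n * math.log(n)) for n in (64, 256, 4096)]
# The ratios stay bounded (slowly approaching 1/ln 2 ~ 1.44 as n grows),
# confirming the same-order-of-magnitude claim.
```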

  7. Optimization Strategies for Hardware-Based Cofactorization

    NASA Astrophysics Data System (ADS)

    Loebenberger, Daniel; Putzka, Jens

    We use the specific structure of the inputs to the cofactorization step in the general number field sieve (GNFS) in order to optimize the runtime for the cofactorization step on a hardware cluster. An optimal distribution of bitlength-specific ECM modules is proposed and compared to existing ones. With our optimizations we obtain a speedup between 17% and 33% of the cofactorization step of the GNFS when compared to the runtime of an unoptimized cluster.

  8. C-SWAT: The Soil and Water Assessment Tool with consolidated input files in alleviating computational burden of recursive simulations

    USDA-ARS?s Scientific Manuscript database

    The temptation to include model parameters and high resolution input data together with the availability of powerful optimization and uncertainty analysis algorithms has significantly enhanced the complexity of hydrologic and water quality modeling. However, the ability to take advantage of sophist...

  9. Multi-Response Optimization of WEDM Process Parameters Using Taguchi Based Desirability Function Analysis

    NASA Astrophysics Data System (ADS)

    Majumder, Himadri; Maity, Kalipada

    2018-03-01

    Shape memory alloys have a unique capability to return to their original shape after physical deformation upon application of heat, thermo-mechanical load, or magnetic load. In this experimental investigation, desirability function analysis (DFA), a multi-attribute decision-making method, was utilized to find the optimum input parameter setting during wire electrical discharge machining (WEDM) of Ni-Ti shape memory alloy. Four critical machining parameters, namely pulse on time (TON), pulse off time (TOFF), wire feed (WF), and wire tension (WT), were taken as machining inputs for the experiments to optimize three interconnected responses: cutting speed, kerf width, and surface roughness. The input parameter combination TON = 120 μs, TOFF = 55 μs, WF = 3 m/min, and WT = 8 kg-F was found to produce the optimum results. The optimum process parameters for each desired response were also attained using Taguchi’s signal-to-noise ratio. A confirmation test validated the optimum machining parameter combination, affirming that DFA is a competent approach for selecting optimum input parameters for the ideal response quality in WEDM of Ni-Ti shape memory alloy.
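
    Desirability function analysis combines the individual responses by mapping each onto a [0, 1] desirability and taking their geometric mean. A minimal Derringer-style sketch follows; the bounds, measured values, and units are hypothetical, not the paper's data:

```python
import math

# Minimal Derringer-style desirability sketch for the three responses above:
# cutting speed (larger-is-better), kerf width and surface roughness
# (smaller-is-better). All bounds and measurements are invented.

def d_larger(y, lo, hi):
    """Desirability for a larger-is-better response: linear ramp on [lo, hi]."""
    return min(1.0, max(0.0, (y - lo) / (hi - lo)))

def d_smaller(y, lo, hi):
    """Desirability for a smaller-is-better response: linear ramp on [lo, hi]."""
    return min(1.0, max(0.0, (hi - y) / (hi - lo)))

def composite(ds):
    """Overall desirability: geometric mean of the individual desirabilities."""
    return math.prod(ds) ** (1.0 / len(ds))

D = composite([
    d_larger(3.2, lo=1.0, hi=4.0),      # cutting speed, mm/min (assumed units)
    d_smaller(0.30, lo=0.25, hi=0.45),  # kerf width, mm
    d_smaller(2.1, lo=1.5, hi=3.5),     # surface roughness Ra, um
])
# The parameter setting with the highest composite D is chosen as the optimum.
```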

  10. Method of generating features optimal to a dataset and classifier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruillard, Paul J.; Gosink, Luke J.; Jarman, Kenneth D.

    A method of generating features optimal to a particular dataset and classifier is disclosed. A dataset of messages is inputted and a classifier is selected. An algebra of features is encoded. Computable features that are capable of describing the dataset from the algebra of features are selected. Irredundant features that are optimal for the classifier and the dataset are selected.

  11. 19 CFR 351.407 - Calculation of constructed value and cost of production.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... production. (See section 773(f) of the Act.) (b) Determination of value under the major input rule. For purposes of section 773(f)(3) of the Act, the Secretary normally will determine the value of a major input... to the affiliated person for the major input; (2) The amount usually reflected in sales of the major...

  12. 19 CFR 351.407 - Calculation of constructed value and cost of production.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... production. (See section 773(f) of the Act.) (b) Determination of value under the major input rule. For purposes of section 773(f)(3) of the Act, the Secretary normally will determine the value of a major input... to the affiliated person for the major input; (2) The amount usually reflected in sales of the major...

  13. 19 CFR 351.407 - Calculation of constructed value and cost of production.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... production. (See section 773(f) of the Act.) (b) Determination of value under the major input rule. For purposes of section 773(f)(3) of the Act, the Secretary normally will determine the value of a major input... to the affiliated person for the major input; (2) The amount usually reflected in sales of the major...

  14. 19 CFR 351.407 - Calculation of constructed value and cost of production.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... production. (See section 773(f) of the Act.) (b) Determination of value under the major input rule. For purposes of section 773(f)(3) of the Act, the Secretary normally will determine the value of a major input... to the affiliated person for the major input; (2) The amount usually reflected in sales of the major...

  15. 19 CFR 351.407 - Calculation of constructed value and cost of production.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... production. (See section 773(f) of the Act.) (b) Determination of value under the major input rule. For purposes of section 773(f)(3) of the Act, the Secretary normally will determine the value of a major input... to the affiliated person for the major input; (2) The amount usually reflected in sales of the major...

  16. A single-layer platform for Boolean logic and arithmetic through DNA excision in mammalian cells

    PubMed Central

    Weinberg, Benjamin H.; Hang Pham, N. T.; Caraballo, Leidy D.; Lozanoski, Thomas; Engel, Adrien; Bhatia, Swapnil; Wong, Wilson W.

    2017-01-01

    Genetic circuits engineered for mammalian cells often require extensive fine-tuning to perform their intended functions. To overcome this problem, we present a generalizable biocomputing platform that can engineer genetic circuits which function in human cells with minimal optimization. We used our Boolean Logic and Arithmetic through DNA Excision (BLADE) platform to build more than 100 multi-input-multi-output circuits. We devised a quantitative metric to evaluate the performance of the circuits in human embryonic kidney and Jurkat T cells. Of 113 circuits analysed, 109 functioned (96.5%) with the correct specified behavior without any optimization. We used our platform to build a three-input, two-output Full Adder and six-input, one-output Boolean Logic Look Up Table. We also used BLADE to design circuits with temporal small molecule-mediated inducible control and circuits that incorporate CRISPR/Cas9 to regulate endogenous mammalian genes. PMID:28346402
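
    The three-input, two-output Full Adder mentioned above computes a sum bit and a carry-out bit from two addend bits and a carry-in. A plain software model of that Boolean function (the genetic BLADE realization itself is not modeled here) looks like:

```python
# Software model of a 3-input, 2-output Full Adder, the same logic
# function realized biologically by the BLADE platform.

def full_adder(a, b, cin):
    """Return (sum, carry_out) for one-bit inputs."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

# Exhaustive truth table over the 8 input combinations.
table = {(a, b, c): full_adder(a, b, c)
         for a in (0, 1) for b in (0, 1) for c in (0, 1)}
for (a, b, c), (s, cout) in sorted(table.items()):
    print(a, b, c, "->", s, cout)
```

    The invariant s + 2*cout == a + b + cin holds for every row, which is the arithmetic meaning of the two outputs.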

  17. Control design methods for floating wind turbines for optimal disturbance rejection

    NASA Astrophysics Data System (ADS)

    Lemmer, Frank; Schlipf, David; Cheng, Po Wen

    2016-09-01

    An analysis of the floating wind turbine as a multi-input-multi-output system, investigating the effect of the control inputs on the system outputs, is shown. These effects are compared to those of the disturbances from wind and waves in order to give insights for the selection of the control layout. The frequencies at which the disturbances have the largest impact on the outputs, owing to the limited effect of the controlled variables, are identified. Finally, an optimal controller is designed as a benchmark and compared to a conventional PI-controller using only the rotor speed as input. Here, the previously found system properties, especially the difficulty of damping responses to wave excitation, are confirmed and verified through a spectral analysis with realistic environmental conditions. This comparison also assesses the quality of the employed simplified linear simulation model relative to the nonlinear model and shows that such an efficient frequency-domain evaluation for control design is feasible.

  18. An exact algebraic solution of the infimum in H-infinity optimization with output feedback

    NASA Technical Reports Server (NTRS)

    Chen, Ben M.; Saberi, Ali; Ly, Uy-Loi

    1991-01-01

    This paper presents a simple and noniterative procedure for the computation of the exact value of the infimum in the standard H-infinity-optimal control with output feedback. The problem formulation is general and does not place any restrictions on the direct feedthrough terms between the control input and the controlled output variables, and between the disturbance input and the measurement output variables. The method is applicable to systems that satisfy (1) the transfer function from the control input to the controlled output is right-invertible and has no invariant zeros on the j(w) axis and, (2) the transfer function from the disturbance to the measurement output is left-invertible and has no invariant zeros on the j(w) axis. A set of necessary and sufficient conditions for the solvability of H-infinity-almost disturbance decoupling problem via measurement feedback with internal stability is also given.

  19. Optimizing information flow in small genetic networks. IV. Spatial coupling

    NASA Astrophysics Data System (ADS)

    Sokolowski, Thomas R.; Tkačik, Gašper

    2015-06-01

    We typically think of cells as responding to external signals independently by regulating their gene expression levels, yet they often locally exchange information and coordinate. Can such spatial coupling be of benefit for conveying signals subject to gene regulatory noise? Here we extend our information-theoretic framework for gene regulation to spatially extended systems. As an example, we consider a lattice of nuclei responding to a concentration field of a transcriptional regulator (the input) by expressing a single diffusible target gene. When input concentrations are low, diffusive coupling markedly improves information transmission; optimal gene activation functions also systematically change. A qualitatively different regulatory strategy emerges where individual cells respond to the input in a nearly steplike fashion that is subsequently averaged out by strong diffusion. While motivated by early patterning events in the Drosophila embryo, our framework is generically applicable to spatially coupled stochastic gene expression models.

  20. Growth promotion and colonization of switchgrass (Panicum virgatum) cv. Alamo by bacterial endophyte Burkholderia phytofirmans strain PsJN

    PubMed Central

    2012-01-01

    Background Switchgrass is one of the most promising bioenergy crop candidates for the US. It gives relatively high biomass yield and can grow on marginal lands. However, its yields vary from year to year and from location to location. Thus it is imperative to develop a low input and sustainable switchgrass feedstock production system. One of the most feasible ways to increase biomass yields is to harness benefits of microbial endophytes. Results We demonstrate that one of the most studied plant growth promoting bacterial endophytes, Burkholderia phytofirmans strain PsJN, is able to colonize and significantly promote growth of switchgrass cv. Alamo under in vitro, growth chamber, and greenhouse conditions. In several in vitro experiments, the average fresh weight of PsJN-inoculated plants was approximately 50% higher than non-inoculated plants. When one-month-old seedlings were grown in a growth chamber for 30 days, the PsJN-inoculated Alamo plants had significantly higher shoot and root biomass compared to controls. Biomass yield (dry weight) averaged from five experiments was 54.1% higher in the inoculated treatment compared to non-inoculated control. Similar results were obtained in greenhouse experiments with transplants grown in 4-gallon pots for two months. The inoculated plants exhibited more early tillers and persistent growth vigor with 48.6% higher biomass than controls. We also found that PsJN could significantly promote growth of switchgrass cv. Alamo under sub-optimal conditions. However, PsJN-mediated growth promotion in switchgrass is genotype specific. Conclusions Our results show B. phytofirmans strain PsJN significantly promotes growth of switchgrass cv. Alamo under different conditions, especially in the early growth stages leading to enhanced production of tillers. This phenomenon may benefit switchgrass establishment in the first year. Moreover, PsJN significantly stimulated growth of switchgrass cv. 
Alamo under sub-optimal conditions, indicating that the use of the beneficial bacterial endophytes may boost switchgrass growth on marginal lands and significantly contribute to the development of a low input and sustainable feedstock production system. PMID:22647367

  1. Is the economic value of hydrological forecasts related to their quality? Case study of the hydropower sector.

    NASA Astrophysics Data System (ADS)

    Cassagnole, Manon; Ramos, Maria-Helena; Thirel, Guillaume; Gailhard, Joël; Garçon, Rémy

    2017-04-01

    The improvement of a forecasting system and the evaluation of the quality of its forecasts are recurrent steps in operational practice. However, the evaluation of forecast value or forecast usefulness for better decision-making is, to our knowledge, less frequent, even though it might be essential in many sectors such as hydropower and flood warning. In the hydropower sector, forecast value can be quantified by the economic gain obtained with the optimization of operations or reservoir management rules. Several hydropower operational systems use medium-range forecasts (up to 7-10 days ahead) and energy price predictions to optimize hydropower production. Hence, the operation of hydropower systems, including the management of water in reservoirs, is impacted by weather, climate and hydrologic variability as well as extreme events. In order to assess how the quality of hydrometeorological forecasts impacts operations, it is essential to first understand if and how operations and management rules are sensitive to input predictions of different quality. This study investigates how 7-day-ahead deterministic and ensemble streamflow forecasts of different quality might impact the economic gains of energy production. It is based on a research model developed by Irstea and EDF to investigate issues relevant to the links between quality and value of forecasts in the optimisation of energy production at the short range. Based on streamflow forecasts and pre-defined management constraints, the model defines the best hours (i.e., the hours with high energy prices) to produce electricity. To highlight the link between forecast quality and economic value, we built several synthetic ensemble forecasts based on observed streamflow time series. These inputs are generated in a controlled environment in order to obtain forecasts of different quality in terms of accuracy and reliability. These forecasts are used to assess the sensitivity of the decision model to forecast quality. 
Relationships between forecast quality and economic value are discussed. This work is part of the IMPREX project, a research project supported by the European Commission under the Horizon 2020 Framework programme, with grant No. 641811 (http://www.imprex.eu)
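
    The decision rule described in the abstract, producing during the hours with the highest energy prices that the forecast volume can support, can be illustrated with a greedy sketch. The prices and the number of producible hours below are invented placeholders, not outputs of the Irstea/EDF research model.

```python
# Greedy "best hours" selection: produce electricity during the
# highest-priced hours, up to the hours the streamflow forecast allows.
# Prices and the producible-hour budget are illustrative assumptions.

def best_hours(prices, producible_hours):
    """Pick the `producible_hours` hours with the highest prices."""
    ranked = sorted(range(len(prices)), key=lambda h: prices[h], reverse=True)
    return sorted(ranked[:producible_hours])

prices = [31, 28, 55, 72, 70, 40, 35, 90, 88, 45, 30, 29]  # EUR/MWh, hourly
hours = best_hours(prices, producible_hours=4)
revenue = sum(prices[h] for h in hours)
print(hours, revenue)
```

    A poorer streamflow forecast would misjudge the producible-hour budget, shifting production into lower-priced hours and reducing revenue, which is exactly the quality-value link the study quantifies.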

  2. Bone marrow niche-inspired, multi-phase expansion of megakaryocytic progenitors with high polyploidization potential

    PubMed Central

    Panuganti, Swapna; Papoutsakis, Eleftherios T.; Miller, William M.

    2010-01-01

    Background Megakaryopoiesis encompasses hematopoietic stem and progenitor cell (HSPC) commitment to the megakaryocytic cell (Mk) lineage, expansion of Mk progenitors and mature Mks, polyploidization, and platelet release. pH and pO2 increase from the endosteum to sinuses, and different cytokines are important for various stages of differentiation. We hypothesized that mimicking the changing conditions during Mk differentiation in the bone marrow would facilitate expansion of progenitors that could generate many high-ploidy Mks. Methods CD34+ HSPCs were cultured at pH 7.2 and 5% O2 with stem cell factor (SCF), thrombopoietin (Tpo), and all combinations of Interleukin (IL)-3, IL-6, IL-11, and Flt-3 ligand to promote Mk progenitor expansion. Cells cultured with selected cytokines were shifted to pH 7.4 and 20% O2 to generate mature Mks, and treated with nicotinamide to enhance polyploidization. Results Using Tpo+SCF+IL-3+IL-11, we obtained 3.5 CD34+CD41+ Mk progenitors per input HSPC, while increasing purity from 1% to 17%. Cytokine cocktails with IL-3 yielded more progenitors and mature Mks, although the purities were lower. Mk production was much greater at higher pH and pO2. Although fewer progenitors were present, shifting to 20% O2/pH 7.4 at day 5 (versus days 7 or 9) yielded the greatest mature Mk production, 14 per input HSPC. Nicotinamide more than doubled the percentage of high-ploidy Mks to 40%. Discussion We obtained extensive Mk progenitor expansion, while ensuring that the progenitors could produce high-ploidy Mks. We anticipate that subsequent optimization of cytokines for mature Mk production and delayed nicotinamide addition will greatly increase high-ploidy Mk production. PMID:20482285

  3. Bone marrow niche-inspired, multiphase expansion of megakaryocytic progenitors with high polyploidization potential.

    PubMed

    Panuganti, Swapna; Papoutsakis, Eleftherios T; Miller, William M

    2010-10-01

    Megakaryopoiesis encompasses hematopoietic stem and progenitor cell (HSPC) commitment to the megakaryocytic cell (Mk) lineage, expansion of Mk progenitors and mature Mks, polyploidization and platelet release. pH and pO2 increase from the endosteum to sinuses, and different cytokines are important for various stages of differentiation. We hypothesized that mimicking the changing conditions during Mk differentiation in the bone marrow would facilitate expansion of progenitors that could generate many high-ploidy Mks. CD34+ HSPCs were cultured at pH 7.2 and 5% O2 with stem cell factor (SCF), thrombopoietin (Tpo) and all combinations of Interleukin (IL)-3, IL-6, IL-11 and Flt-3 ligand to promote Mk progenitor expansion. Cells cultured with selected cytokines were shifted to pH 7.4 and 20% O2 to generate mature Mks, and treated with nicotinamide (NIC) to enhance polyploidization. Using Tpo + SCF + IL-3 + IL-11, we obtained 3.5 CD34+ CD41+ Mk progenitors per input HSPC, while increasing purity from 1% to 17%. Cytokine cocktails with IL-3 yielded more progenitors and mature Mks, although the purities were lower. Mk production was much greater at higher pH and pO2. Although fewer progenitors were present, shifting to 20% O2 /pH 7.4 at day 5 (versus days 7 or 9) yielded the greatest mature Mk production, 14 per input HSPC. NIC more than doubled the percentage of high-ploidy Mks to 40%. We obtained extensive Mk progenitor expansion, while ensuring that the progenitors could produce high-ploidy Mks. We anticipate that subsequent optimization of cytokines for mature Mk production and delayed NIC addition will greatly increase high-ploidy Mk production.

  4. Real Time Voltage and Current Phase Shift Analyzer for Power Saving Applications

    PubMed Central

    Krejcar, Ondrej; Frischer, Robert

    2012-01-01

    Nowadays, high importance is given to low-energy devices (such as refrigerators, deep-freezers, washing machines, pumps, etc.) that produce reactive power in power lines, which can be optimized (reduced). Reactive power is the main component that overloads power lines and brings excessive thermal stress to conductors. If reactive power is optimized, electricity consumption can be lowered significantly (by 10 to 30%, varying between countries). This paper examines and discusses the development of a measuring device for analyzing reactive power. The main problem, however, is the precise real-time measurement of the input and output voltage and current. Such measurement quality is needed to allow adequate corrective intervention (feedback which reduces or fully compensates reactive power). Several other issues, such as accuracy and measurement speed, must be examined while designing this device. The price and the size of the final product need to remain low, as they are two important parameters of this solution. PMID:23112662
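
    For context, the quantities such an analyzer derives from the measured voltage-current phase shift follow the textbook definitions: with RMS voltage V, RMS current I and phase angle phi, active power is P = V I cos(phi) and reactive power is Q = V I sin(phi). The numeric values below are illustrative only.

```python
# Active, reactive and apparent power from RMS voltage, RMS current and
# the phase shift between them. Example values, not measured data.
import math

def powers(v_rms, i_rms, phi_rad):
    s = v_rms * i_rms              # apparent power (VA)
    p = s * math.cos(phi_rad)      # active power (W)
    q = s * math.sin(phi_rad)      # reactive power (var)
    return p, q, s

p, q, s = powers(230.0, 5.0, math.radians(30.0))
print(round(p, 1), round(q, 1), round(s, 1))
```

    Driving phi toward zero (power factor correction) makes Q vanish while P approaches the full apparent power, which is the "reduction" the device's feedback aims for.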

  5. Optimal savings and the value of population.

    PubMed

    Arrow, Kenneth J; Bensoussan, Alain; Feng, Qi; Sethi, Suresh P

    2007-11-20

    We study a model of economic growth in which an exogenously changing population enters in the objective function under total utilitarianism and into the state dynamics as the labor input to the production function. We consider an arbitrary population growth until it reaches a critical level (resp. saturation level) at which point it starts growing exponentially (resp. it stops growing altogether). This requires population as well as capital as state variables. By letting the population variable serve as the surrogate of time, we are still able to depict the optimal path and its convergence to the long-run equilibrium on a two-dimensional phase diagram. The phase diagram consists of a transient curve that reaches the classical curve associated with a positive exponential growth at the time the population reaches the critical level. In the case of an asymptotic population saturation, we expect the transient curve to approach the equilibrium as the population approaches its saturation level. Finally, we characterize the approaches to the classical curve and to the equilibrium.

  6. Optimal savings and the value of population

    PubMed Central

    Arrow, Kenneth J.; Bensoussan, Alain; Feng, Qi; Sethi, Suresh P.

    2007-01-01

    We study a model of economic growth in which an exogenously changing population enters in the objective function under total utilitarianism and into the state dynamics as the labor input to the production function. We consider an arbitrary population growth until it reaches a critical level (resp. saturation level) at which point it starts growing exponentially (resp. it stops growing altogether). This requires population as well as capital as state variables. By letting the population variable serve as the surrogate of time, we are still able to depict the optimal path and its convergence to the long-run equilibrium on a two-dimensional phase diagram. The phase diagram consists of a transient curve that reaches the classical curve associated with a positive exponential growth at the time the population reaches the critical level. In the case of an asymptotic population saturation, we expect the transient curve to approach the equilibrium as the population approaches its saturation level. Finally, we characterize the approaches to the classical curve and to the equilibrium. PMID:17984059

  7. Framework GRASP: routine library for optimized processing of aerosol remote sensing observations

    NASA Astrophysics Data System (ADS)

    Fuertes, David; Torres, Benjamin; Dubovik, Oleg; Litvinov, Pavel; Lapyonok, Tatyana; Ducos, Fabrice; Aspetsberger, Michael; Federspiel, Christian

    We present the development of a framework for the Generalized Retrieval of Aerosol and Surface Properties (GRASP) developed by Dubovik et al. (2011). The framework is a source-code project that attempts to strengthen the value of the GRASP inversion algorithm by transforming it into a library that can then be used by a group of customized application modules. The functions of the independent modules include managing the configuration of the code execution, as well as preparing the input and output. The framework provides a number of advantages in the utilization of the code. First, it loads data into the core of the scientific code directly from memory, without passing through intermediary files on disk. Second, the framework allows consecutive use of the inversion code without re-initiating the core routine when new input is received. These features are essential for optimizing the performance of data production when processing large observation sets, such as satellite images, with GRASP. Furthermore, the framework is a very convenient tool for further development, because this open-source platform is easily extended to implement new features. For example, it could accommodate loading raw data directly into the inversion code from a specific instrument not included in the default settings of the software. Finally, it will be demonstrated that, from the user's point of view, the framework provides a flexible, powerful and informative configuration system.

  8. Optimization of replacement and inspection decisions for multiple components on a power system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mauney, D.A.

    1994-12-31

    The use of optimization on the rescheduling of replacement dates provided a very proactive approach to deciding when components on individual units need to be addressed with a run/repair/replace decision. Including the effects of the time value of money, taxes, and unit need inside the spreadsheet model allowed the decision maker to concentrate on the effects of engineering input and replacement-date decisions on the final net present value (NPV). The personal computer (PC)-based model was applied to a group of 140 forced-outage-critical fossil plant tube components across a power system. The estimated resulting NPV of the optimization was in the tens of millions of dollars. This PC spreadsheet model allows the interaction of inputs from structural reliability risk assessment models, plant foreman interviews, and actual failure history on a per-component, per-unit basis across a complete power production system. The model includes not only the forced-outage performance of these components caused by tube failures but also the forecasted need for the individual units on the power system and the expected cost of their replacement power if forced off line. The use of cash flow analysis techniques in the spreadsheet model results in the calculation of an NPV for a whole combination of replacement dates. This allows rapid assessment of "what if" scenarios for major maintenance projects on a systemwide basis and not just on a unit-by-unit basis.
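
    The cash-flow logic behind such a replacement-date optimization can be sketched as a simple NPV comparison: replace a component now at a known capital cost, or defer and accept a rising expected forced-outage cost. The discount rate and all dollar figures below are hypothetical, not values from the study.

```python
# NPV comparison for a run/repair/replace decision. All figures are
# hypothetical illustrations of the kind of cash flows the model discounts.

def npv(cashflows, rate):
    """Net present value of yearly cashflows (year 0 first)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

rate = 0.08
replace_now = [-500_000, 0, 0, 0, 0]                 # capital cost up front
defer = [0, -60_000, -120_000, -240_000, -500_000]   # rising outage cost, then replace

print(npv(replace_now, rate) > npv(defer, rate))
```

    Repeating this comparison across many components and candidate dates, with unit need and taxes folded into the cash flows, is what turns the spreadsheet into a systemwide "what if" tool.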

  9. Teleportation of squeezing: Optimization using non-Gaussian resources

    NASA Astrophysics Data System (ADS)

    Dell'Anno, Fabio; de Siena, Silvio; Adesso, Gerardo; Illuminati, Fabrizio

    2010-12-01

    We study the continuous-variable quantum teleportation of states, statistical moments of observables, and scale parameters such as squeezing. We investigate the problem both in ideal and imperfect Vaidman-Braunstein-Kimble protocol setups. We show how the teleportation fidelity is maximized and the difference between output and input variances is minimized by using suitably optimized entangled resources. Specifically, we consider the teleportation of coherent squeezed states, exploiting squeezed Bell states as entangled resources. This class of non-Gaussian states, introduced by Illuminati and co-workers [F. Dell’Anno, S. De Siena, L. Albano, and F. Illuminati, Phys. Rev. A 76, 022301 (2007); F. Dell’Anno, S. De Siena, and F. Illuminati, Phys. Rev. A 81, 012333 (2010)], includes photon-added and photon-subtracted squeezed states as special cases. At variance with the case of entangled Gaussian resources, the use of entangled non-Gaussian squeezed Bell resources allows one to choose different optimization procedures that lead to inequivalent results. Performing two independent optimization procedures, one can either maximize the state teleportation fidelity, or minimize the difference between input and output quadrature variances. The two different procedures are compared depending on the degrees of displacement and squeezing of the input states and on the working conditions in ideal and nonideal setups.

  10. Predicting the Direction of Stock Market Index Movement Using an Optimized Artificial Neural Network Model.

    PubMed

    Qiu, Mingyue; Song, Yu

    2016-01-01

    In the business sector, it has always been a difficult task to predict the exact daily price of the stock market index; hence, there is a great deal of research being conducted regarding the prediction of the direction of stock price index movement. Many factors such as political events, general economic conditions, and traders' expectations may have an influence on the stock market index. There are numerous research studies that use similar indicators to forecast the direction of the stock market index. In this study, we compare two basic types of input variables to predict the direction of the daily stock market index. The main contribution of this study is the ability to predict the direction of the next day's price of the Japanese stock market index by using an optimized artificial neural network (ANN) model. To improve the prediction accuracy of the trend of the stock market index in the future, we optimize the ANN model using genetic algorithms (GA). We demonstrate and verify the predictability of stock price direction by using the hybrid GA-ANN model and then compare the performance with prior studies. Empirical results show that the Type 2 input variables can generate a higher forecast accuracy and that it is possible to enhance the performance of the optimized ANN model by selecting input variables appropriately.
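
    As a toy illustration of the GA side of the hybrid GA-ANN approach, the sketch below evolves binary masks over candidate input variables, scoring each mask by the direction hit rate of a simple sign-based predictor on synthetic data. The actual study optimizes an ANN; this stand-in replaces the network with a sign rule purely to keep the example self-contained, and all data are synthetic.

```python
# Toy GA for input-variable selection. Fitness = direction hit rate of a
# sign predictor on synthetic data; the real study uses an ANN instead.
import random

random.seed(42)

N_VARS, N_SAMPLES = 6, 200
# Synthetic data: the true direction depends on variables 0 and 3 only.
X = [[random.uniform(-1, 1) for _ in range(N_VARS)] for _ in range(N_SAMPLES)]
y = [1 if x[0] + x[3] > 0 else 0 for x in X]

def fitness(mask):
    """Direction hit rate of a sign predictor over the selected inputs."""
    if not any(mask):
        return 0.0
    hits = 0
    for x, d in zip(X, y):
        s = sum(v for v, m in zip(x, mask) if m)
        hits += int((1 if s > 0 else 0) == d)
    return hits / N_SAMPLES

def evolve(pop_size=20, gens=30):
    """Elitist GA over binary input-selection masks."""
    pop = [[random.randint(0, 1) for _ in range(N_VARS)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)    # one-point crossover
            cut = random.randrange(1, N_VARS)
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:           # occasional bit-flip mutation
                child[random.randrange(N_VARS)] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

    Selecting the informative inputs and discarding the noisy ones is the same effect the paper reports when the GA-chosen input variables raise the ANN's forecast accuracy.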

  11. Predicting the Direction of Stock Market Index Movement Using an Optimized Artificial Neural Network Model

    PubMed Central

    Qiu, Mingyue; Song, Yu

    2016-01-01

    In the business sector, it has always been a difficult task to predict the exact daily price of the stock market index; hence, there is a great deal of research being conducted regarding the prediction of the direction of stock price index movement. Many factors such as political events, general economic conditions, and traders’ expectations may have an influence on the stock market index. There are numerous research studies that use similar indicators to forecast the direction of the stock market index. In this study, we compare two basic types of input variables to predict the direction of the daily stock market index. The main contribution of this study is the ability to predict the direction of the next day’s price of the Japanese stock market index by using an optimized artificial neural network (ANN) model. To improve the prediction accuracy of the trend of the stock market index in the future, we optimize the ANN model using genetic algorithms (GA). We demonstrate and verify the predictability of stock price direction by using the hybrid GA-ANN model and then compare the performance with prior studies. Empirical results show that the Type 2 input variables can generate a higher forecast accuracy and that it is possible to enhance the performance of the optimized ANN model by selecting input variables appropriately. PMID:27196055

  12. Improving nitrogen management via a regional management plan for Chinese rice production

    NASA Astrophysics Data System (ADS)

    Wu, Liang; Chen, Xinping; Cui, Zhenling; Wang, Guiliang; Zhang, Weifeng

    2015-09-01

    A lack of basic information on optimal nitrogen (N) management often results in over- or under-application of N fertilizer in small-scale intensive rice farming. Here, we present a new database of N input from a survey of 6611 small-scale rice farmers and rice yield in response to added N in 1177 experimental on-farm tests across eight agroecological subregions of China. This database enables us to evaluate N management by farmers and develop an optimal approach to regional N management. We also investigated grain yield, N application rate, and estimated greenhouse gas (GHG) emissions in comparison to N application and farming practices. Across all farmers, the average N application rate, weighted by the area of rice production in each subregion, was 210 kg ha-1 and ranged from 30 to 744 kg ha-1 across fields and from 131 to 316 kg ha-1 across regions. The regionally optimal N rate (RONR) determined from the experiments averaged 167 kg ha-1 and varied from 114 to 224 kg N ha-1 for the different regions. If these RONR were widely adopted in China, approximately 56% of farms would reduce their use of N fertilizer, and approximately 33% would increase their use of N fertilizer. As a result, grain yield would increase by 7.4% from 7.14 to 7.67 Mg ha-1, and the estimated GHG emissions would be reduced by 11.1% from 1390 to 1236 kg carbon dioxide (CO2) eq Mg-1 grain. These results suggest that to achieve the goals of improvement in regional yield and sustainable environmental development, regional N use should be optimized among N-poor and N-rich farms and regions in China.
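
    The percentage changes reported above can be checked directly from the figures in the abstract (yield rising from 7.14 to 7.67 Mg ha-1, emissions falling from 1390 to 1236 kg CO2 eq Mg-1 grain):

```python
# Arithmetic check of the reported changes under the regionally optimal
# N rate, using only the numbers given in the abstract.

def pct_change(old, new):
    return 100.0 * (new - old) / old

yield_gain = pct_change(7.14, 7.67)        # grain yield, Mg ha-1
ghg_change = pct_change(1390.0, 1236.0)    # kg CO2 eq per Mg grain
print(round(yield_gain, 1), round(ghg_change, 1))
```

    Both computed values match the abstract's stated 7.4% yield increase and 11.1% emissions reduction.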

  13. A critical analysis of species selection and high vs. low-input silviculture on establishment success and early productivity of model short-rotation wood-energy cropping systems

    DOE PAGES

    Fischer, M.; Kelley, A. M.; Ward, E. J.; ...

    2017-02-03

    Most research on bioenergy short rotation woody crops (SRWC) has been dedicated to the genera Populus and Salix. These species generally require relatively high-input culture, including intensive weed competition control, which increases costs and environmental externalities. Widespread native early successional species, characterized by high productivity and good coppicing ability, may be better adapted to local environmental stresses and therefore could offer alternative low-input bioenergy production systems. In order to test this concept, we established a three-year experiment comparing a widely-used hybrid poplar (Populus nigra × P. maximowiczii, clone ‘NM6’) to two native species, American sycamore (Platanus occidentalis L.) and tuliptree (Liriodendron tulipifera L.), grown under contrasting weed and pest control at a coastal plain site in eastern North Carolina, USA. Mean cumulative aboveground wood production was significantly greater in sycamore, with yields of 46.6 Mg ha-1 under high-input and 32.7 Mg ha-1 under low-input culture, the latter rivaling the high-input NM6 yield of 32.9 Mg ha-1. NM6 under low-input management provided a noncompetitive yield of 6.2 Mg ha-1. We also found that sycamore showed superiority in survival, biomass increment, weed resistance, treatment convergence, and within-stand uniformity. All are important characteristics for a bioenergy feedstock crop species, leading to reliable establishment and efficient biomass production. Poor performance in all traits was found for tuliptree, with a maximum yield of 1.2 Mg ha-1, suggesting this native species is a poor choice for SRWC. We conclude that careful species selection beyond the conventionally used genera may enhance reliability and decrease negative environmental impacts of the bioenergy biomass production sector.

  14. A critical analysis of species selection and high vs. low-input silviculture on establishment success and early productivity of model short-rotation wood-energy cropping systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, M.; Kelley, A. M.; Ward, E. J.

Most research on bioenergy short rotation woody crops (SRWC) has been dedicated to the genera Populus and Salix. These species generally require relatively high-input culture, including intensive weed competition control, which increases costs and environmental externalities. Widespread native early successional species, characterized by high productivity and good coppicing ability, may be better adapted to local environmental stresses and therefore could offer alternative low-input bioenergy production systems. To test this concept, we established a three-year experiment comparing a widely used hybrid poplar (Populus nigra × P. maximowiczii, clone ‘NM6’) to two native species, American sycamore (Platanus occidentalis L.) and tuliptree (Liriodendron tulipifera L.), grown under contrasting weed and pest control at a coastal plain site in eastern North Carolina, USA. Mean cumulative aboveground wood production was significantly greater in sycamore, with yields of 46.6 Mg ha⁻¹ under high-input and 32.7 Mg ha⁻¹ under low-input culture, the latter rivaling the high-input NM6 yield of 32.9 Mg ha⁻¹. NM6 under low-input management provided a noncompetitive yield of 6.2 Mg ha⁻¹. We also found that sycamore showed superiority in survival, biomass increment, weed resistance, treatment convergence, and within-stand uniformity. All are important characteristics for a bioenergy feedstock crop species, leading to reliable establishment and efficient biomass production. Poor performance in all traits was found for tuliptree, with a maximum yield of 1.2 Mg ha⁻¹, suggesting this native species is a poor choice for SRWC. We conclude that careful species selection beyond the conventionally used genera may enhance reliability and decrease negative environmental impacts of the bioenergy biomass production sector.

  15. Optimal reorientation of asymmetric underactuated spacecraft using differential flatness and receding horizon control

    NASA Astrophysics Data System (ADS)

    Cai, Wei-wei; Yang, Le-ping; Zhu, Yan-wei

    2015-01-01

This paper presents a novel method integrating nominal trajectory optimization and tracking for the reorientation control of an underactuated spacecraft with only two available control torque inputs. By employing a pseudo input along the uncontrolled axis, the flatness property of a general underactuated spacecraft is extended explicitly, by which the reorientation trajectory optimization problem is formulated in the flat output space with all the differential constraints eliminated. The flat output optimization problem is then transformed into a nonlinear programming problem via the Chebyshev pseudospectral method, which is improved by conformal mapping and barycentric rational interpolation techniques to overcome the adverse effects of the differentiation matrix's ill-conditioning on numerical accuracy. Treating the trajectory tracking control as a state regulation problem, we develop a robust closed-loop tracking control law using the receding-horizon control method and compute the feedback control at each control cycle rapidly via the differential transformation method. Numerical simulation results show that the proposed control scheme is feasible and effective for the reorientation maneuver.
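The Chebyshev pseudospectral step above discretizes the flat outputs at Chebyshev nodes and differentiates them with a differentiation matrix. As a point of reference, here is a minimal sketch of the standard, unimproved Chebyshev differentiation matrix in the spirit of Trefethen's classic construction; the paper's conformal-map and barycentric refinements address the ill-conditioning of exactly this baseline.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and Gauss-Lobatto nodes x
    (the classic construction; not the paper's improved variant)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)            # Chebyshev-Gauss-Lobatto nodes
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))     # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                          # diagonal: negative row sums
    return D, x

# Spectral accuracy check: differentiate sin(x) on [-1, 1]
D, x = cheb(16)
err = float(np.max(np.abs(D @ np.sin(x) - np.cos(x))))
```

At N = 16 the derivative of sin is reproduced to near machine precision; the conditioning of D degrades as N grows, which is what motivates the barycentric treatment mentioned in the abstract.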

  16. Efficient design of gain-flattened multi-pump Raman fiber amplifiers using least squares support vector regression

    NASA Astrophysics Data System (ADS)

    Chen, Jing; Qiu, Xiaojie; Yin, Cunyi; Jiang, Hao

    2018-02-01

An efficient method to design broadband gain-flattened Raman fiber amplifiers with multiple pumps is proposed based on least squares support vector regression (LS-SVR). A multi-input multi-output LS-SVR model is introduced to replace the complicated solving process of the nonlinear coupled Raman amplification equations. The proposed approach contains two stages: an offline training stage and an online optimization stage. During the offline stage, the LS-SVR model is trained. Owing to the good generalization capability of LS-SVR, the net gain spectrum can be obtained directly and accurately when any combination of pump wavelengths and powers is input to the well-trained model. During the online stage, we incorporate the LS-SVR model into a particle swarm optimization algorithm to find the optimal pump configuration. The design results demonstrate that the proposed method greatly shortens the computation time and enhances the efficiency of pump parameter optimization for Raman fiber amplifier design.
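The offline surrogate stage can be sketched in a few lines. Below is a hypothetical, bias-free LS-SVR (mathematically the kernel-ridge form of LS-SVR) trained on a toy stand-in for the coupled Raman equations; the function `toy_gain` and all parameter values are illustrative assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

class LSSVR:
    """Bias-free LS-SVR: solve (K + I/gamma) alpha = Y in closed form.
    Multiple outputs come for free because the solve is linear in Y."""
    def __init__(self, gamma, sigma):
        self.gamma, self.sigma = gamma, sigma
    def fit(self, X, Y):
        self.X = X
        K = rbf_kernel(X, X, self.sigma)
        self.alpha = np.linalg.solve(K + np.eye(len(X)) / self.gamma, Y)
        return self
    def predict(self, Xq):
        return rbf_kernel(Xq, self.X, self.sigma) @ self.alpha

def toy_gain(p):
    """Hypothetical stand-in for solving the amplification equations:
    maps two pump powers to three gain-spectrum samples."""
    return np.stack([np.sin(p[:, 0]) + p[:, 1],
                     p[:, 0] * p[:, 1],
                     np.cos(p[:, 1])], axis=1)

Xtrain = rng.uniform(0.0, 2.0, size=(200, 2))
model = LSSVR(gamma=1e4, sigma=0.5).fit(Xtrain, toy_gain(Xtrain))
Xtest = rng.uniform(0.2, 1.8, size=(50, 2))
err = float(np.abs(model.predict(Xtest) - toy_gain(Xtest)).max())
```

In the online stage, a particle swarm (or any derivative-free optimizer) would then query `model.predict` instead of the expensive amplification equations.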

  17. Multi-input multioutput orthogonal frequency division multiplexing radar waveform design for improving the detection performance of space-time adaptive processing

    NASA Astrophysics Data System (ADS)

    Wang, Hongyan

    2017-04-01

This paper addresses the waveform optimization problem for improving the detection performance of multi-input multioutput (MIMO) orthogonal frequency division multiplexing (OFDM) radar-based space-time adaptive processing (STAP) in complex environments. The waveform optimization problem, subject to a constant modulus constraint, is derived by maximizing the output signal-to-interference-plus-noise ratio (SINR). To tackle the resultant nonlinear and complicated optimization issue, a diagonal loading-based method is proposed to reformulate it as a semidefinite programming problem, which can be solved very efficiently. The optimized waveform thereby maximizes the output SINR of MIMO-OFDM so that the detection performance of STAP can be improved. The simulation results show that the proposed method can improve the output SINR considerably as compared with uncorrelated waveforms and the existing MIMO-based STAP method.

  18. Investigation, development and application of optimal output feedback theory. Volume 2: Development of an optimal, limited state feedback outer-loop digital flight control system for 3-D terminal area operation

    NASA Technical Reports Server (NTRS)

    Broussard, J. R.; Halyo, N.

    1984-01-01

    This report contains the development of a digital outer-loop three dimensional radio navigation (3-D RNAV) flight control system for a small commercial jet transport. The outer-loop control system is designed using optimal stochastic limited state feedback techniques. Options investigated using the optimal limited state feedback approach include integrated versus hierarchical control loop designs, 20 samples per second versus 5 samples per second outer-loop operation and alternative Type 1 integration command errors. Command generator tracking techniques used in the digital control design enable the jet transport to automatically track arbitrary curved flight paths generated by waypoints. The performance of the design is demonstrated using detailed nonlinear aircraft simulations in the terminal area, frequency domain multi-input sigma plots, frequency domain single-input Bode plots and closed-loop poles. The response of the system to a severe wind shear during a landing approach is also presented.

  19. Optimal Frequency-Domain System Realization with Weighting

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Maghami, Peiman G.

    1999-01-01

    Several approaches are presented to identify an experimental system model directly from frequency response data. The formulation uses a matrix-fraction description as the model structure. Frequency weighting such as exponential weighting is introduced to solve a weighted least-squares problem to obtain the coefficient matrices for the matrix-fraction description. A multi-variable state-space model can then be formed using the coefficient matrices of the matrix-fraction description. Three different approaches are introduced to fine-tune the model using nonlinear programming methods to minimize the desired cost function. The first method uses an eigenvalue assignment technique to reassign a subset of system poles to improve the identified model. The second method deals with the model in the real Schur or modal form, reassigns a subset of system poles, and adjusts the columns (rows) of the input (output) influence matrix using a nonlinear optimizer. The third method also optimizes a subset of poles, but the input and output influence matrices are refined at every optimization step through least-squares procedures.
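The core numerical step described above, a frequency-weighted least-squares solve for the matrix-fraction coefficients, reduces to scaling rows of the regression problem. A generic hedged sketch follows; the design matrix here is a toy polynomial basis over frequency, not the actual matrix-fraction regressor.

```python
import numpy as np

def weighted_lstsq(A, b, w):
    """Solve min_x || diag(sqrt(w)) (A x - b) ||_2 by row scaling.
    Exponential frequency weighting simply means w_k = exp(-a * omega_k)."""
    sw = np.sqrt(w)
    x, *_ = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)
    return x

# Toy fit of coefficients to noiseless "frequency response" samples,
# weighting low frequencies more heavily than high ones
omega = np.linspace(0.1, 10.0, 100)
A = np.column_stack([np.ones_like(omega), omega, omega ** 2])
x_true = np.array([2.0, -0.5, 0.1])
b = A @ x_true
x_hat = weighted_lstsq(A, b, np.exp(-0.3 * omega))
```

With noiseless data the weighting does not change the exact answer; with noisy frequency response data it shifts the fit quality toward the heavily weighted band, which is the role exponential weighting plays in the identification method above.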

  20. Chaos control in solar fed DC-DC boost converter by optimal parameters using nelder-mead algorithm powered enhanced BFOA

    NASA Astrophysics Data System (ADS)

    Sudhakar, N.; Rajasekar, N.; Akhil, Saya; Jyotheeswara Reddy, K.

    2017-11-01

The boost converter is the most desirable DC-DC power converter for renewable energy applications owing to its favorable continuous input current characteristic. On the other hand, these DC-DC converters, as practical nonlinear systems, are prone to several types of nonlinear phenomena including bifurcation, quasiperiodicity, intermittency and chaos. These undesirable effects have to be controlled to maintain normal periodic operation of the converter and to ensure its stability. This paper presents an effective solution to control chaos in a solar-fed DC-DC boost converter, since the converter experiences a wide range of input power variation that leads to chaotic phenomena. Chaos control is achieved using optimal circuit parameters obtained through the Nelder-Mead-enhanced Bacterial Foraging Optimization Algorithm. The optimization renders suitable parameters in minimal computational time. The results are compared with those of traditional methods and show that the proposed system ensures operation of the converter within the controllable region.
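As an illustration of the Nelder-Mead half of the hybrid optimizer, the sketch below minimizes a hypothetical smooth stand-in cost over two dimensionless circuit parameters using SciPy's simplex implementation; in the paper the real cost comes from simulating the converter, and the BFOA coupling is omitted here.

```python
import numpy as np
from scipy.optimize import minimize

def cost(p):
    """Hypothetical stand-in for a stability cost over two circuit
    parameters (dimensionless L, C); purely illustrative."""
    L, C = p
    return (L - 1.2) ** 2 + 2.0 * (C - 0.8) ** 2 + 0.01 * np.sin(5.0 * L) ** 2

res = minimize(cost, x0=[2.0, 2.0], method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-6})
```

The simplex method needs no gradients, which is why it pairs naturally with simulation-based costs like the converter's chaotic/periodic classification.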

  1. 77 FR 38837 - Medicare Program; Meeting of the Medicare Economic Index Technical Advisory Panel

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-29

... weights, price-measurement proxies, and productivity adjustment. This meeting is open to the public in... productivity adjustment. For more information on the Panel, see the October 7, 2011 Federal Register (76 FR... recommendations regarding the MEI's inputs, input weights, price-measurement proxies, and the productivity...

  2. Long-term productivity in traditional, organic and low-input management systems of the Upper Midwest

    USDA-ARS?s Scientific Manuscript database

    Traditional cropping practices in the Upper Midwest are marked by low-diversity and high tillage disturbance. Eight years of production were evaluated to determine potential benefits of adopting low-input and organic management practices on system productivity. Increased crop rotation diversity, red...

  3. The Effects of Manure and Nitrogen Fertilizer Applications on Soil Organic Carbon and Nitrogen in a High-Input Cropping System

    PubMed Central

    Ren, Tao; Wang, Jingguo; Chen, Qing; Zhang, Fusuo; Lu, Shuchang

    2014-01-01

With the goal of improving N fertilizer management to maximize soil organic carbon (SOC) storage and minimize N losses in a high-intensity cropping system, a six-year greenhouse vegetable experiment was conducted from 2004 to 2010 in Shouguang, northern China. Treatments tested the effects of organic manure and N fertilizer on SOC, the total N (TN) pool and annual apparent N losses. The results demonstrated that SOC and TN concentrations in the 0-10 cm soil layer decreased significantly without organic manure and mineral N applications, primarily because of the decomposition of stable C. Increasing C inputs through wheat straw and chicken manure incorporation could not increase SOC pools over the 4-year duration of the experiment. In contrast to the organic manure treatment, the SOC and TN pools were not increased with the combination of organic manure and N fertilizer. However, the soil labile carbon fractions increased significantly when both chicken manure and N fertilizer were applied together. Additionally, lower optimized N fertilizer inputs did not decrease SOC and TN accumulation compared with conventional N applications. Although the annual apparent N losses for the optimized N treatment were significantly lower than those for the conventional N treatment, the unchanged SOC over the past 6 years might limit N storage in the soil, and more surplus N was lost to the environment. Consequently, optimized N fertilizer inputs according to root-zone N management did not influence the accumulation of SOC and TN in soil but were beneficial in reducing apparent N losses. N fertilizer management in a greenhouse cropping system should not only identify how to reduce N fertilizer input but should also be more attentive to improving soil fertility through better management of organic manure. PMID:24830463

  4. A Neural Network Aero Design System for Advanced Turbo-Engines

    NASA Technical Reports Server (NTRS)

    Sanz, Jose M.

    1999-01-01

An inverse design method calculates the blade shape that produces a prescribed input pressure distribution. By controlling this input pressure distribution, the aerodynamic design objectives can easily be met. Because of the intrinsic relationship between pressure distribution and airfoil physical properties, a neural network can be trained to choose the optimal pressure distribution that would meet a set of physical requirements. Neural network systems have been attempted in the context of direct design methods. From properties ascribed to a set of blades, the neural network is trained to infer the properties of an 'interpolated' blade shape. The problem is that, especially in transonic regimes where we deal with intrinsically nonlinear and ill-posed problems, small perturbations of the blade shape can produce very large variations of the flow parameters. It is very unlikely that, under these circumstances, a neural network will be able to find the proper solution. The unique situation in the present method is that the neural network can be trained to extract the required input pressure distribution from a database of pressure distributions, while the inverse method will still compute the exact blade shape that corresponds to this 'interpolated' input pressure distribution. In other words, the interpolation process is transferred to a smoother problem, namely, finding what pressure distribution would produce the required flow conditions; once this is done, the inverse method will compute the exact solution for this problem. The use of a neural network is, in this context, highly related to the use of proper optimization techniques. The optimization is used essentially as an automation procedure to force the input pressure distributions to achieve the required aero and structural design parameters. A multilayered feedforward network with back-propagation is used to train the system for pattern association and classification.
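A minimal sketch of the kind of multilayered feedforward network with back-propagation described above, trained on a synthetic pattern-association task; the data, architecture and hyperparameters are illustrative assumptions, not the NASA design database.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pattern-association task standing in for the pressure-distribution
# database: map 4-dim "requirement" vectors to 3-dim smooth targets.
X = rng.uniform(-1.0, 1.0, size=(256, 4))
Y = np.stack([np.tanh(X[:, 0] + X[:, 1]),
              X[:, 2] * X[:, 3],
              0.5 * X[:, 0]], axis=1)

# One hidden layer, tanh activation, full-batch gradient-descent backprop
W1 = rng.normal(0.0, 0.5, (4, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 3)); b2 = np.zeros(3)
lr, losses = 0.1, []
for epoch in range(3000):
    H = np.tanh(X @ W1 + b1)            # forward pass
    P = H @ W2 + b2
    E = P - Y                            # error signal
    losses.append(float((E ** 2).mean()))
    dH = (E @ W2.T) * (1.0 - H ** 2)     # backpropagate through tanh
    W2 -= lr * (H.T @ E) / len(X); b2 -= lr * E.mean(0)
    W1 -= lr * (X.T @ dH) / len(X); b1 -= lr * dH.mean(0)
```

The training loss falls steadily, which is all a sketch like this can claim; the abstract's point is that the network only has to learn the smooth map from requirements to pressure distributions, with the inverse solver handling the ill-posed geometry.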

  5. Nonlinear optimal control for the synchronization of chaotic and hyperchaotic finance systems

    NASA Astrophysics Data System (ADS)

    Rigatos, G.; Siano, P.; Loia, V.; Ademi, S.; Ghosh, T.

    2017-11-01

    It is possible to make specific finance systems get synchronized to other finance systems exhibiting chaotic and hyperchaotic dynamics, by applying nonlinear optimal (H-infinity) control. This signifies that chaotic behavior can be generated in finance systems by exerting a suitable control input. Actually, a lead financial system is considered which exhibits inherently chaotic dynamics. Moreover, a follower finance system is introduced having parameters in its model that inherently prohibit the appearance of chaotic dynamics. Through the application of a suitable nonlinear optimal (H-infinity) control input it is proven that the follower finance system can replicate the chaotic dynamics of the lead finance system. By applying Lyapunov analysis it is proven that asymptotically the follower finance system gets synchronized with the lead system and that the tracking error between the state variables of the two systems vanishes.
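A much-simplified, discrete-time illustration of the lead-follower idea: the follower's control input cancels its own non-chaotic dynamics and injects the leader's chaotic map, so the follower replicates the leader's chaotic trajectory. This is exact-cancellation feedback on toy logistic maps, not the paper's robust H-infinity design.

```python
import numpy as np

f = lambda x: 4.0 * x * (1.0 - x)   # lead system: chaotic logistic map (r = 4)
g = lambda y: 2.5 * y * (1.0 - y)   # follower: inherently non-chaotic (r = 2.5)

x, y, errs = 0.3, 0.9, []
for n in range(50):
    u = f(x) - g(y)                 # control input: cancel own dynamics and
    y = g(y) + u                    # inject the leader's map
    x = f(x)
    errs.append(abs(x - y))
```

Because the control is recomputed from the leader's current state at every step, the synchronization error stays at rounding level despite the leader's chaos; the H-infinity design in the paper achieves this robustly without exact model cancellation.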

  6. Decision & Management Tools for DNAPL Sites: Optimization of Chlorinated Solvent Source and Plume Remediation Considering Uncertainty

    DTIC Science & Technology

    2010-09-01

differentiated between source codes and input/output files. The text makes references to a REMChlor-GoldSim model. The text also refers to the REMChlor...To the extent possible, the instructions should be accurate and precise. The documentation should differentiate between describing what is actually...Windows XP operating system. Model Input Parameters: The input parameters were identical to those utilized and reported by CDM (see Table I from

  7. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahlfeld, R., E-mail: r.ahlfeld14@imperial.ac.uk; Belkouchi, B.; Montomoli, F.

    2016-09-01

A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, which was previously only introduced as a tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.

  8. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    NASA Astrophysics Data System (ADS)

    Ahlfeld, R.; Belkouchi, B.; Montomoli, F.

    2016-09-01

A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, which was previously only introduced as a tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.
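The "handful of matrix operations on the Hankel matrix of moments" can be made concrete. The sketch below recovers Gaussian quadrature nodes and weights from raw moments via a Cholesky factorization and a Jacobi eigenproblem, the classical Golub-Welsch route; it is a simplified illustration of the idea, not the SAMBA code.

```python
import numpy as np

def quad_from_moments(m, n):
    """n-point Gaussian quadrature from raw moments m[0..2n]:
    Hankel matrix -> Cholesky -> three-term recurrence -> Jacobi eigenproblem."""
    H = np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)])
    R = np.linalg.cholesky(H).T                  # upper triangular, H = R.T @ R
    a = np.zeros(n)
    b = np.zeros(max(n - 1, 0))
    for k in range(n):
        a[k] = R[k, k + 1] / R[k, k] - (R[k - 1, k] / R[k - 1, k - 1] if k > 0 else 0.0)
        if k > 0:
            b[k - 1] = R[k, k] / R[k - 1, k - 1]
    J = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)   # symmetric Jacobi matrix
    nodes, V = np.linalg.eigh(J)
    weights = m[0] * V[0, :] ** 2                      # first eigenvector components
    return nodes, weights

# Standard normal raw moments 1, 0, 1, 0, 3 give the 2-point rule at +/-1
nodes, weights = quad_from_moments([1.0, 0.0, 1.0, 0.0, 3.0], 2)
```

For the standard normal this reproduces the two-point rule at ±1 with weights 1/2, so the second moment integrates exactly; the positivity condition on the moment determinant quoted in the abstract is precisely what makes the Cholesky step well defined.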

  9. Design principles and operating principles: the yin and yang of optimal functioning.

    PubMed

    Voit, Eberhard O

    2003-03-01

Metabolic engineering has as a goal the improvement of yield of desired products from microorganisms and cell lines. This goal has traditionally been approached with experimental biotechnological methods, but it is becoming increasingly popular to precede the experimental phase by a mathematical modeling step that allows objective pre-screening of possible improvement strategies. The models are either linear and represent the stoichiometry and flux distribution in pathways, or they are non-linear and account for the full kinetic behavior of the pathway, which is often significantly affected by regulatory signals. Linear flux analysis is simpler and requires less input information than a full kinetic analysis, and the question arises whether the consideration of non-linearities is really necessary for devising optimal strategies for yield improvements. The article analyzes this question with a generic, representative pathway. It shows that flux split ratios, which are the key criterion for linear flux analysis, are essentially sufficient for unregulated, but not for regulated branch points. The interrelationships between regulatory design on the one hand and optimal patterns of operation on the other suggest the investigation of operating principles that complement design principles, just as a user's manual complements the hardwiring of electronic equipment.

  10. Inverse Diffusion Curves Using Shape Optimization.

    PubMed

    Zhao, Shuang; Durand, Fredo; Zheng, Changxi

    2018-07-01

    The inverse diffusion curve problem focuses on automatic creation of diffusion curve images that resemble user provided color fields. This problem is challenging since the 1D curves have a nonlinear and global impact on resulting color fields via a partial differential equation (PDE). We introduce a new approach complementary to previous methods by optimizing curve geometry. In particular, we propose a novel iterative algorithm based on the theory of shape derivatives. The resulting diffusion curves are clean and well-shaped, and the final image closely approximates the input. Our method provides a user-controlled parameter to regularize curve complexity, and generalizes to handle input color fields represented in a variety of formats.

  11. Transform methods for precision continuum and control models of flexible space structures

    NASA Technical Reports Server (NTRS)

    Lupi, Victor D.; Turner, James D.; Chun, Hon M.

    1991-01-01

    An open loop optimal control algorithm is developed for general flexible structures, based on Laplace transform methods. A distributed parameter model of the structure is first presented, followed by a derivation of the optimal control algorithm. The control inputs are expressed in terms of their Fourier series expansions, so that a numerical solution can be easily obtained. The algorithm deals directly with the transcendental transfer functions from control inputs to outputs of interest, and structural deformation penalties, as well as penalties on control effort, are included in the formulation. The algorithm is applied to several structures of increasing complexity to show its generality.

  12. Genetics-based control of a mimo boiler-turbine plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dimeo, R.M.; Lee, K.Y.

    1994-12-31

A genetic algorithm is used to develop an optimal controller for a nonlinear, multi-input/multi-output boiler-turbine plant. The algorithm is used to train a control system for the plant over a wide operating range in an effort to obtain better performance. The results of the genetic algorithm's controller are compared with those of a controller designed from the linearized plant model at a nominal operating point. Because the genetic algorithm is well suited to solving traditionally difficult optimization problems, it is found that the algorithm is capable of developing the controller based on input/output information only. This controller achieves a performance comparable to the standard linear quadratic regulator.
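A minimal real-coded genetic algorithm of the kind the abstract describes, with tournament selection, blend crossover, Gaussian mutation and elitism, is sketched below on a hypothetical stand-in fitness; the boiler-turbine simulation that would supply the real input/output-based fitness is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(gains):
    """Hypothetical stand-in cost around 'good' feedback gains; in the
    paper this would come from simulating the plant's input/output
    response under the candidate controller."""
    return -float(np.sum((gains - np.array([1.5, -0.7, 0.3])) ** 2))

pop = rng.uniform(-3.0, 3.0, size=(40, 3))
best = pop[0].copy()
for gen in range(150):
    f = np.array([fitness(p) for p in pop])
    if f.max() > fitness(best):
        best = pop[np.argmax(f)].copy()
    i, j = rng.integers(0, len(pop), size=(2, len(pop)))
    winners = pop[np.where(f[i] > f[j], i, j)]        # tournament selection
    mates = winners[rng.permutation(len(pop))]
    alpha = rng.uniform(0.0, 1.0, (len(pop), 1))
    pop = alpha * winners + (1.0 - alpha) * mates     # blend crossover
    pop += rng.normal(0.0, 0.05, pop.shape)           # Gaussian mutation
    pop[0] = best                                     # elitism
```

Because the fitness only needs function evaluations, the same loop works when the cost is a black-box plant simulation, which is the property the abstract highlights.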

  13. Optimal fixed-finite-dimensional compensator for Burgers' equation with unbounded input/output operators

    NASA Technical Reports Server (NTRS)

    Burns, John A.; Marrekchi, Hamadi

    1993-01-01

The problem of using reduced-order dynamic compensators to control a class of nonlinear parabolic distributed parameter systems was considered. Concentration was on a system with unbounded input and output operators governed by Burgers' equation. A linearized model was used to compute low-order finite-dimensional control laws by minimizing certain energy functionals. Then these laws were applied to the nonlinear model. Standard approaches to this problem employ model/controller reduction techniques in conjunction with linear quadratic Gaussian (LQG) theory. The approach used is based on the finite-dimensional Bernstein/Hyland optimal projection theory, which yields a fixed-finite-order controller.
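For context, the full-order LQR sub-problem underlying LQG designs can be solved with plain NumPy via the Hamiltonian matrix; the sketch below is the classical stable-invariant-subspace construction on a double-integrator example, not the Bernstein/Hyland fixed-order projection itself.

```python
import numpy as np

def lqr(A, B, Q, R):
    """LQR gain via the Hamiltonian matrix: the Riccati solution P is
    recovered from the stable invariant subspace (classical construction)."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]            # the n eigenvectors with Re(lambda) < 0
    X1, X2 = stable[:n, :], stable[n:, :]
    P = np.real(X2 @ np.linalg.inv(X1))  # Riccati solution P = X2 X1^{-1}
    K = Rinv @ B.T @ P
    return K, P

# Double integrator: known closed-form gain K = [1, sqrt(3)]
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K, P = lqr(A, B, np.eye(2), np.eye(1))
cl_eigs = np.linalg.eigvals(A - B @ K)
```

Fixed-order methods such as the optimal projection approach constrain the compensator dimension on top of this kind of full-order solution, trading the separation structure of LQG for a coupled set of design equations.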

  14. Simulation and optimization of a pulsating heat pipe using artificial neural network and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Jokar, Ali; Godarzi, Ali Abbasi; Saber, Mohammad; Shafii, Mohammad Behshad

    2016-11-01

In this paper, a novel approach is presented to simulate and optimize pulsating heat pipes (PHPs). The pulsating heat pipe setup used was designed and constructed for this study. Due to the lack of a general mathematical model for exact analysis of PHPs, a method based on natural algorithms has been applied for simulation and optimization. The simulator consists of a multilayer perceptron neural network, which is trained by experimental results obtained from our PHP setup. The results show that the complex behavior of PHPs can be successfully described by the nonlinear structure of this simulator. The input variables of the neural network are the input heat flux to the evaporator (q″), the filling ratio (FR) and the inclined angle (IA), and its output is the thermal resistance of the PHP. Finally, based upon the simulation results and considering the heat pipe's operating constraints, the optimum operating point of the system is obtained using a genetic algorithm (GA). The experimental results show that the optimum FR (38.25 %), input heat flux to the evaporator (39.93 W) and IA (55°) obtained from the GA are acceptable.

  15. Feedforward Inhibition and Synaptic Scaling – Two Sides of the Same Coin?

    PubMed Central

    Lücke, Jörg

    2012-01-01

    Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing. PMID:22457610

  16. Adjoint-Based Optimal Control on the Pitch Angle of a Single-Bladed Vertical-Axis Wind Turbine

    NASA Astrophysics Data System (ADS)

    Tsai, Hsieh-Chen; Colonius, Tim

    2017-11-01

Optimal control on the pitch angle of a NACA0018 single-bladed vertical-axis wind turbine (VAWT) is numerically investigated at a low Reynolds number of 1500. With a fixed tip-speed ratio, the input power is minimized and the mean tangential force is maximized over a specific time horizon. The immersed boundary method is used to simulate the two-dimensional, incompressible flow around a horizontal cross section of the VAWT. The problem is formulated as a PDE-constrained optimization problem and an iterative solution is obtained using adjoint-based conjugate gradient methods. By the end of the longest control horizon examined, the two controls end up with time-invariant pitch angles of about the same magnitude but with opposite signs. The results show that both cases lead to a reduction in the input power but not necessarily an enhancement in the mean tangential force. These reductions in input power are due to the removal of a power-damaging phenomenon that occurs when a vortex pair is captured by the blade in the upwind half of a cycle. This project was supported by the Caltech FLOWE center/Gordon and Betty Moore Foundation.

  17. Example-based human motion denoising.

    PubMed

    Lou, Hui; Chai, Jinxiang

    2010-01-01

With the proliferation of motion capture data, interest in removing noise and outliers from motion capture data has increased. In this paper, we introduce an efficient human motion denoising technique for the simultaneous removal of noise and outliers from input human motion data. The key idea of our approach is to learn a series of filter bases from precaptured motion data and use them along with robust statistics techniques to filter noisy motion data. Mathematically, we formulate the motion denoising process in a nonlinear optimization framework. The objective function measures the distance between the noisy input and the filtered motion in addition to how well the filtered motion preserves spatial-temporal patterns embedded in captured human motion data. Optimizing the objective function produces an optimal filtered motion that keeps spatial-temporal patterns in captured motion data. We also extend the algorithm to fill in missing values in input motion data. We demonstrate the effectiveness of our system by experimenting with both real and simulated motion data. We also show the superior performance of our algorithm by comparing it with three baseline algorithms and with state-of-the-art motion capture data processing software such as Vicon Blade.
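The optimization-based denoising idea can be illustrated with a drastically simplified quadratic version: a data-fidelity term plus a second-difference smoothness term, solved in closed form. The learned filter bases and robust statistics of the actual method are replaced here by a plain smoothness prior, and the signal is a hypothetical one-dimensional stand-in for a joint trajectory.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise(y, lam=50.0):
    """Minimize ||x - y||^2 + lam * ||D2 x||^2, where D2 is the second
    difference operator; the normal equations give a closed-form solution."""
    n = len(y)
    D2 = np.diff(np.eye(n), 2, axis=0)       # (n-2, n) second-difference matrix
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

t = np.linspace(0.0, 1.0, 200)
clean = np.sin(2.0 * np.pi * t)              # stand-in joint-angle trajectory
noisy = clean + rng.normal(0.0, 0.2, t.shape)
smooth = denoise(noisy)
rmse_noisy = float(np.sqrt(((noisy - clean) ** 2).mean()))
rmse_smooth = float(np.sqrt(((smooth - clean) ** 2).mean()))
```

A quadratic prior like this is fooled by outliers, which is exactly why the paper combines learned filter bases with robust statistics instead of a fixed smoothness term.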

  18. Optimal intravenous infusion to decrease the haematocrit level in patients with DHF infection

    NASA Astrophysics Data System (ADS)

    Handayani, D.; Nuraini, N.; Saragih, R.; Wijaya, K. P.; Naiborhu, J.

    2014-02-01

    The optimal control of an infusion model for Dengue Hemorrhagic Fever (DHF) infection is formulated here. The infusion model is presented in terms of the haematocrit level. The input control aims to normalize the haematocrit level and is expressed as infusion volume in mL/day. The stability near the equilibrium points is analyzed. Numerical simulation shows the dynamics of each infection compartment, which gives a description of the within-host dynamics of dengue virus. These results show in particular that infected compartments tend to vanish within ±15 days after the onset of the virus. In fact, without any control added, the haematocrit level will decrease but not reach the normal level. Therefore, effective haematocrit normalization should be done with the treatment control. Control treatment for a fixed time using a control input can bring the haematocrit level to the normal range of 42-47%. The optimal control in this paper is divided into three cases, i.e. fixed end point, constrained input, and tracking haematocrit state. Each case represents a different infection condition in the human body. However, all cases require that the haematocrit level be in the normal range at a fixed final time.
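    Purely as an illustration of treatment control driving haematocrit into the 42-47% band: the paper's within-host compartment model is not reproduced; the first-order dilution dynamics, gains, and initial value below are all hypothetical, and a simple proportional controller stands in for the optimal control.

```python
# Hypothetical dynamics: infusion volume (mL/day) dilutes an elevated
# haematocrit; a proportional controller targets the 42-47% normal band.
# All constants are invented for illustration.

def simulate(hct0=55.0, target=44.5, k_dilute=0.002, gain=5.0, days=30):
    hct, history = hct0, []
    for _ in range(days):
        infusion = max(0.0, gain * (hct - target))  # mL/day; none below target
        hct += -k_dilute * infusion * hct           # dilution of haematocrit
        history.append((infusion, hct))
    return history

history = simulate()
final_hct = history[-1][1]
print(f"final haematocrit: {final_hct:.1f}%")
```

    With these invented constants the controller lowers the haematocrit smoothly into the normal range without overshoot, mirroring the abstract's fixed-final-time requirement.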

  19. Feedforward inhibition and synaptic scaling--two sides of the same coin?

    PubMed

    Keck, Christian; Savin, Cristina; Lücke, Jörg

    2012-01-01

    Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing.

  20. Ultrasound-assisted extraction of hemicellulose and phenolic compounds from bamboo bast fiber powder

    PubMed Central

    Su, Jing; Vielnascher, Robert; Silva, Carla; Cavaco-Paulo, Artur; Guebitz, Georg M.

    2018-01-01

    Ultrasound-assisted extraction of hemicellulose and phenolic compounds from bamboo bast fibre powder was investigated. The effect of ultrasonic probe depth and power input parameters on the type and amount of products extracted was assessed. The results of input energy and radical formation correlated with the calculated values for the anti-nodal point (λ/4; 16.85 mm, maximum amplitude) of the ultrasonic wave in aqueous medium. Ultrasonic treatment at the optimum probe depth of 15 mm improved the extraction efficiencies of hemicellulose and phenolic lignin compounds from bamboo bast fibre powder 2.6-fold. LC-MS-TOF (liquid chromatography-mass spectrometry-time of flight) analysis indicated that ultrasound led to the extraction of coniferyl alcohol, sinapyl alcohol, vanillic acid, and cellobiose, in contrast to boiling water extraction. Under optimized conditions, ultrasound caused the formation of radicals, confirmed by the presence of (+)-pinoresinol, which resulted from the radical coupling of coniferyl alcohol. Ultrasound proved to be an efficient method for the extraction of hemicellulosic and phenolic compounds from woody bamboo without the addition of harmful solvents. PMID:29856764

  1. Electrical resistivity tomography to delineate greenhouse soil variability

    NASA Astrophysics Data System (ADS)

    Rossi, R.; Amato, M.; Bitella, G.; Bochicchio, R.

    2013-03-01

    Appropriate management of soil spatial variability is an important tool for optimizing farming inputs, increasing yield and reducing the environmental impact of field crops. Under greenhouses, several factors such as non-uniform irrigation and localized soil compaction can severely affect yield and quality. Additionally, if soil spatial variability is not taken into account, yield deficiencies are often compensated by extra volumes of crop inputs; as a result, over-irrigation and over-fertilization may occur in some parts of the field. Technology for spatially sound management of greenhouse crops is therefore needed to increase yield and quality and to address sustainability. In this experiment, 2D electrical resistivity tomography was used as an exploratory tool to characterize greenhouse soil variability and its relation to wild rocket yield. Soil resistivity matched biomass variation well (R2=0.70), and was linked to differences in soil bulk density (R2=0.90) and clay content (R2=0.77). Electrical resistivity tomography shows great potential in horticulture, where there is a growing demand for sustainability coupled with the necessity of stabilizing yield and product quality.

  2. Earth observing system. Output data products and input requirements, version 2.0. Volume 1: Instrument data product characteristics

    NASA Technical Reports Server (NTRS)

    Lu, Yun-Chi; Chang, Hyo Duck; Krupp, Brian; Kumar, Ravindra; Swaroop, Anand

    1992-01-01

    Information on Earth Observing System (EOS) output data products and input data requirements that has been compiled by the Science Processing Support Office (SPSO) at GSFC is presented. Since Version 1.0 of the SPSO Report was released in August 1991, there have been significant changes in the EOS program. In anticipation of a likely budget cut for the EOS Project, NASA HQ restructured the EOS program. An initial program consisting of two large platforms was replaced by plans for multiple, smaller platforms, and some EOS instruments were either deselected or descoped. Updated payload information reflecting the restructured EOS program superseding the August 1991 version of the SPSO report is included. This report has been expanded to cover information on non-EOS data products, and consists of three volumes (Volumes 1, 2, and 3). Volume 1 provides information on instrument outputs and input requirements. Volume 2 is devoted to Interdisciplinary Science (IDS) outputs and input requirements, including the 'best' and 'alternative' match analysis. Volume 3 provides information about retrieval algorithms, non-EOS input requirements of instrument teams and IDS investigators, and availability of non-EOS data products at seven primary Distributed Active Archive Centers (DAAC's).

  3. Recent developments in drying of food products

    NASA Astrophysics Data System (ADS)

    Valarmathi, T. N.; Sekar, S.; Purushothaman, M.; Sekar, S. D.; Rama Sharath Reddy, Maddela; Reddy, Kancham Reddy Naveen Kumar

    2017-05-01

    Drying is a dehydration process to preserve agricultural products for long-term use. The most common and cheapest method is open sun drying, in which the products are simply laid on the ground, roads, mats, roofs, etc. But open sun drying has some disadvantages: dependence on good weather, contamination by dust, consumption of a considerable quantity by birds and animals, slow drying rate, and damage due to strong winds and rain. To overcome these difficulties, solar dryers with a closed environment have been developed for drying agricultural products effectively. To obtain good quality food with reduced energy consumption, selection of an appropriate drying process and proper input parameters is essential. In recent years several researchers across the world have developed new drying systems for improving product quality, increasing the drying rate, decreasing energy consumption, etc. Some of the new systems are fluidized bed, vibrated fluidized bed, desiccant, microwave, vacuum, freeze, infrared, intermittent, electrohydrodynamic and hybrid dryers. In this review the most recent progress in the field of drying of agricultural food products, such as new methods, new products, and modeling and optimization techniques, is presented. Challenges and future directions are also highlighted. The review will be useful for new researchers entering this essential and ever-growing field of engineering.

  4. Assessment of bio-fuel options for solid oxide fuel cell applications

    NASA Astrophysics Data System (ADS)

    Lin, Jiefeng

    Rising concerns over inadequate petroleum supply, volatile crude oil prices, and the adverse environmental impacts of using fossil fuels have spurred the United States to promote domestic bio-fuel production and develop advanced energy systems such as fuel cells. The present dissertation analyzed bio-fuel applications in a solid oxide fuel cell-based auxiliary power unit from environmental, economic, and technological perspectives. Life cycle assessment integrated with thermodynamics was applied to evaluate the environmental impacts (e.g., greenhouse gas emission, fossil energy consumption) of producing bio-fuels from waste biomass. Landfill gas from municipal solid wastes and biodiesel from waste cooking oil are both suggested as promising bio-fuel options. A nonlinear optimization model was developed with a multi-objective optimization technique to analyze the economic aspect of biodiesel-ethanol-diesel ternary blends used in transportation sectors and capture the dynamic variables affecting bio-fuel production and applications (e.g., market disturbances, bio-fuel tax credits, policy changes, fuel specifications, and technological innovation). A single-tube catalytic reformer with rhodium/ceria-zirconia catalyst was used for autothermal reformation of various heavy hydrocarbon fuels (e.g., diesel, biodiesel, biodiesel-diesel, and biodiesel-ethanol-diesel) to produce a hydrogen-rich reformate stream suitable for use in solid oxide fuel cell systems. A customized mixing chamber was designed and integrated with the reformer to overcome the technical challenges of heavy hydrocarbon reformation. A thermodynamic analysis, based on total Gibbs free energy minimization, was implemented to optimize the operating environment for the reformation of various fuels. This was complemented by experimental investigations of fuel autothermal reformation. 
    A blend of 25% biodiesel, 10% ethanol, and 65% diesel was determined to be a viable fuel for a truck travelling with a diesel engine and for truck idling with a fuel cell auxiliary power unit system. The customized nozzle used for fuel vaporization and mixing achieved homogeneous atomization of input hydrocarbon fuels (e.g., diesel, biodiesel, diesel-biodiesel blend, and biodiesel-ethanol-diesel) and improved the performance of fuel catalytic reformation. Given the same operating conditions (reforming temperature, total oxygen content, water input flow, and gas hourly space velocity), the hydrocarbon reforming performance follows the trend diesel > biodiesel-ethanol-diesel > diesel-biodiesel blend > biodiesel (i.e., diesel catalytic reformation has the highest hydrogen production, the lowest risk of carbon formation, and the least possibility of hot-spot occurrence). These results provide important new insight into the use of bio-fuels and bio-fuel blends as a primary fuel source for solid oxide fuel cell applications.

  5. RBT-GA: a novel metaheuristic for solving the multiple sequence alignment problem

    PubMed Central

    Taheri, Javid; Zomaya, Albert Y

    2009-01-01

    Background Multiple Sequence Alignment (MSA) has always been an active area of research in Bioinformatics. MSA is mainly focused on discovering biologically meaningful relationships among different sequences or proteins in order to investigate the underlying main characteristics/functions. This information is also used to generate phylogenetic trees. Results This paper presents a novel approach, namely RBT-GA, to solve the MSA problem using a hybrid solution methodology combining the Rubber Band Technique (RBT) and the Genetic Algorithm (GA) metaheuristic. RBT is inspired by the behavior of an elastic Rubber Band (RB) on a plate with several poles, which is analogous to locations in the input sequences that could potentially be biologically related. A GA attempts to mimic the evolutionary processes of life in order to locate optimal solutions in an often very complex landscape. RBT-GA is a population-based optimization algorithm designed to find the optimal alignment for a set of input protein sequences. In this novel technique, each alignment answer is modeled as a chromosome consisting of several poles in the RBT framework. These poles resemble locations in the input sequences that are most likely to be correlated and/or biologically related. A GA-based optimization process improves these chromosomes gradually, yielding a set of mostly optimal answers for the MSA problem. Conclusion RBT-GA is tested with one of the well-known benchmark suites (BAliBASE 2.0) in this area. The obtained results show the superiority of the proposed technique even in the case of formidable sequences. PMID:19594869
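    A generic GA skeleton of the kind RBT-GA builds on can be sketched as follows. This is illustrative only: the RBT pole encoding and BAliBASE scoring are not reproduced, and the toy fitness simply counts characters matching a made-up target string.

```python
import random

TARGET = "ACCGTGGA"   # invented stand-in for an alignment objective
ALPHA = "ACGT-"       # residues plus a gap symbol

def fitness(chrom):
    """Toy fitness: number of positions matching the target."""
    return sum(a == b for a, b in zip(chrom, TARGET))

def mutate(chrom, rate=0.1):
    return "".join(random.choice(ALPHA) if random.random() < rate else c
                   for c in chrom)

def crossover(a, b):
    cut = random.randrange(1, len(a))   # single-point crossover
    return a[:cut] + b[cut:]

random.seed(1)
pop = ["".join(random.choice(ALPHA) for _ in TARGET) for _ in range(40)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                    # elitism: keep the best chromosomes
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(30)]
best = max(pop, key=fitness)
print(best, fitness(best))
```

    In RBT-GA the chromosome would instead encode RBT poles and the fitness would score the induced alignment; the selection/crossover/mutation loop is the common structure.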

  6. Optimizing the integrated efficiency for water resource utilization: based on an economic perspective

    NASA Astrophysics Data System (ADS)

    Gao, L.; Yoshikawa, S.; Kanae, S.

    2014-12-01

    At present, total global water withdrawal is increasing and water shortage will become a crucial issue around the world. By 2050, water withdrawal is projected to exceed the water that can be obtained from rivers and groundwater. One way of alleviating water scarcity is increasing the efficiency of water use without the development of additional water supplies. Previous literature on water use efficiency contains little discussion of temporal efficiency change with the corresponding characteristics of water resources. The main aim of this paper is to estimate the temporal efficiency of water use during 2011-2020 in order to propose how to use the limited water efficiently. This paper uses dynamic Data Envelopment Analysis to estimate efficiency, defined as the ratio of the sum of weighted outputs to the sum of weighted inputs. Our model uses the cost of agricultural production as input indices, the production value of agriculture as the output index, and water withdrawal as the temporal linkage. We mainly work on two problems: first, determining the water use efficiency values in each target country; second, adjusting the output values so that countries with inefficient water use become DEA-efficient. The results provide a scientific reference for rational allocation so that the sustainable use of water resources can be realized.
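    The DEA efficiency score described above (ratio of weighted outputs to weighted inputs) is conventionally computed per decision-making unit by a linear program. A minimal input-oriented CCR sketch, with invented single-input/single-output "country" data (production cost in, production value out), might look like:

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o.
    X: inputs (m x n), Y: outputs (s x n), n units.
    min theta  s.t.  sum_j l_j x_j <= theta * x_o,  sum_j l_j y_j >= y_o,  l >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n)
    c[0] = 1.0                                   # variables: [theta, l_1..l_n]
    A_in = np.hstack([-X[:, [o]], X])            # sum(l*x) - theta*x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])    # -sum(l*y) <= -y_o
    A = np.vstack([A_in, A_out])
    b = np.concatenate([np.zeros(m), -Y[:, o]])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

X = np.array([[10.0, 20.0, 30.0, 50.0]])   # cost of agricultural production (invented)
Y = np.array([[10.0, 30.0, 30.0, 40.0]])   # production value (invented)
effs = [dea_efficiency(X, Y, o) for o in range(4)]
print([round(e, 3) for e in effs])
```

    Unit 1 has the best output/input ratio and scores 1.0 (DEA-efficient); the others score below 1. The paper's dynamic model additionally links periods through water withdrawal, which this static sketch omits.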

  7. Automation of POST Cases via External Optimizer and "Artificial p2" Calculation

    NASA Technical Reports Server (NTRS)

    Dees, Patrick D.; Zwack, Mathew R.

    2017-01-01

    During early conceptual design of complex systems, speed and accuracy are often at odds with one another. While many characteristics of the design are fluctuating rapidly during this phase there is nonetheless a need to acquire accurate data from which to down-select designs as these decisions will have a large impact upon program life-cycle cost. Therefore enabling the conceptual designer to produce accurate data in a timely manner is tantamount to program viability. For conceptual design of launch vehicles, trajectory analysis and optimization is a large hurdle. Tools such as the industry standard Program to Optimize Simulated Trajectories (POST) have traditionally required an expert in the loop for setting up inputs, running the program, and analyzing the output. The solution space for trajectory analysis is in general non-linear and multi-modal requiring an experienced analyst to weed out sub-optimal designs in pursuit of the global optimum. While an experienced analyst presented with a vehicle similar to one which they have already worked on can likely produce optimal performance figures in a timely manner, as soon as the "experienced" or "similar" adjectives are invalid the process can become lengthy. In addition, an experienced analyst working on a similar vehicle may go into the analysis with preconceived ideas about what the vehicle's trajectory should look like which can result in sub-optimal performance being recorded. Thus, in any case but the ideal either time or accuracy can be sacrificed. In the authors' previous work a tool called multiPOST was created which captures the heuristics of a human analyst over the process of executing trajectory analysis with POST. However without the instincts of a human in the loop, this method relied upon Monte Carlo simulation to find successful trajectories. 
Overall the method has mixed results, and in the context of optimizing multiple vehicles it is inefficient in comparison to the method presented here. POST's internal optimizer functions like any other gradient-based optimizer. It has a specified variable to optimize whose value is represented as optval, a set of dependent constraints to meet with associated forms and tolerances whose value is represented as p2, and a set of independent variables known as the u-vector to modify in pursuit of optimality. Each of these quantities is calculated or manipulated at a certain phase within the trajectory. The optimizer is further constrained by the requirement that the input u-vector must result in a trajectory which proceeds through each of the prescribed events in the input file. For example, if the input u-vector causes the vehicle to crash before it can achieve the orbital parameters required for a parking orbit, then the run will fail without engaging the optimizer, and a p2 value of exactly zero is returned. This poses a problem, as this "non-connecting" region of the u-vector space is far larger than the "connecting" region which returns a non-zero value of p2 and can be worked on by the internal optimizer. Finding this connecting region, and more specifically the global optimum within it, has traditionally required the use of an expert analyst.
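    The "artificial p2" idea named in the title can be illustrated with a toy sketch (the actual POST interface and trajectory dynamics are not reproduced; the quadratic "trajectory" objective, connecting-region bounds, and coordinate-descent search below are all invented): outside the connecting region the true p2 is identically zero and carries no gradient information, so a graded surrogate lets an external optimizer descend into the connecting region.

```python
import numpy as np

TARGET = np.array([3.0, 3.0])   # invented "feasible" u-vector

def connects(u):
    """Toy connecting region: both controls within 1.0 of the target."""
    return bool(np.all(np.abs(u - TARGET) < 1.0))

def true_p2(u):
    # A failed (non-connecting) run returns exactly zero: no gradient info.
    return float(np.sum((u - TARGET) ** 2)) if connects(u) else 0.0

def artificial_p2(u):
    if connects(u):
        return true_p2(u)
    # Graded surrogate (>= 1) that decreases toward the connecting region.
    return 1.0 + float(np.sum((u - TARGET) ** 2))

def coordinate_descent(f, u, step=0.1, iters=300):
    for _ in range(iters):
        for i in range(len(u)):
            for d in (step, -step):
                trial = u.copy()
                trial[i] += d
                if f(trial) < f(u):
                    u = trial
    return u

u = coordinate_descent(artificial_p2, np.array([0.0, 0.0]))
print(u, connects(u))
```

    Minimizing `true_p2` directly from (0, 0) would stall on the flat zero plateau; the surrogate walks the search into the connecting region, after which the real constraint error can be driven down.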

  8. Robust fuel- and time-optimal control of uncertain flexible space structures

    NASA Technical Reports Server (NTRS)

    Wie, Bong; Sinha, Ravi; Sunkel, John; Cox, Ken

    1993-01-01

    The problem of computing open-loop, fuel- and time-optimal control inputs for flexible space structures in the face of modeling uncertainty is investigated. Robustified, fuel- and time-optimal pulse sequences are obtained by solving a constrained optimization problem subject to robustness constraints. It is shown that 'bang-off-bang' pulse sequences with a finite number of switchings provide a practical tradeoff among the maneuvering time, fuel consumption, and performance robustness of uncertain flexible space structures.

  9. Farm Management in Organic and Conventional Dairy Production Systems Based on Pasture in Southern Brazil and Its Consequences on Production and Milk Quality

    PubMed Central

    Kuhnen, Shirley; Stibuski, Rudinei Butka; Honorato, Luciana Aparecida; Pinheiro Machado Filho, Luiz Carlos

    2015-01-01

    Simple Summary This study describes the conventional high input (C-HI), conventional low input (C-LI), and organic low input (O-LI) pasture-based production systems used in Southern Brazil and their consequences for production and milk quality. C-HI farms were larger, with larger herds, annual pasture with higher inputs, and higher milk yield, whereas O-LI farms were smaller, with smaller herds and perennial pastures with the lowest inputs and milk yields; C-LI was in between. O-LI farms may contribute to ecosystem services, but low milk yield is a major concern. The hygienic and microbiological quality of the milk was poor on all farms and needs to be improved. Abstract Pasture-based dairy production is used widely on family dairy farms in Southern Brazil. This study investigates conventional high input (C-HI), conventional low input (C-LI), and organic low input (O-LI) pasture-based systems and their effects on the quantity and quality of the milk produced. We conducted technical site visits and interviews monthly over one year on 24 family farms (n = 8 per type). C-HI farms had the greatest total area (28.9 ha), the greatest percentage of area with annual pasture (38.7%), the largest number of lactating animals (26.2), and the greatest milk yield per cow (22.8 kg·day−1). O-LI farms had the largest perennial pasture area (52.3%), with the greatest botanical richness during all seasons. The area of perennial pasture was positively correlated with the number of species consumed by the animals (R2 = 0.74). Milk from O-LI farms had higher levels of fat and total solids only during the winter. The hygienic and microbiological quality of the milk was poor on all farms and needs to be improved. C-HI farms had high milk yield related to high inputs, C-LI had intermediate characteristics, and O-LI utilized year-round perennial pasture as a strategy to diminish the use of supplements in animal diets, an important aspect of ensuring production sustainability. PMID:26479369

  10. Preferences in Data Production Planning

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Brafman, Ronen; Pang, Wanlin

    2005-01-01

    This paper discusses the data production problem, which consists of transforming a set of (initial) input data into a set of (goal) output data. There are typically many choices among input data and processing algorithms, each leading to significantly different end products. To discriminate among these choices, the planner supports an input language that provides a number of constructs for specifying user preferences over data (and plan) properties. We discuss these preference constructs, how we handle them to guide search, and additional challenges in the area of preference management that this important application domain offers.

  11. Performance assessment of deterministic and probabilistic weather predictions for the short-term optimization of a tropical hydropower reservoir

    NASA Astrophysics Data System (ADS)

    Mainardi Fan, Fernando; Schwanenberg, Dirk; Alvarado, Rodolfo; Assis dos Reis, Alberto; Naumann, Steffi; Collischonn, Walter

    2016-04-01

    Hydropower is the most important electricity source in Brazil. During recent years, it accounted for 60% to 70% of the total electric power supply. Marginal costs of hydropower are lower than for thermal power plants; therefore, there is a strong economic motivation to maximize its share. On the other hand, hydropower depends on the availability of water, which has a natural variability. Its extremes lead to the risks of power production deficits during droughts and safety issues in the reservoir and downstream river reaches during flood events. One building block of the proper management of hydropower assets is the short-term forecast of reservoir inflows as input for an online, event-based optimization of its release strategy. While deterministic forecasts and optimization schemes are the established techniques for short-term reservoir management, the use of probabilistic ensemble forecasts and stochastic optimization techniques receives growing attention, and a number of studies have shown its benefits. The present work shows one of the first hindcasting and closed-loop control experiments for a multi-purpose hydropower reservoir in a tropical region in Brazil. The case study is the hydropower project (HPP) Três Marias, located in southeast Brazil. The HPP reservoir is operated with two main objectives: (i) hydroelectricity generation and (ii) flood control at Pirapora City located 120 km downstream of the dam. In the experiments, precipitation forecasts based on observed data, deterministic and probabilistic forecasts with 50 ensemble members of the ECMWF are used as forcing for the MGB-IPH hydrological model to generate streamflow forecasts over a period of 2 years. The online optimization depends on a deterministic and multi-stage stochastic version of a model predictive control scheme. Results for the perfect forecasts show the potential benefit of the online optimization and indicate a desired forecast lead time of 30 days. 
In comparison, the use of actual forecasts with shorter lead times of up to 15 days shows the practical benefit of actual operational data. It appears that the use of stochastic optimization combined with ensemble forecasts leads to a significantly higher level of flood protection without compromising the HPP's energy production.

  12. User's manual for the BNW-II optimization code for dry/wet-cooled power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braun, D.J.; Bamberger, J.A.

    1978-05-01

    This volume provides a listing of the BNW-II dry/wet ammonia heat rejection optimization code and is an appendix to Volume I which gives a narrative description of the code's algorithms as well as logic, input and output information.

  13. Three-input majority function as the unique optimal function for the bias amplification using nonlocal boxes

    NASA Astrophysics Data System (ADS)

    Mori, Ryuhei

    2016-11-01

    Brassard et al. [Phys. Rev. Lett. 96, 250401 (2006), 10.1103/PhysRevLett.96.250401] showed that shared nonlocal boxes with a CHSH (Clauser, Horne, Shimony, and Holt) probability greater than (3+√6)/6 yield trivial communication complexity. There still exists a gap with the maximum CHSH probability (2+√2)/4 achievable by quantum mechanics. It is an interesting open question to determine the exact threshold for the trivial communication complexity. Brassard et al.'s idea is based on recursive bias amplification by the three-input majority function. It was not obvious if another choice of function exhibits stronger bias amplification. We show that the three-input majority function is the unique optimal function, so that one cannot improve the threshold (3+√6)/6 by Brassard et al.'s bias amplification. In this work, protocols for computing the function used for the bias amplification are restricted to be nonadaptive protocols or a particular adaptive protocol inspired by Pawłowski et al.'s protocol for information causality [Nature (London) 461, 1101 (2009), 10.1038/nature08400]. We first show an adaptive protocol inspired by Pawłowski et al.'s protocol, and then show that the adaptive protocol improves upon nonadaptive protocols. Finally, we show that the three-input majority function is the unique optimal function for the bias amplification if we apply the adaptive protocol to each step of the bias amplification.
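    The elementary amplification step behind the construction is easy to verify numerically: if each of three independent input bits is correct with probability p > 1/2, their majority is correct with probability f(p) = p³ + 3p²(1−p), and iterating f drives the bias toward 1. This sketch shows only that step, not the nonlocal-box protocols analyzed in the paper.

```python
def majority3_correct(p):
    """Probability that the majority of three independent bits, each correct
    with probability p, is correct: all three right, or exactly two right."""
    return p**3 + 3 * p**2 * (1 - p)

p = 0.75
for _ in range(5):
    p = majority3_correct(p)   # recursive bias amplification
print(p)
```

    Note that f fixes 0, 1/2, and 1, so any p strictly above 1/2 is amplified toward 1 under iteration; the paper's (3+√6)/6 threshold arises from the noisy box-based protocols for computing the majority, not from f itself.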

  14. Flight instrument and telemetry response and its inversion

    NASA Technical Reports Server (NTRS)

    Weinberger, M. R.

    1971-01-01

    Mathematical models of rate gyros, servo accelerometers, pressure transducers, and telemetry systems were derived and their parameters were obtained from laboratory tests. Analog computer simulations were used extensively to verify their validity for fast and large input signals. An optimal inversion method was derived to reconstruct input signals from noisy output signals, and a computer program was prepared.

  15. Measuring circuit

    DOEpatents

    Sun, Shan C.; Chaprnka, Anthony G.

    1977-01-11

    An automatic gain control circuit functions to adjust the magnitude of an input signal supplied to a measuring circuit to a level within the dynamic range of the measuring circuit, while a log-ratio circuit adjusts the magnitude of the output signal from the measuring circuit to the level of the input signal and optimizes the signal-to-noise ratio performance of the measuring circuit.

  16. Statistical linearization for multi-input/multi-output nonlinearities

    NASA Technical Reports Server (NTRS)

    Lin, Ching-An; Cheng, Victor H. L.

    1991-01-01

    Formulas are derived for the computation of the random input-describing functions for MIMO nonlinearities; these straightforward and rigorous derivations are based on the optimal mean square linear approximation. The computations involve evaluations of multiple integrals. It is shown that, for certain classes of nonlinearities, multiple-integral evaluations are obviated and the computations are significantly simplified.

  17. TRANDESNF: A computer program for transonic airfoil design and analysis in nonuniform flow

    NASA Technical Reports Server (NTRS)

    Chang, J. F.; Lan, C. Edward

    1987-01-01

    The use of a transonic airfoil code for analysis, inverse design, and direct optimization of an airfoil immersed in a propfan slipstream is described. A summary of the theoretical method, program capabilities, input format, output variables, and program execution is given. Input data for sample test cases and the corresponding output are also provided.

  18. Positional glow curve simulation for thermoluminescent detector (TLD) system design

    NASA Astrophysics Data System (ADS)

    Branch, C. J.; Kearfott, K. J.

    1999-02-01

    Multi-element and thin-element dosimeters, variable heating rate schemes, and glow-curve analysis have been employed to improve environmental and personnel dosimetry using thermoluminescent detectors (TLDs). Detailed analysis of the effects of errors and optimization of techniques would be highly desirable. However, an understanding of the relationship between TL light production, light attenuation, and precise heating schemes is made difficult because of experimental challenges involved in measuring positional TL light production and temperature variations as a function of time. This work reports the development of a general-purpose computer code, thermoluminescent detector simulator, TLD-SIM, to simulate the heating of any TLD type using a variety of conventional and experimental heating methods including pulsed focused or unfocused lasers with Gaussian or uniform cross sections, planchet, hot gas, hot finger, optical, infrared, or electrical heating. TLD-SIM has been used to study the impact on TL light production of varying the input parameters, which include: detector composition, heat capacity, heat conductivity, physical size, and density; trapped electron density, the frequency factor of oscillation of electrons in the traps, and trap-conduction band potential energy difference; heating scheme source terms and heat transfer boundary conditions; and TL light scatter and attenuation coefficients. Temperature profiles and glow curves as a function of position and time, as well as the corresponding temporally and/or spatially integrated glow values, may be plotted while varying any of the input parameters. Examples illustrating TLD system functions, including glow curve variability, are presented. The flexible capabilities of TLD-SIM promise to enable improved TLD system design.

  19. Optical Sensing of Ecosystem Carbon Fluxes Combining Spectral Reflectance Indices with Solar Induced Fluorescence

    NASA Astrophysics Data System (ADS)

    Huemmrich, K. F.; Middleton, E.; Corp, L. A.; Campbell, P. K.; Kustas, W. P.

    2014-12-01

    Optical sampling of spectral reflectance and solar induced fluorescence provide information on the physiological status of vegetation that can be used to infer stress responses and estimates of production. Multiple repeated observations are required to observe the effects of changing environmental conditions on vegetation. This study examines the use of optical signals to determine inputs to a light use efficiency (LUE) model describing productivity of a cornfield where repeated observations of carbon flux, spectral reflectance and fluorescence were collected. Data were collected at the Optimizing Production Inputs for Economic and Environmental Enhancement (OPE3) fields (39.03°N, 76.85°W) at USDA Beltsville Agricultural Research Center. Agricultural Research Service researchers measured CO2 fluxes using eddy covariance methods throughout the growing season. Optical measurements were made from the nearby tower supporting the NASA FUSION sensors. The sensor system consists of two dual channel, upward and downward looking, spectrometers used to simultaneously collect high spectral resolution measurements of reflected and fluoresced light from vegetation canopies. Estimates of chlorophyll fluorescence, combined with measures of vegetation pigment content and the Photosynthetic Reflectance Index (PRI) derived from the spectral reflectance, are compared with CO2 fluxes over diurnal periods for multiple days. PRI detects changes in Xanthophyll cycle pigments using reflectance at 531 nm compared to a reference band at 570 nm. The relationships among the different optical measurements indicate that they are providing different types of information on the vegetation and that combinations of these measurements provide better retrievals of CO2 fluxes than any index alone.
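    The PRI referenced above is conventionally computed as the normalized difference of the two bands named in the abstract (531 nm and the 570 nm reference); the reflectance values in the example call below are invented for illustration.

```python
def pri(r531, r570):
    """PRI = (R531 - R570) / (R531 + R570), from reflectance at 531 nm
    and the 570 nm reference band."""
    return (r531 - r570) / (r531 + r570)

# Invented canopy reflectances; a lower R531 relative to the reference
# shifts PRI negative.
print(round(pri(0.048, 0.052), 4))   # -0.04
```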

  20. Optical Sensing of Ecosystem Carbon Fluxes Combining Spectral Reflectance Indices with Solar Induced Fluorescence

    NASA Astrophysics Data System (ADS)

    Huemmrich, K. F.; Corp, L.; Campbell, P. K.; Cook, B. D.; Middleton, E.; Cheng, Y.; Zhang, Q.; Russ, A.; Kustas, W. P.

    2013-12-01

    Optical sampling of spectral reflectance and solar induced fluorescence provides information on the physiological status of vegetation that can be used to infer stress responses and to estimate production. Repeated observations capture the effects of changing environmental conditions on vegetation. This study examines the use of optical signals to determine inputs to a light use efficiency (LUE) model describing productivity of a cornfield where repeated observations of carbon flux, spectral reflectance and fluorescence were collected. Data were collected at the Optimizing Production Inputs for Economic and Environmental Enhancement (OPE3) fields (39.03°N, 76.85°W) at the USDA Beltsville Agricultural Research Center. Agricultural Research Service researchers measured CO2 fluxes using eddy covariance methods throughout the growing season. Optical measurements were made from the nearby tower supporting the NASA FUSION sensors. This sensor system consists of two dual-channel, upward- and downward-looking spectrometers used to simultaneously collect high spectral resolution measurements of reflected and fluoresced light from vegetation canopies. Estimates of chlorophyll fluorescence, combined with measures of vegetation pigment content and the Photosynthetic Reflectance Index (PRI) derived from the spectral reflectance, are compared with CO2 fluxes over diurnal periods for multiple days. PRI detects changes in xanthophyll-cycle pigments using reflectance at 531 nm compared to a reference band at 570 nm. The relationships among the different optical measurements indicate that they provide different types of information on the vegetation and that combinations of these measurements yield better retrievals of CO2 fluxes than any index alone.

  1. The importance of long‐term experiments in agriculture: their management to ensure continued crop production and soil fertility; the Rothamsted experience

    PubMed Central

    Johnston, A. E.

    2018-01-01

    Summary Long‐term field experiments that test a range of treatments and are intended to assess the sustainability of crop production, and thus food security, must be managed actively to identify any treatment that is failing to maintain or increase yields. Once identified, carefully considered changes can be made to the treatment or management, and if they are successful yields will change. If suitable changes cannot be made to an experiment to ensure its continued relevance to sustainable crop production, then it should be stopped. Long‐term experiments have many other uses. They provide a field resource and samples for research on plant and soil processes and properties, especially those properties where change occurs slowly and affects soil fertility. Archived samples of all inputs and outputs are an invaluable source of material for future research, and data from current and archived samples can be used to develop models to describe soil and plant processes. Such changes and uses in the Rothamsted experiments are described, and demonstrate that with the appropriate crop, soil and management, acceptable yields can be maintained for many years, with either organic manure or inorganic fertilizers. Highlights Long‐term experiments demonstrate sustainability and increases in crop yield when managed to optimize soil fertility. Shifting individual response curves into coincidence increases understanding of the factors involved. Changes in inorganic and organic pollutants in archived crop and soil samples are related to inputs over time. Models describing soil processes are developed from current and archived soil data. PMID:29527119

  2. Quantitative multiphase model for hydrothermal liquefaction of algal biomass

    DOE PAGES

    Li, Yalin; Leow, Shijie; Fedders, Anna C.; ...

    2017-01-17

    Here, optimized incorporation of hydrothermal liquefaction (HTL, reaction in water at elevated temperature and pressure) within an integrated biorefinery requires accurate models to predict the quantity and quality of all HTL products. Existing models primarily focus on biocrude product yields with limited consideration for biocrude quality and aqueous, gas, and biochar co-products, and have not been validated with an extensive collection of feedstocks. In this study, HTL experiments (300 °C, 30 min) were conducted using 24 different batches of microalgae feedstocks with distinctive feedstock properties, which resulted in a wide range of biocrude (21.3–54.3 dry weight basis, dw%), aqueous (4.6–31.2 dw%), gas (7.1–35.6 dw%), and biochar (1.3–35.0 dw%) yields. Based on these results, a multiphase component additivity (MCA) model was introduced to predict yields and characteristics of the HTL biocrude product and aqueous, gas, and biochar co-products, with only feedstock biochemical (lipid, protein, carbohydrate, and ash) and elemental (C/H/N) composition as model inputs. Biochemical components were determined to distribute across biocrude product/HTL co-products as follows: lipids to biocrude; proteins to biocrude > aqueous > gas; carbohydrates to gas ≈ biochar > biocrude; and ash to aqueous > biochar. Modeled quality indicators included biocrude C/H/N contents, higher heating value (HHV), and energy recovery (ER); aqueous total organic carbon (TOC) and total nitrogen (TN) contents; and biochar carbon content. The model was validated with HTL data from the literature, the potential to expand the application of this modeling framework to include waste biosolids (e.g., wastewater sludge, manure) was explored, and future research needs for industrial application were identified.
    Ultimately, the MCA model represents a critical step towards the integration of cultivation models with downstream HTL and biorefinery operations to enable system-level optimization, valorization of co-product streams (e.g., through catalytic hydrothermal gasification and nutrient recovery), and the navigation of tradeoffs across the value chain.
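    The additivity idea behind such a component-additivity model can be sketched as a weighted sum of feedstock components. The distribution coefficients below are illustrative placeholders, not the fitted values from the study:

```python
# Hypothetical fractions of each biochemical component reporting to the
# biocrude phase (illustrative only; the study fits its own coefficients).
BIOCRUDE_FRACTION = {"lipid": 0.95, "protein": 0.45, "carbohydrate": 0.20, "ash": 0.0}

def biocrude_yield(composition):
    """Predict biocrude yield (dw%) as an additive, weighted sum of
    feedstock biochemical components (dw% of feedstock)."""
    return sum(BIOCRUDE_FRACTION[k] * v for k, v in composition.items())

# Hypothetical microalgae composition (dw%).
print(biocrude_yield({"lipid": 30, "protein": 40, "carbohydrate": 20, "ash": 10}))
```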

  3. Price Estimation Guidelines

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.; Aster, R. W.; Firnett, P. J.; Miller, M. A.

    1985-01-01

    The Improved Price Estimation Guidelines program, IPEG4, provides a comparatively simple yet relatively accurate estimate of the price of a manufactured product. IPEG4 processes user-supplied input data to estimate the price per unit of production. Input data include equipment cost, space required, labor cost, materials and supplies cost, utility expenses, and production volume on an industry-wide or process-wide basis.
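    The structure of such a price-per-unit estimate can be sketched as annualized costs divided by production volume. The cost categories follow the record, but the capital-recovery factor and all numbers below are assumptions, not IPEG4's actual coefficients:

```python
def unit_price(equipment, space, labor, materials, utilities, volume,
               capital_recovery=0.3):
    """Estimate price per unit of production from annualized costs.
    capital_recovery is an assumed annualization factor applied to the
    one-time equipment cost; other inputs are annual costs."""
    annual_cost = equipment * capital_recovery + space + labor + materials + utilities
    return annual_cost / volume

# Hypothetical inputs: $1M equipment, annual costs, 100,000 units/year.
print(unit_price(1_000_000, 50_000, 200_000, 100_000, 30_000, 100_000))
```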

  4. 77 FR 34050 - Medicare Program; Meeting of the Medicare Economic Index Technical Advisory Panel-June 25, 2012

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-08

    ... productivity adjustment. This meeting is open to the public in accordance with the Federal Advisory Committee...). The review will include the inputs, input weights, price-measurement proxies, and productivity... the index's productivity adjustment. II. Meeting Format This meeting is open to the public. There will...

  5. SRB Data and Information

    Atmospheric Science Data Center

    2017-01-13

    ... grid. Model inputs of cloud amounts and other atmospheric state parameters are also available in some of the data sets. Primary inputs to ... Analysis (SMOBA), an assimilation product from NOAA's Climate Prediction Center. SRB products are reformatted for the use of ...

  6. Sensitivity of Rainfall-runoff Model Parametrization and Performance to Potential Evaporation Inputs

    NASA Astrophysics Data System (ADS)

    Jayathilake, D. I.; Smith, T. J.

    2017-12-01

    Many watersheds of interest are confronted with insufficient data and poor process understanding. Therefore, understanding the relative importance of input data types, and the impact of their differing quality on model performance, parameterization, and fidelity, is critically important to improving hydrologic models. In this paper, changes in model parameterization and performance are explored with respect to four potential evapotranspiration (PET) products of varying quality. For each PET product, two widely used conceptual rainfall-runoff models are calibrated with multiple objective functions to a sample of 20 basins included in the MOPEX data set and analyzed to understand how model behavior varied. Model results are further analyzed by classifying catchments as energy- or water-limited using the Budyko framework. The results demonstrated that model fit was largely unaffected by the quality of the PET inputs. However, model parameterizations were clearly sensitive to PET inputs, as their production parameters adjusted to counterbalance input errors. Despite this, changes in model robustness were not observed for either model across the four PET products, although robustness was affected by model structure.
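    The Budyko-style energy/water-limited classification used in this record reduces to a threshold on the aridity index PET/P. A minimal sketch (the threshold of 1 is the standard Budyko boundary; the numbers in the usage are hypothetical):

```python
def budyko_class(pet, precip):
    """Classify a catchment by its aridity index PET/P:
    PET > P means demand exceeds supply (water-limited),
    otherwise available energy is the constraint (energy-limited)."""
    aridity = pet / precip
    return "water-limited" if aridity > 1.0 else "energy-limited"

# Hypothetical annual totals in mm.
print(budyko_class(1200, 800))
print(budyko_class(600, 900))
```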

  7. Genetic algorithm based input selection for a neural network function approximator with applications to SSME health monitoring

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.

    1991-01-01

    A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Because of their optimization and space-searching capabilities, genetic algorithms were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
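    A generic bit-mask genetic algorithm for input selection might look like the sketch below. This is not the authors' implementation; the population size, mutation rate, and fitness function are arbitrary placeholders (in practice, fitness would score how well a network trained on the masked inputs approximates the target):

```python
import random

def ga_select(n_inputs, fitness, pop_size=20, generations=30, p_mut=0.05, seed=0):
    """Evolve bit-masks over candidate inputs; fitness(mask) scores a subset.
    Returns the best mask found (1 = input selected)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_inputs)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection: keep top half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)       # single-point crossover
            cut = rng.randrange(1, n_inputs)
            child = a[:cut] + b[cut:]
            # bit-flip mutation with probability p_mut per bit
            children.append([bit ^ (rng.random() < p_mut) for bit in child])
        pop = parents + children
    return max(pop, key=fitness)
```

    As a toy check, a fitness that rewards matching a known "good" mask drives the search toward that mask.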

  8. Nitrogen use efficiency and crop production: Patterns of regional variation in the United States, 1987-2012.

    PubMed

    Swaney, Dennis P; Howarth, Robert W; Hong, Bongghi

    2018-04-17

    National-level summaries of crop production and nutrient use efficiency, important for international comparisons, only partially elucidate agricultural dynamics within a country. Agricultural production and associated environmental impacts in large countries vary significantly because of regional differences in crops, climate, resource use and production practices. Here, we review patterns of regional crop production, nitrogen use efficiency (NUE), and major inputs of nitrogen to US crops over 1987-2012, based on the Farm Resource Regions developed by the Economic Research Service (USDA-ERS). Across the US, NUE generally decreased over the period studied, mainly due to increased use of mineral N fertilizer above crop N requirements. The Heartland region dominates production of major crops and thus tends to drive national patterns, showing linear response of crop production to nitrogen inputs broadly consistent with an earlier analysis of global patterns of country-scale data by Lassaletta et al. (2014). Most other regions show similar responses, but the Eastern Uplands region shows a negative response to nitrogen inputs, and the Southern Seaboard shows no significant relationship. The regional differences appear as two branches in the response of aggregate production to N inputs on a cropland area basis, but not on a total area basis, suggesting that the type of scaling used is critical under changing cropland area. NUE is positively associated with fertilizer as a percentage of N inputs in four regions, and in all regions considered together. NUE is positively associated with crop N fixation in all regions except the Northern Great Plains. It is negatively associated with manure (livestock excretion); in the US, manure is still treated largely as a waste to be managed rather than a nutrient resource.
    This significant regional variation in patterns of crop production and NUE vs. N inputs has implications for environmental quality and food security.

  9. Sequential use of simulation and optimization in analysis and planning

    Treesearch

    Hans R. Zuuring; Jimmie D. Chew; J. Greg Jones

    2000-01-01

    Management activities are analyzed at landscape scales employing both simulation and optimization. SIMPPLLE, a stochastic simulation modeling system, is initially applied to assess the risks associated with a specific natural process occurring on the current landscape without management treatments, but with fire suppression. These simulation results are input into...

  10. Fertilizer consumption and energy input for 16 crops in the United States

    USGS Publications Warehouse

    Amenumey, Sheila E.; Capel, Paul D.

    2014-01-01

    Fertilizer use by U.S. agriculture has increased over the past few decades. The production and transportation of fertilizers (nitrogen, N; phosphorus, P; potassium, K) are energy intensive. In general, about a third of the total energy input to crop production goes to the production of fertilizers, one-third to mechanization, and one-third to other inputs including labor, transportation, pesticides, and electricity. For some crops, fertilizer is the largest proportion of total energy inputs. Energy required for the production and transportation of fertilizers, as a percentage of total energy input, was determined for 16 crops in the U.S. to be: 19–60% for seven grains, 10–41% for two oilseeds, 25% for potatoes, 12–30% for three vegetables, 2–23% for two fruits, and 3% for dry beans. The harvested-area weighted-average of the fraction of crop fertilizer energy to the total input energy was 28%. The current sources of fertilizers for U.S. agriculture are dependent on imports, availability of natural gas, or limited mineral resources. Given these dependencies plus the high energy costs for fertilizers, an integrated approach for their efficient and sustainable use is needed that will simultaneously maintain or increase crop yields and food quality while decreasing adverse impacts on the environment.
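    The harvested-area weighted average quoted above is a straightforward weighted mean across crops. A minimal sketch with made-up numbers (not the study's per-crop data):

```python
def area_weighted_fraction(crops):
    """Harvested-area weighted average of per-crop fertilizer energy fractions.
    crops: list of (harvested_area, fertilizer_energy_fraction) pairs."""
    total_area = sum(area for area, _ in crops)
    return sum(area * frac for area, frac in crops) / total_area

# Hypothetical: a small crop at 20% and a large crop at 40% fertilizer energy.
print(area_weighted_fraction([(100, 0.2), (300, 0.4)]))
```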

  11. Adaptive hybrid optimal quantum control for imprecisely characterized systems.

    PubMed

    Egger, D J; Wilhelm, F K

    2014-06-20

    Optimal quantum control theory carries a huge promise for quantum technology. Its experimental application, however, is often hindered by imprecise knowledge of the input variables, the quantum system's parameters. We show how to overcome this by adaptive hybrid optimal control, using a protocol named Ad-HOC. This protocol combines open- and closed-loop optimal control by first performing a gradient search towards a near-optimal control pulse and then an experimental fidelity estimation with a gradient-free method. For typical settings in solid-state quantum information processing, adaptive hybrid optimal control enhances gate fidelities by an order of magnitude, making optimal control theory applicable and useful.

  12. Classification of wines according to their production regions with the contained trace elements using laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Tian, Ye; Yan, Chunhua; Zhang, Tianlong; Tang, Hongsheng; Li, Hua; Yu, Jialu; Bernard, Jérôme; Chen, Li; Martin, Serge; Delepine-Gilon, Nicole; Bocková, Jana; Veis, Pavel; Chen, Yanping; Yu, Jin

    2017-09-01

    Laser-induced breakdown spectroscopy (LIBS) has been applied to classify French wines according to their production regions. The use of the surface-assisted (or surface-enhanced) sample preparation method enabled a sub-ppm limit of detection (LOD), which led to the detection and identification of at least 22 metal and nonmetal elements in a typical wine sample, including majors, minors and traces. An ensemble of 29 bottles of French wines, either red or white, from five production regions, Alsace, Bourgogne, Beaujolais, Bordeaux and Languedoc, was analyzed together with a wine from California, considered an outlier. A non-supervised classification model based on principal component analysis (PCA) was first developed for the classification. The results showed the limited separation power of the model, which nevertheless made it possible, in a step-by-step approach, to understand the physical reasons behind each step of sample separation and, especially, to observe the influence of the matrix effect on sample classification. A supervised classification model was then developed based on random forest (RF), which is, in addition, a nonlinear algorithm. The classification results were satisfactory: with optimized model parameters, a classification accuracy of 100% was obtained for the tested samples. In particular, we discuss the effect of spectrum normalization with an internal reference, the choice of input variables for the classification models, and the optimization of parameters for the developed classification models.

  13. Analysis of long term trends of precipitation estimates acquired using radar network in Turkey

    NASA Astrophysics Data System (ADS)

    Tugrul Yilmaz, M.; Yucel, Ismail; Kamil Yilmaz, Koray

    2016-04-01

    Precipitation estimates, a vital input in many hydrological and agricultural studies, can be obtained from many different platforms (ground station-, radar-, model-, satellite-based). Satellite- and model-based estimates are spatially continuous datasets, but they lack the high-resolution information many applications require. Station-based values are actual precipitation observations, but by nature they are point data. These datasets may be interpolated, but such end-products may have large errors over remote locations whose climate and topography differ from the areas where stations are installed. Radars have the particular advantage of providing high-spatial-resolution information over land, even though the accuracy of radar-based precipitation estimates depends on the Z-R relationship, mountain blockage, target distance from the radar, spurious echoes resulting from anomalous propagation of the radar beam, bright-band contamination and ground clutter. A viable method to obtain spatially and temporally high-resolution, consistent precipitation information is merging radar and station data to take advantage of each retrieval platform. An optimally merged product is particularly important in Turkey, where complex topography exerts strong controls on the precipitation regime and in turn hampers observation efforts. There are currently 10 weather radars over Turkey (an additional 7 are planned) that have been collecting precipitation information since 2007. This study aims to optimally merge radar precipitation data with station-based observations to introduce a station-radar blended precipitation product. This study was supported by TUBITAK fund # 114Y676.

  14. Legacy Phosphorus Effect and Need to Re-calibrate Soil Test P Methods for Organic Crop Production.

    NASA Astrophysics Data System (ADS)

    Dao, Thanh H.; Schomberg, Harry H.; Cavigelli, Michel A.

    2015-04-01

    Phosphorus (P) is a required nutrient for the normal development and growth of plants and supplemental P is needed in most cultivated soils. Large inputs of cover crop residues and nutrient-rich animal manure are added to supply needed nutrients to promote the optimal production of organic grain crops and forages. The effects of crop rotations and tillage management of the near-surface zone on labile phosphorus (P) forms were studied in soil under conventional and organic crop management systems in the mid-Atlantic region of the U.S. after 18 years due to the increased interest in these alternative systems. Soil nutrient surpluses likely caused by low grain yields resulted in large pools of exchangeable phosphate-P and equally large pools of enzyme-labile organic P (Po) in soils under organic management. In addition, the difference in the P loading rates between the conventional and organic treatments as guided by routine soil test recommendations suggested that overestimating plant P requirements contributed to soil P surpluses because routine soil testing procedures did not account for the presence and size of the soil enzyme-labile Po pool. The effect of large P additions is long-lasting as they continued to contribute to elevated soil total bioactive P concentrations 12 or more years later. Consequently, accurate estimates of crop P requirements, P turnover in soil, and real-time plant and soil sensing systems are critical considerations to optimally manage manure-derived nutrients in organic crop production.

  15. A mixed-unit input-output model for environmental life-cycle assessment and material flow analysis.

    PubMed

    Hawkins, Troy; Hendrickson, Chris; Higgins, Cortney; Matthews, H Scott; Suh, Sangwon

    2007-02-01

    Material flow analysis models have traditionally been used to track the production, use, and consumption of materials. Economic input-output modeling has been used for environmental systems analysis, with a primary benefit being the capability to estimate direct and indirect economic and environmental impacts across the entire supply chain of production in an economy. We combine these two types of models to create a mixed-unit input-output model that is able to better track economic transactions and material flows throughout the economy associated with changes in production. A 13 by 13 economic input-output direct requirements matrix developed by the U.S. Bureau of Economic Analysis is augmented with material flow data derived from those published by the U.S. Geological Survey in the formulation of illustrative mixed-unit input-output models for lead and cadmium. The resulting model provides the capabilities of both material flow and input-output models, with detailed material tracking through entire supply chains in response to any monetary or material demand. Examples of these models are provided along with a discussion of uncertainty and extensions to these models.
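    The total-requirements calculation behind such input-output models solves x = Ax + d, where A is the direct requirements matrix and d is final demand. A minimal fixed-point sketch (the 2x2 matrix in the usage is illustrative, not the 13-by-13 BEA table):

```python
def total_requirements(A, d, tol=1e-12, max_iter=10000):
    """Solve x = A x + d by fixed-point iteration, i.e. sum the series
    d + A d + A^2 d + ..., which converges for a productive economy
    (spectral radius of A below 1)."""
    n = len(d)
    x = d[:]
    for _ in range(max_iter):
        x_new = [d[i] + sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

# Illustrative 2-sector economy: sector 0 uses both outputs, sector 1 only its own.
print(total_requirements([[0.2, 0.1], [0.0, 0.3]], [100.0, 70.0]))
```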

  16. Optimal Implementations for Reliable Circadian Clocks

    NASA Astrophysics Data System (ADS)

    Hasegawa, Yoshihiko; Arita, Masanori

    2014-09-01

    Circadian rhythms are acquired through evolution to increase the chances for survival through synchronizing with the daylight cycle. Reliable synchronization is realized through two trade-off properties: regularity to keep time precisely, and entrainability to synchronize the internal time with daylight. We find by using a phase model with multiple inputs that achieving the maximal limit of regularity and entrainability entails many inherent features of the circadian mechanism. At the molecular level, we demonstrate the role sharing of two light inputs, phase advance and delay, as is well observed in mammals. At the behavioral level, the optimal phase-response curve inevitably contains a dead zone, a time during which light pulses neither advance nor delay the clock. We reproduce the results of phase-controlling experiments entrained by two types of periodic light pulses. Our results indicate that circadian clocks are designed optimally for reliable clockwork through evolution.

  17. On optimal control of linear systems in the presence of multiplicative noise

    NASA Technical Reports Server (NTRS)

    Joshi, S. M.

    1976-01-01

    This correspondence considers the problem of optimal regulator design for discrete time linear systems subjected to white state-dependent and control-dependent noise in addition to additive white noise in the input and the observations. A pseudo-deterministic problem is first defined in which multiplicative and additive input disturbances are present, but noise-free measurements of the complete state vector are available. This problem is solved via discrete dynamic programming. Next, the problem is formulated in which the number of measurements is less than the number of state variables and the measurements are contaminated with state-dependent noise. The inseparability of control and estimation is brought into focus, and an 'enforced separation' solution is obtained via heuristic reasoning in which the control gains are shown to be the same as those in the pseudo-deterministic problem. An optimal linear state estimator is given in order to implement the controller.

  18. Global Optimization Ensemble Model for Classification Methods

    PubMed Central

    Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab

    2014-01-01

    Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, each with its own advantages and drawbacks. Some basic issues affect the accuracy of a classifier when solving a supervised learning problem, such as the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data. All of these problems affect classifier accuracy and are the reason there is no globally optimal method for classification. There is no generalized improvement method that can increase the accuracy of any classifier while addressing all of the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30% depending upon the algorithm complexity. PMID:24883382
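    In general terms, ensemble methods combine the outputs of several base classifiers; a simple majority-vote combiner illustrates the idea (this is a generic sketch, not the proposed GMC model):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label lists into one prediction per sample
    by majority vote; ties are broken by the first-seen label."""
    combined = []
    for labels in zip(*predictions):  # one tuple of votes per sample
        combined.append(Counter(labels).most_common(1)[0][0])
    return combined

# Three classifiers voting on three samples (hypothetical labels).
print(majority_vote([[0, 1, 1], [0, 0, 1], [1, 1, 1]]))
```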

  19. Features Extraction of Flotation Froth Images and BP Neural Network Soft-Sensor Model of Concentrate Grade Optimized by Shuffled Cuckoo Searching Algorithm

    PubMed Central

    Wang, Jie-sheng; Han, Shuang; Shen, Na-na; Li, Shu-xia

    2014-01-01

    For meeting the forecasting target of key technology indicators in the flotation process, a BP neural network soft-sensor model based on features extraction from flotation froth images and optimized by a shuffled cuckoo search algorithm is proposed. Based on digital image processing techniques, the color features in HSI color space, the visual features based on the gray-level co-occurrence matrix, and the shape characteristics based on the geometric theory of flotation froth images are extracted, respectively, as the input variables of the proposed soft-sensor model. Then the isometric mapping method is used to reduce the input dimension, the network size, and the learning time of the BP neural network. Finally, a shuffled cuckoo search algorithm is adopted to optimize the BP neural network soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy. PMID:25133210

  20. Dynamic Analysis of the Temperature and the Concentration Profiles of an Industrial Rotary Kiln Used in Clinker Production.

    PubMed

    Rodrigues, Diulia C Q; Soares, Atílio P; Costa, Esly F; Costa, Andréa O S

    2017-01-01

    Cement is one of the most used building materials in the world. The process of cement production involves numerous and complex reactions that occur under different temperatures. Thus, there is great interest in the optimization of cement manufacturing. Clinker production is one of the main steps of cement production and it occurs inside the kiln. In this paper, the dry process of clinker production is analysed in a rotary kiln that operates in counter flow. The main phenomena involved in clinker production are as follows: evaporation of free residual water from the raw material, decomposition of magnesium carbonate, decarbonation, formation of C3A and C4AF, formation of dicalcium silicate, and formation of tricalcium silicate. The main objective of this study was to propose a mathematical model that realistically describes the temperature profile and the concentration of clinker components in a real rotary kiln. In addition, the influence of different inlet velocities of gas and solids on the system was analysed. The mathematical model is composed of partial differential equations. The model was implemented in Mathcad (available at CCA/UFES) and solved using industrial input data. The proposed model satisfactorily describes the temperature and concentration profiles of a real rotary kiln.

  1. Shifts in allochthonous input and autochthonous production in streams along an agricultural land-use gradient

    Treesearch

    Elizabeth Hagen; Matthew McTammany; Jackson Webster; Ernest Benfield

    2010-01-01

    Relative contributions of allochthonous inputs and autochthonous production vary depending on terrestrial land use and biome. Terrestrially derived organic matter and in-stream primary production were measured in 12 headwater streams along an agricultural land-use gradient. Streams were examined to see how carbon (C) supply shifts from forested streams receiving...

  2. Econometric analysis of fire suppression production functions for large wildland fires

    Treesearch

    Thomas P. Holmes; David E. Calkin

    2013-01-01

    In this paper, we use operational data collected for large wildland fires to estimate the parameters of economic production functions that relate the rate of fireline construction with the level of fire suppression inputs (handcrews, dozers, engines and helicopters). These parameter estimates are then used to evaluate whether the productivity of fire suppression inputs...

  3. Using a Polytope to Estimate Efficient Production Functions of Joint Product Processes.

    ERIC Educational Resources Information Center

    Simpson, William A.

    In the last decade, a modeling technique has been developed to handle complex input/output analyses where outputs involve joint products and there are no known mathematical relationships linking the outputs or inputs. The technique uses the geometrical concept of a six-dimensional shape called a polytope to analyze the efficiency of each…

  4. Aero/structural tailoring of engine blades (AERO/STAEBL)

    NASA Technical Reports Server (NTRS)

    Brown, K. W.

    1988-01-01

    This report describes the Aero/Structural Tailoring of Engine Blades (AERO/STAEBL) program, which is a computer code used to perform engine fan and compressor blade aero/structural numerical optimizations. These optimizations seek a blade design of minimum operating cost that satisfies realistic blade design constraints. This report documents the overall program (i.e., input, optimization procedures, approximate analyses) and also provides a detailed description of the validation test cases.

  5. Feed and manure use in low-N-input and high-N-input dairy cattle production systems

    NASA Astrophysics Data System (ADS)

    Powell, J. Mark

    2014-11-01

    In most parts of Sub-Saharan Africa fertilizers and feeds are costly, not readily available and used sparingly in agricultural production. In many parts of Western Europe, North America, and Oceania fertilizers and feeds are relatively inexpensive, readily available and used abundantly to maximize profitable agricultural production. A case-study, dairy-systems approach was used to illustrate how differences in feed and manure management in a low-N-input dairy cattle system (Niger, West Africa) and a high-N-input dairy production system (Wisconsin, USA) impact agricultural production and environmental N loss. In Niger, an additional daily feed N intake of 114 g per dairy animal unit (AU, 1000 kg live weight) could increase annual milk production from 560 to 1320 kg AU-1, and the additional manure N could greatly increase millet production. In Wisconsin, reductions in daily feed N intake of 100 g AU-1 would not greatly impact milk production but would decrease urinary N excretion by 25% and ammonia and nitrous oxide emissions from manure by 18% to 30%. In Niger, compared to the practice of housing livestock and applying dung only onto fields, corralling cattle or sheep on cropland (to capture urinary N) increased millet yields by 25% to 95%. The additional millet grain due to dung applications or corralling would satisfy the annual food grain requirements of 2-5 persons; the additional forage would provide 120-300 more days of feed for a typical head of cattle; and 850 to 1600 kg ha-1 more biomass would be available for soil conservation. In Wisconsin, compared to application of barn manure only, corralling heifers in fields increased forage production by only 8% to 11%. The application of barn manure or corralling increased forage production by 20% to 70%. This additional forage would provide 350-580 more days of feed for a typical dairy heifer.
Study results demonstrate how different approaches to feed and manure management in low-N-input and high-N-input dairy cattle systems impact milk production, manure N excretion, manure N capture, N recycling and environmental N loss.

  6. Optimal Control-Based Adaptive NN Design for a Class of Nonlinear Discrete-Time Block-Triangular Systems.

    PubMed

    Liu, Yan-Jun; Tong, Shaocheng

    2016-11-01

    In this paper, we propose an optimal control scheme-based adaptive neural network design for a class of unknown nonlinear discrete-time systems. The controlled systems have a block-triangular multi-input-multi-output pure-feedback structure, i.e., every equation of each subsystem contains both state and input couplings and nonaffine functions. The design objective is to provide a control scheme that not only guarantees the stability of the systems but also achieves optimal control performance. The main contribution of this paper is that it achieves optimal performance for this class of systems for the first time. Owing to the interactions among subsystems, constructing an optimal control signal is difficult. The design ideas are as follows: 1) the systems are transformed into an output predictor form; 2) for the output predictor, the ideal control signal and the strategic utility function are approximated by an action network and a critic network, respectively; and 3) an optimal control signal is constructed, with the weight update rules designed based on a gradient descent method. The stability of the systems is proved using the difference Lyapunov method. Finally, a numerical simulation illustrates the performance of the proposed scheme.
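    As a toy illustration of the action-network/critic-network idea with gradient-descent weight updates, the following sketch applies the scheme to a scalar discrete-time plant. The plant x_{k+1} = 0.8*x_k + u_k, the quadratic stage cost, and all gains are assumptions for illustration, not the paper's block-triangular system.

    ```python
    import numpy as np

    # Illustrative actor-critic sketch (assumed toy system, not the paper's):
    # a critic approximates the cost-to-go, an actor produces the control,
    # and both weights follow gradient-descent update rules.

    rng = np.random.default_rng(0)
    wa = float(rng.normal(scale=0.1))   # actor weight:  u = tanh(wa * x)
    wc = float(rng.normal(scale=0.1))   # critic weight: V(x) ~ wc * x^2
    lr, gamma = 0.01, 0.9
    x = 1.0
    for _ in range(500):
        u = np.tanh(wa * x)                       # action network output
        x_next = 0.8 * x + u
        cost = x_next**2 + 0.1 * u**2             # stage cost (utility)
        td = cost + gamma * wc * x_next**2 - wc * x**2   # TD error
        # critic: gradient descent on td^2 with respect to wc
        wc -= lr * td * (gamma * x_next**2 - x**2)
        # actor: descend the predicted cost-to-go with respect to wa
        dJ_du = 0.2 * u + gamma * wc * 2.0 * x_next      # d(cost + gamma*V)/du
        wa -= lr * dJ_du * x * (1.0 - u**2)              # chain rule via tanh
        x = x_next
    ```

    The tanh actor keeps the control bounded, so the state stays bounded while the two update rules interact, mirroring the coupled actor-critic learning described in the abstract.
    
    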

  7. Scope for improved eco-efficiency varies among diverse cropping systems.

    PubMed

    Carberry, Peter S; Liang, Wei-li; Twomlow, Stephen; Holzworth, Dean P; Dimes, John P; McClelland, Tim; Huth, Neil I; Chen, Fu; Hochman, Zvi; Keating, Brian A

    2013-05-21

    Global food security requires eco-efficient agriculture to produce the required food and fiber products concomitant with ecologically efficient use of resources. This eco-efficiency concept is used to diagnose the state of agricultural production in China (irrigated wheat-maize double-cropping systems), Zimbabwe (rainfed maize systems), and Australia (rainfed wheat systems). More than 3,000 surveyed crop yields in these three countries were compared against simulated grain yields at farmer-specified levels of nitrogen (N) input. Many Australian commercial wheat farmers are both close to existing production frontiers and gain little prospective return from increasing their N input. Significant losses of N from their systems, either as nitrous oxide emissions or as nitrate leached from the soil profile, are infrequent and at low intensities relative to their level of grain production. These Australian farmers operate close to eco-efficient frontiers in regard to N, and so innovations in technologies and practices are essential to increasing their production without added economic or environmental risks. In contrast, many Chinese farmers can reduce N input without sacrificing production through more efficient use of their fertilizer input. In fact, there are real prospects for the double-cropping systems on the North China Plain to achieve both production increases and reduced environmental risks. Zimbabwean farmers have the opportunity for significant production increases by both improving their technical efficiency and increasing their level of input; however, doing so will require improved management expertise and greater access to institutional support for addressing the higher risks. This paper shows that pathways for achieving improved eco-efficiency will differ among diverse cropping systems.

  9. The Impact of Input and Output Prices on The Household Economic Behavior of Rice-Livestock Integrated Farming System (Rlifs) and Non Rlifs Farmers

    NASA Astrophysics Data System (ADS)

    Lindawati, L.; Kusnadi, N.; Kuntjoro, S. U.; Swastika, D. K. S.

    2018-02-01

    An integrated farming system emphasizes linkages and synergies among farming units through waste utilization. The objective of the study was to analyze the impact of input and output prices on both Rice-Livestock Integrated Farming System (RLIFS) and non-RLIFS farmers. The study used an econometric model in the form of a simultaneous-equations system consisting of 36 equations (18 behavioral and 18 identity equations). The impact of changes in selected variables was obtained by simulating input and output prices within the simultaneous equations. The results showed that increases in the prices of seed, SP-36, urea, medication/vitamins, manure, bran, and straw had a negative impact on the production of rice, cattle, manure, bran, and straw, and on household income. The decreases in rice and cattle production, production-input usage, family-labor allocation, and rice and cattle business income were greater for RLIFS than for non-RLIFS farmers. The impact of rising rice and cattle prices did not differ much between the two groups of farmers because (1) farming waste was not used effectively, and (2) manure and straw accounted for a small proportion of production costs. The increase in input and output prices had no impact on the production costs and household expenditures of RLIFS farmers.

  10. Growth and yield model application in tropical rain forest management

    Treesearch

    James Atta-Boateng; John W., Jr. Moser

    2000-01-01

    Analytical tools are needed to evaluate the impact of management policies on the sustainable use of rain forest. Optimal decisions concerning the level of management inputs require accurate predictions of output at all relevant input levels. Using growth data from 40 1-hectare permanent plots obtained from the semi-deciduous forest of Ghana, a system of 77 differential...

  11. Stiffness modeling of compliant parallel mechanisms and applications in the performance analysis of a decoupled parallel compliant stage

    NASA Astrophysics Data System (ADS)

    Jiang, Yao; Li, Tie-Min; Wang, Li-Ping

    2015-09-01

    This paper investigates the stiffness modeling of compliant parallel mechanisms (CPMs) based on the matrix method. First, the general compliance matrix of a serial flexure chain is derived. The stiffness modeling of CPMs is then discussed in detail, considering the relative positions of the applied load and the selected displacement output point. The derived stiffness models have simple and explicit forms, and the input, output, and coupling stiffness matrices of a CPM can easily be obtained. The proposed analytical model is applied to the stiffness modeling and performance analysis of an XY parallel compliant stage with input and output decoupling characteristics. The key geometrical parameters of the stage are then optimized to minimize the degree of input coupling. Finally, a prototype of the compliant stage is developed and its input axial stiffness, coupling characteristics, positioning resolution, and circular contouring performance are tested. The results demonstrate the excellent performance of the compliant stage and verify the effectiveness of the proposed theoretical model. The general stiffness models provided in this paper will be helpful for performance analysis, especially in determining coupling characteristics, and for the structural optimization of CPMs.
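    The matrix method for a serial flexure chain can be sketched as follows: each hinge's local compliance matrix is transported to the tip frame with an adjoint transform and summed. This is a minimal planar example with assumed hinge compliances and positions, not the paper's stage geometry.

    ```python
    import numpy as np

    # Planar (x, y, theta) sketch of the matrix method for a serial flexure
    # chain: C_tip = sum_i Ad_i @ C_i @ Ad_i.T. All values are illustrative.

    def planar_adjoint(rx, ry):
        """Adjoint for a pure translation (rx, ry) from hinge frame to tip."""
        return np.array([[1.0, 0.0, -ry],
                         [0.0, 1.0,  rx],
                         [0.0, 0.0,  1.0]])

    # local hinge compliance: soft in rotation, stiff in translation (assumed)
    C_hinge = np.diag([1e-8, 1e-8, 1e-3])   # m/N, m/N, rad/(N*m)

    # two identical hinges, 40 mm and 10 mm from the tip along x
    hinges = [(0.040, 0.0), (0.010, 0.0)]
    C_tip = sum(planar_adjoint(rx, ry) @ C_hinge @ planar_adjoint(rx, ry).T
                for (rx, ry) in hinges)

    K_tip = np.linalg.inv(C_tip)            # tip stiffness matrix
    # off-diagonal terms of K_tip indicate coupling between directions
    ```

    Because each term is a congruence transform of a positive-definite compliance, the summed C_tip stays symmetric positive definite, so the stiffness matrix is obtained simply by inversion.
    
    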

  12. A Reward-Maximizing Spiking Neuron as a Bounded Rational Decision Maker.

    PubMed

    Leibfried, Felix; Braun, Daniel A

    2015-08-01

    Rate distortion theory describes how to communicate relevant information most efficiently over a channel with limited capacity. One of the many applications of rate distortion theory is bounded rational decision making, where decision makers are modeled as information channels that transform sensory input into motor output under the constraint that their channel capacity is limited. Such a bounded rational decision maker can be thought of as optimizing an objective function that trades off the decision maker's utility or cumulative reward against the information processing cost measured by the mutual information between sensory input and motor output. In this study, we interpret a spiking neuron as a bounded rational decision maker that aims to maximize its expected reward under the computational constraint that the mutual information between the neuron's input and output is upper bounded. This abstract computational constraint translates into a penalization of the deviation between the neuron's instantaneous and average firing behavior. We derive a synaptic weight update rule for such a rate distortion optimizing neuron and show in simulations that the neuron efficiently extracts reward-relevant information from the input by trading off its synaptic strengths against the collected reward.

  13. Estimation of the longitudinal and lateral-directional aerodynamic parameters from flight data for the NASA F/A-18 HARV

    NASA Technical Reports Server (NTRS)

    Napolitano, Marcello R.

    1996-01-01

    This progress report presents the results of an investigation focused on parameter identification for the NASA F/A-18 HARV. This aircraft was used in the high alpha research program at the NASA Dryden Flight Research Center. In this study the longitudinal and lateral-directional stability derivatives are estimated from flight data using the Maximum Likelihood method coupled with a Newton-Raphson minimization technique. The objective is to estimate an aerodynamic model describing the aircraft dynamics over a range of angle of attack from 5 deg to 60 deg. The mathematical model is built using the traditional static and dynamic derivative buildup. Flight data used in this analysis were from a variety of maneuvers. The longitudinal maneuvers included large amplitude multiple doublets, optimal inputs, frequency sweeps, and pilot pitch stick inputs. The lateral-directional maneuvers consisted of large amplitude multiple doublets, optimal inputs and pilot stick and rudder inputs. The parameter estimation code pEst, developed at NASA Dryden, was used in this investigation. Results of the estimation process from alpha = 5 deg to alpha = 60 deg are presented and discussed.
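    The maximum-likelihood estimation with Newton-Raphson minimization described above can be illustrated on a generic one-state model: simulate the output for a parameter guess, form the output-error residual against the measurements, and take Gauss-Newton steps. All model values here are assumptions for illustration; this is not the pEst code or the F/A-18 model.

    ```python
    import numpy as np

    # Hedged output-error estimation sketch on an assumed 1-state pitch model
    # q_dot = M_q*q + M_de*de, with Gauss-Newton (Newton-Raphson on the
    # Gaussian log-likelihood) updates of the parameter vector.

    rng = np.random.default_rng(1)
    dt, n = 0.02, 500
    theta_true = np.array([-1.2, -4.0])          # [M_q, M_de], assumed values

    de = np.sin(0.5 * np.arange(n) * dt)         # elevator input history

    def simulate(theta):
        """Integrate q_dot = M_q*q + M_de*de with forward Euler."""
        q = np.zeros(n)
        for k in range(n - 1):
            q[k + 1] = q[k] + dt * (theta[0] * q[k] + theta[1] * de[k])
        return q

    # synthetic "flight data": true response plus measurement noise
    z = simulate(theta_true) + 0.002 * rng.normal(size=n)

    theta = np.array([-1.0, -3.0])               # initial guess
    for _ in range(5):                           # Gauss-Newton iterations
        r = z - simulate(theta)                  # output-error residual
        h = 1e-6                                 # finite-difference step
        J = np.column_stack([(simulate(theta + h * e) - simulate(theta)) / h
                             for e in np.eye(2)])
        theta = theta + np.linalg.solve(J.T @ J, J.T @ r)
    ```

    With Gaussian measurement noise, minimizing this sum of squared residuals coincides with maximizing the likelihood, which is why the abstract's Maximum Likelihood method reduces to an iterative least-squares update of the stability derivatives.
    
    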

  14. Parameters optimization of laser brazing in crimping butt using Taguchi and BPNN-GA

    NASA Astrophysics Data System (ADS)

    Rong, Youmin; Zhang, Zhen; Zhang, Guojun; Yue, Chen; Gu, Yafei; Huang, Yu; Wang, Chunming; Shao, Xinyu

    2015-04-01

    Laser brazing (LB) is widely used in the automotive industry owing to its high speed, small heat-affected zone, high weld-seam quality, and low heat input. Welding parameters play a significant role in determining the bead geometry and hence the quality of the weld joint. This paper addresses the optimization of the seam shape in the LB process for a crimping butt joint of 0.8 mm thickness using a back propagation neural network (BPNN) and a genetic algorithm (GA). A 3-factor, 5-level welding experiment is conducted via a Taguchi L25 orthogonal array using the statistical design method. The input parameters are welding speed (WS), wire feed rate (WF), and gap (GAP), each at five levels. The output responses are the efficient connection lengths of the left and right sides and the top width (WT) and bottom width (WB) of the weld bead. The experimental results are used to train the BPNN, establishing the relationship between the input and output variables. The predictions of the BPNN are then fed to the GA, which optimizes the process parameters subject to the objectives. The effects of WS, WF, and GAP on the summed bead-geometry values are discussed. Finally, confirmation experiments demonstrate that the optimal values are effective and reliable. Overall, the proposed hybrid method, BPNN-GA, can be used to guide production practice and improve the efficiency and stability of the LB process.
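    The BPNN-GA idea, a trained surrogate evaluated inside a genetic algorithm, can be sketched as follows. A simple quadratic function stands in for the trained BPNN, and the parameter bounds are illustrative, not the paper's L25 design.

    ```python
    import numpy as np

    # Hedged sketch: a surrogate maps (welding speed, wire feed rate, gap)
    # to a weld-quality score, and a basic GA searches the surrogate.
    # The quadratic 'surrogate' is a stand-in for the trained network.

    rng = np.random.default_rng(2)
    lo = np.array([1.0, 2.0, 0.0])     # assumed lower bounds: WS, WF, GAP
    hi = np.array([5.0, 6.0, 0.5])     # assumed upper bounds

    def surrogate(x):                  # stand-in for the BPNN prediction
        target = np.array([3.2, 4.1, 0.1])
        return -np.sum((x - target) ** 2, axis=-1)   # higher is better

    pop = rng.uniform(lo, hi, size=(40, 3))
    for gen in range(60):
        fit = surrogate(pop)
        # tournament selection between random pairs
        i, j = rng.integers(0, 40, (2, 40))
        parents = np.where((fit[i] > fit[j])[:, None], pop[i], pop[j])
        # uniform crossover + Gaussian mutation, clipped to the bounds
        mask = rng.random((40, 3)) < 0.5
        children = np.where(mask, parents, parents[::-1])
        children += rng.normal(scale=0.05, size=(40, 3))
        pop = np.clip(children, lo, hi)

    best = pop[np.argmax(surrogate(pop))]
    ```

    In the paper's setting, `surrogate` would be replaced by the BPNN trained on the Taguchi experiment data, so the GA never needs additional physical welds while searching.
    
    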

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ide, Toshiki; Hofmann, Holger F.; JST-CREST, Graduate School of Advanced Sciences of Matter, Hiroshima University, Kagamiyama 1-3-1, Higashi Hiroshima 739-8530

    The information encoded in the polarization of a single photon can be transferred to a remote location by two-channel continuous-variable quantum teleportation. However, the finite entanglement used in the teleportation causes random changes in photon number. If more than one photon appears in the output, the continuous-variable teleportation accidentally produces clones of the original input photon. In this paper, we derive the polarization statistics of the N-photon output components and show that they can be decomposed into an optimal cloning term and completely unpolarized noise. We find that the accidental cloning of the input photon is nearly optimal at experimentally feasible squeezing levels, indicating that the loss of polarization information is partially compensated by the availability of clones.

  16. Selection of sampling rate for digital control of aircrafts

    NASA Technical Reports Server (NTRS)

    Katz, P.; Powell, J. D.

    1974-01-01

    The considerations in selecting the sample rates for digital control of aircraft are identified and evaluated using the optimal discrete method. A high-performance aircraft model which includes a bending mode and wind gusts was studied. The following factors influencing the selection of the sampling rates were identified: (1) the time and roughness of the response to control inputs; (2) the response to external disturbances; and (3) the sensitivity to parameter variations. It was found that the time response to a control input and the response to external disturbances limit the selection of the sampling rate. The optimal discrete regulator, the steady-state Kalman filter, and the mean response to external disturbances are calculated.

  17. Validation of a new modal performance measure for flexible controllers design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simo, J.B.; Tahan, S.A.; Kamwa, I.

    1996-05-01

    A new modal performance measure for power system stabilizer (PSS) optimization is proposed in this paper. The new method is based on modifying the square envelopes of oscillating modes in order to take their damping ratios into account while minimizing the performance index. This criterion is applied to the optimal design of flexible controllers on a multi-input-multi-output (MIMO) reduced-order model of a prototype power system. The multivariable model includes four generators, each having one input and one output. Linear time-response simulation and transient stability analysis with a nonlinear package confirm the superiority of the proposed criterion and illustrate its effectiveness in decentralized control.

  18. Stabilization of model-based networked control systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miranda, Francisco; Instituto Politécnico de Viana do Castelo, Viana do Castelo; Abreu, Carlos

    2016-06-08

    A class of networked control systems called Model-Based Networked Control Systems (MB-NCSs) is considered. Stabilization of MB-NCSs is studied using feedback controls, and stabilization is simulated for different feedbacks with the purpose of reducing the network traffic. The feedback control input is applied to a compensated model of the plant that approximates the plant dynamics and stabilizes the plant even under slow network conditions. Conditions for global exponential stabilizability, and for choosing a feedback control input for a given constant time between the information moments of the network, are derived. An optimal control problem to obtain an optimal feedback control is also presented.
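    A minimal sketch of the model-based networked control loop, assuming a scalar unstable plant and an approximate model: between network updates the controller feeds back the model state, and at each update instant the model state is reset to the measured plant state. The plant, model, gain, and update period are all illustrative assumptions.

    ```python
    # Model-based networked control sketch (assumed scalar system):
    # the controller propagates a plant model between network updates and,
    # at each update instant, resets the model to the received plant state.

    dt, h_steps = 0.001, 50          # integration step; network period = 50 steps
    a, a_hat = 1.0, 0.9              # unstable plant pole and approximate model
    k_fb = 3.0                       # feedback gain: u = -k_fb * x_model

    x, x_model = 1.0, 1.0            # plant state and model state
    for step in range(5000):
        if step % h_steps == 0:
            x_model = x              # network update: reset the model
        u = -k_fb * x_model
        x += dt * (a * x + u)                   # plant:  x' = a*x + u
        x_model += dt * (a_hat * x_model + u)   # model:  x' = a_hat*x + u
    ```

    Even though the model pole (0.9) does not match the plant pole (1.0), the periodic resets keep the model error small enough that the unstable plant is driven to the origin, which is the mechanism the abstract describes for tolerating slow networks.
    
    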

  19. The effects of particle swarm optimization algorithm on volume ignition gain of Proton-Lithium (7) pellets

    NASA Astrophysics Data System (ADS)

    Livari, As. Ali; Malekynia, B.; Livari, Ak. A.; Khoda-Bakhsh, R.

    2017-11-01

    Once it was established that ignition of nuclear fusion hinges upon the input laser energy, efforts to build giant lasers began, and energy gains were obtained for DT fuel. However, because of the neutron generation and radioactivity emitted by the DT reaction, attention also turned to the gains of fuels such as Proton-Lithium (7), and the drive toward larger and more powerful lasers continued. Here, using new versions of the particle swarm optimization algorithm, it is shown that the maximum available gain of Proton-Lithium (7) is reached only at input energies of about 10^14 J; beyond that point, higher input energy is not only unhelpful but actually decreases the efficiency.
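    A generic particle swarm optimization loop (the textbook algorithm, not the specific new versions used in the paper) can be sketched as follows, minimizing a one-dimensional stand-in objective:

    ```python
    import numpy as np

    # Minimal PSO sketch. The quadratic objective is a stand-in for the
    # negative fusion gain as a function of input energy; all coefficients
    # are the usual textbook choices, not the paper's tuned variants.

    rng = np.random.default_rng(3)

    def objective(x):                      # stand-in cost; minimum at x = 2.0
        return (x - 2.0) ** 2

    n, iters = 20, 100
    x = rng.uniform(-10, 10, n)            # particle positions
    v = np.zeros(n)                        # particle velocities
    pbest, pbest_f = x.copy(), objective(x)
    gbest = pbest[np.argmin(pbest_f)]

    w, c1, c2 = 0.7, 1.5, 1.5              # inertia and acceleration weights
    for _ in range(iters):
        r1, r2 = rng.random(n), rng.random(n)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = objective(x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]
    ```

    Each particle is pulled toward its personal best and the swarm's global best; the inertia weight `w` balances exploration against convergence, the main knob that newer PSO variants adapt.
    
    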

  20. New multirate sampled-data control law structure and synthesis algorithm

    NASA Technical Reports Server (NTRS)

    Berg, Martin C.; Mason, Gregory S.; Yang, Gen-Sheng

    1992-01-01

    A new multirate sampled-data control law structure is defined and a new parameter-optimization-based synthesis algorithm for that structure is introduced. The synthesis algorithm can be applied to multirate, multiple-input/multiple-output, sampled-data control laws having a prescribed dynamic order and structure, and a priori specified sampling/update rates for all sensors, processor states, and control inputs. The synthesis algorithm is applied to design two-input, two-output tip position controllers of various dynamic orders for a sixth-order, two-link robot arm model.
