Sample records for predicting real optimized

  1. Is optimism real?

    PubMed

    Simmons, Joseph P; Massey, Cade

    2012-11-01

    Is optimism real, or are optimistic forecasts just cheap talk? To help answer this question, we investigated whether optimistic predictions persist in the face of large incentives to be accurate. We asked National Football League (NFL) fans to predict the winner of a single game. Roughly half (the partisans) predicted a game involving their favorite team, and the other half (the neutrals) predicted a game involving 2 teams they were neutral about. Participants were promised either a small incentive ($5) or a large incentive ($50) for correctly predicting the game's winner. Optimism emerged even when incentives were large, as partisans were much more likely than neutrals to predict partisans' favorite teams to win. Strong optimism also emerged among participants whose responses to follow-up questions strongly suggested that they believed the predictions they made. This research supports the claim that optimism is real. (PsycINFO Database Record (c) 2012 APA, all rights reserved).

  2. Is Optimism Real?

    ERIC Educational Resources Information Center

    Simmons, Joseph P.; Massey, Cade

    2012-01-01

    Is optimism real, or are optimistic forecasts just cheap talk? To help answer this question, we investigated whether optimistic predictions persist in the face of large incentives to be accurate. We asked National Football League football fans to predict the winner of a single game. Roughly half (the partisans) predicted a game involving their…

  3. Computational optimization and biological evolution.

    PubMed

    Goryanin, Igor

    2010-10-01

    Modelling and optimization principles have become key concepts in many biological areas, especially in biochemistry. Definitions of objective function, fitness, and co-evolution, although they differ between biology and mathematics, are similar in a general sense. Although successful in fitting models to experimental data and in some biochemical predictions, optimization and evolutionary computations should be developed further to make more accurate real-life predictions and to deal not only with one organism in isolation, but also with communities of symbiotic and competing organisms. One future goal will be to explain and predict evolution not only for organisms in shake flasks or fermenters, but for real competitive multispecies environments.

  4. Predictive optimal control of sewer networks using CORAL tool: application to Riera Blanca catchment in Barcelona.

    PubMed

    Puig, V; Cembrano, G; Romera, J; Quevedo, J; Aznar, B; Ramón, G; Cabot, J

    2009-01-01

    This paper deals with the global control of the Riera Blanca catchment in the Barcelona sewer network using a predictive optimal control approach. The catchment has been modelled using a conceptual approach based on decomposing it into subcatchments and representing them as virtual tanks; this conceptual modelling approach allows real-time model calibration and control of the sewer network. The global control problem of the Riera Blanca catchment is solved using an optimal/predictive control algorithm, implemented in a software tool named CORAL. The on-line control is simulated by interfacing CORAL with a high-fidelity simulator of sewer networks (MOUSE); CORAL exchanges limnimeter readings and gate commands with MOUSE as if it were connected to the real SCADA system. Finally, the global control results obtained using predictive optimal control are presented and compared against those obtained using the current local control system, and prove very satisfactory by comparison.

  5. Development of Predictive Energy Management Strategies for Hybrid Electric Vehicles

    NASA Astrophysics Data System (ADS)

    Baker, David

    Studies have shown that obtaining and utilizing information about the future state of vehicles can improve vehicle fuel economy (FE). However, there has been a lack of research into the impact of real-world prediction error on FE improvements, and whether near-term technologies can be utilized to improve FE. This study seeks to research the effect of prediction error on FE. First, a speed prediction method is developed, and trained with real-world driving data gathered only from the subject vehicle (a local data collection method). This speed prediction method informs a predictive powertrain controller to determine the optimal engine operation for various prediction durations. The optimal engine operation is input into a high-fidelity model of the FE of a Toyota Prius. A tradeoff analysis between prediction duration and prediction fidelity was completed to determine what duration of prediction resulted in the largest FE improvement. Results demonstrate that 60-90 second predictions resulted in the highest FE improvement over the baseline, achieving up to a 4.8% FE increase. A second speed prediction method utilizing simulated vehicle-to-vehicle (V2V) communication was developed to understand if incorporating near-term technologies could be utilized to further improve prediction fidelity. This prediction method produced lower variation in speed prediction error, and was able to realize a larger FE improvement over the local prediction method for longer prediction durations, achieving up to 6% FE improvement. This study concludes that speed prediction and prediction-informed optimal vehicle energy management can produce FE improvements with real-world prediction error and drive cycle variability, as up to 85% of the FE benefit of perfect speed prediction was achieved with the proposed prediction methods.

  6. Real coded genetic algorithm for fuzzy time series prediction

    NASA Astrophysics Data System (ADS)

    Jain, Shilpa; Bisht, Dinesh C. S.; Singh, Phool; Mathpal, Prakash C.

    2017-10-01

    Genetic Algorithms (GA) form a subset of evolutionary computing, a rapidly growing area of Artificial Intelligence (AI). Some variants of GA are binary GA, real GA, messy GA, micro GA, sawtooth GA, and differential evolution GA. This research article presents a real-coded GA for predicting enrollments of the University of Alabama, whose enrollment data form a fuzzy time series. Here, fuzzy logic is used to predict enrollments and a genetic algorithm optimizes the fuzzy intervals. Results are compared with those of other published works and found satisfactory, indicating that real-coded GAs are fast and accurate.
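
A minimal sketch of the real-coded GA idea described above: floating-point genomes evolved with tournament selection, arithmetic (blend) crossover, and Gaussian mutation. The objective used here (equalizing the widths of intervals on [0, 1]) is only a stand-in for the fuzzy-interval tuning in the paper; all names and parameter values are illustrative.

```python
import random

def real_coded_ga(objective, dim, bounds, pop_size=30, generations=100,
                  crossover_rate=0.9, mutation_rate=0.1, seed=0):
    """Minimize `objective` over a box using a real-coded GA with elitism."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=objective)
    for _ in range(generations):
        new_pop = [best[:]]                      # elitism: keep the best genome
        while len(new_pop) < pop_size:
            p1 = min(rng.sample(pop, 3), key=objective)   # tournament selection
            p2 = min(rng.sample(pop, 3), key=objective)
            child = p1[:]
            if rng.random() < crossover_rate:    # arithmetic (blend) crossover
                a = rng.random()
                child = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
            for i in range(dim):                 # Gaussian mutation, clipped to box
                if rng.random() < mutation_rate:
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            new_pop.append(child)
        pop = new_pop
        best = min(pop, key=objective)
    return best, objective(best)

# Stand-in objective: place 6 interior cut points so interval widths are equal.
def unevenness(cuts):
    edges = sorted([0.0] + list(cuts) + [1.0])
    widths = [b - a for a, b in zip(edges, edges[1:])]
    mean = sum(widths) / len(widths)
    return sum((w - mean) ** 2 for w in widths)

sol, fit = real_coded_ga(unevenness, dim=6, bounds=(0.0, 1.0))
```

Because the elite genome is carried over unchanged, the best fitness is monotonically non-increasing across generations.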

  7. Expert system and process optimization techniques for real-time monitoring and control of plasma processes

    NASA Astrophysics Data System (ADS)

    Cheng, Jie; Qian, Zhaogang; Irani, Keki B.; Etemad, Hossein; Elta, Michael E.

    1991-03-01

    To meet the ever-increasing demand of the rapidly growing semiconductor manufacturing industry, it is critical to have a comprehensive methodology integrating techniques for process optimization, real-time monitoring, and adaptive process control. To this end, we have accomplished an integrated knowledge-based approach combining the latest expert system technology, machine learning methods, and traditional statistical process control (SPC) techniques. This knowledge-based approach is advantageous in that it makes it possible for the tasks of process optimization and adaptive control to be performed consistently and predictably. Furthermore, this approach can be used to construct high-level, qualitative descriptions of processes and thus make process behavior easy to monitor, predict, and control. Two software packages, RIST (Rule Induction and Statistical Testing) and KARSM (Knowledge Acquisition from Response Surface Methodology), have been developed and incorporated with two commercially available packages: G2 (a real-time expert system) and ULTRAMAX (a tool for sequential process optimization).

  8. Optimization of global model composed of radial basis functions using the term-ranking approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Peng; Tao, Chao, E-mail: taochao@nju.edu.cn; Liu, Xiao-Jun

    2014-03-15

    A term-ranking method is put forward to optimize a global model composed of radial basis functions and thereby improve the model's predictability. The effectiveness of the proposed method is examined using numerical simulation and experimental data. Numerical simulations indicate that this method can significantly lengthen the prediction time and decrease the Bayesian information criterion of the model. Application to a real voice signal shows that the optimized global model captures more of the predictable component in chaos-like voice data while simultaneously reducing the predictable component (periodic pitch) in the residual signal.

  9. Fish swarm intelligent to optimize real time monitoring of chips drying using machine vision

    NASA Astrophysics Data System (ADS)

    Hendrawan, Y.; Hawa, L. C.; Damayanti, R.

    2018-03-01

    This study applied a machine vision-based drying monitoring system to optimise the drying process of cassava chips. Its objective is to propose fish swarm intelligent (FSI) optimization algorithms to find the most significant set of image features for predicting the water content of cassava chips during drying with an artificial neural network (ANN) model. Feature selection entails choosing the feature subset that maximizes the prediction accuracy of the ANN. Multi-Objective Optimization (MOO) was used, consisting of prediction-accuracy maximization and feature-subset-size minimization. The results showed that the best feature subset comprised grey mean, L(Lab) mean, a(Lab) energy, red entropy, hue contrast, and grey homogeneity. This subset was tested successfully in the ANN model to describe the relationship between image features and water content during drying, with an R2 between real and predicted data of 0.9.

  10. Key Technology of Real-Time Road Navigation Method Based on Intelligent Data Research

    PubMed Central

    Tang, Haijing; Liang, Yu; Huang, Zhongnan; Wang, Taoyi; He, Lin; Du, Yicong; Ding, Gangyi

    2016-01-01

    Traffic flow prediction plays an important role in route selection. Traditional traffic flow forecasting methods mainly include linear, nonlinear, neural network, and time series analysis methods, all of which have shortcomings. This paper analyzes existing traffic flow prediction algorithms and the characteristics of city traffic flow, and proposes a road traffic flow prediction method based on transfer probability. The method first analyzes the transfer probability upstream of the target road and then predicts the traffic flow at the next time step using the traffic flow equation. The Newton interior-point method is used to obtain the optimal parameter values. Finally, the proposed model is used to predict the traffic flow at the next time step. Compared with existing prediction methods, the proposed model has proven to perform well: it obtains the optimal parameter values faster and has higher prediction accuracy, so it can be used for real-time traffic flow prediction. PMID:27872637
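
The transfer-probability idea above can be sketched in a few lines: estimate, for each upstream road, the fraction of its vehicles that enter the target road, then predict the next flow as the probability-weighted sum of current upstream flows. This is a simplified illustration, not the paper's full model (which fits the parameters with a Newton interior-point method); all numbers are made up.

```python
def transfer_probabilities(counts_to_target, counts_leaving):
    """P(transfer) per upstream road = vehicles observed to enter the
    target road / total vehicles leaving that upstream road."""
    return [t / n if n else 0.0 for t, n in zip(counts_to_target, counts_leaving)]

def predict_next_flow(upstream_flows, probs):
    """Traffic-flow equation (simplified): next flow on the target road is
    the probability-weighted sum of current upstream flows."""
    return sum(f * p for f, p in zip(upstream_flows, probs))

probs = transfer_probabilities([30, 45], [100, 90])   # 30/100 = 0.3, 45/90 = 0.5
flow = predict_next_flow([200, 120], probs)           # 200*0.3 + 120*0.5 = 120
```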

  11. Optimal Reservoir Operation using Stochastic Model Predictive Control

    NASA Astrophysics Data System (ADS)

    Sahu, R.; McLaughlin, D.

    2016-12-01

    Hydropower operations are typically designed to fulfill contracts negotiated with consumers who need reliable energy supplies, despite uncertainties in reservoir inflows. In addition to providing reliable power, the reservoir operator needs to take into account environmental factors such as downstream flooding or compliance with minimum flow requirements. From a dynamical systems perspective, the reservoir operating strategy must cope with conflicting objectives in the presence of random disturbances, and to achieve optimal performance the reservoir system needs to continually adapt to disturbances in real time. Model Predictive Control (MPC) is a real-time control technique that adapts by deriving the reservoir release at each decision time from the current state of the system. Here an ensemble-based version of MPC (SMPC) is applied to a generic reservoir to determine both the optimal power contract, considering future inflow uncertainty, and a real-time operating strategy that attempts to satisfy the contract. Contract selection and real-time operation are coupled in an optimization framework that also defines a Pareto trade-off between the revenue generated from energy production and the environmental damage resulting from uncontrolled reservoir spills. Further insight is provided by a sensitivity analysis of key parameters specified in the SMPC technique. The results demonstrate that SMPC is suitable for multi-objective planning and associated real-time operation of a wide range of hydropower reservoir systems.
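
The ensemble-based decision step can be illustrated with a one-step toy: pick the release that minimizes the *expected* cost over an inflow ensemble, trading contract shortfall against uncontrolled spill. This is a deliberately reduced sketch of the SMPC idea, not the authors' formulation; the cost weights, grid, and numbers are illustrative.

```python
import statistics

def smpc_release(storage, capacity, contract, inflow_ensemble,
                 spill_weight=10.0, releases=None):
    """One receding-horizon step of ensemble SMPC: choose the release that
    minimizes the expected cost over the inflow ensemble."""
    if releases is None:
        releases = [contract * k / 10 for k in range(0, 21)]  # candidate grid
    def expected_cost(r):
        costs = []
        for inflow in inflow_ensemble:
            s = storage + inflow - r
            spill = max(0.0, s - capacity)      # water lost over the spillway
            shortfall = max(0.0, contract - r)  # unmet energy contract
            costs.append(shortfall + spill_weight * spill)
        return statistics.mean(costs)
    return min(releases, key=expected_cost)

# With high storage and a wet ensemble, the controller releases well above
# the contract to avoid expected spill.
r = smpc_release(storage=80, capacity=100, contract=10,
                 inflow_ensemble=[15, 25, 40])
```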

  12. Using string invariants for prediction searching for optimal parameters

    NASA Astrophysics Data System (ADS)

    Bundzel, Marek; Kasanický, Tomáš; Pinčák, Richard

    2016-02-01

    We have developed a novel prediction method based on string invariants. The method does not require learning, but a small set of parameters must be set to achieve optimal performance, for which we have implemented an evolutionary algorithm for parametric optimization. We have tested the performance of the method on artificial and real-world data and compared it to statistical methods and to a number of artificial intelligence methods, using the data and results of a prediction competition as a benchmark. The results show that the method performs well in single-step prediction, but its performance in multiple-step prediction needs to be improved. The method works well for a wide range of parameters.

  13. Real Time Optimal Control of Supercapacitor Operation for Frequency Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Yusheng; Panwar, Mayank; Mohanpurkar, Manish

    2016-07-01

    Supercapacitors are gaining wider application in power systems due to their fast dynamic response, and utilizing them through power electronics interfaces for power compensation is a proven, effective technique. Applications such as frequency restoration become practical if the cost of supercapacitor maintenance and the energy loss in the power electronics interfaces are addressed; it is infeasible to use traditional optimization control methods to mitigate the impacts of frequent cycling. This paper proposes a Front End Controller (FEC) using Generalized Predictive Control featuring real-time receding optimization. The optimization constraints are based on cost and thermal management to enhance the utilization efficiency of the supercapacitors. A rigorous mathematical derivation is conducted, and test results acquired from a Digital Real Time Simulator are provided to demonstrate effectiveness.

  14. Customer demand prediction of service-oriented manufacturing using the least square support vector machine optimized by particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Cao, Jin; Jiang, Zhibin; Wang, Kangzhou

    2017-07-01

    Many nonlinear customer satisfaction-related factors significantly influence future customer demand in service-oriented manufacturing (SOM). To address this issue and enhance prediction accuracy, this article develops a novel customer demand prediction approach for SOM that combines the phase space reconstruction (PSR) technique with an optimized least square support vector machine (LSSVM). First, the prediction sample space is reconstructed by the PSR to enrich the time-series dynamics of the limited data sample. Then, the generalization and learning ability of the LSSVM are improved by a hybrid polynomial and radial basis function kernel. Finally, the key parameters of the LSSVM are optimized by the particle swarm optimization algorithm. In a real case study, customer demand prediction for an air conditioner compressor is implemented, and the effectiveness and validity of the proposed approach are demonstrated by comparison with other classical prediction approaches.
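
The PSR step above is a standard Takens time-delay embedding: each scalar sample is expanded into a vector of lagged values, which becomes the regression input for the downstream predictor (LSSVM in the paper). A minimal sketch, with illustrative data:

```python
def phase_space_reconstruct(series, dim=3, tau=1):
    """Takens time-delay embedding: each sample becomes the vector
    (x[t], x[t+tau], ..., x[t+(dim-1)*tau]). A predictor is then trained
    to map the first dim-1 coordinates to the last one."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[t + k * tau] for k in range(dim)) for t in range(n)]

X = phase_space_reconstruct([1, 2, 3, 4, 5, 6], dim=3, tau=2)
# -> [(1, 3, 5), (2, 4, 6)]
```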

  15. Computer program to minimize prediction error in models from experiments with 16 hypercube points and 0 to 6 center points

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1982-01-01

    A previous report described a backward-deletion procedure of model selection that was optimized for minimum prediction error and that used a multiparameter combination of the F-distribution and an order-statistics distribution of Cochran's. A computer program is described that applies the previously optimized procedure to real data, and its use is illustrated by examples.

  16. Near Real-Time Optimal Prediction of Adverse Events in Aviation Data

    NASA Technical Reports Server (NTRS)

    Martin, Rodney Alexander; Das, Santanu

    2010-01-01

    The prediction of anomalies or adverse events is a challenging task, and there are a variety of methods which can be used to address the problem. In this paper, we demonstrate how to recast the anomaly prediction problem into a form whose solution is accessible as a level-crossing prediction problem. The level-crossing prediction problem has an elegant, optimal, yet untested solution under certain technical constraints, and only when the appropriate modeling assumptions are made. As such, we will thoroughly investigate the resilience of these modeling assumptions, and show how they affect final performance. Finally, the predictive capability of this method will be assessed by quantitative means, using both validation and test data containing anomalies or adverse events from real aviation data sets that have previously been identified as operationally significant by domain experts. It will be shown that the formulation proposed yields a lower false alarm rate on average than competing methods based on similarly advanced concepts, and a higher correct detection rate than a standard method based upon exceedances that is commonly used for prediction.

  17. Real time optimal guidance of low-thrust spacecraft: an application of nonlinear model predictive control.

    PubMed

    Arrieta-Camacho, Juan José; Biegler, Lorenz T

    2005-12-01

    Real-time optimal guidance is considered for a class of low-thrust spacecraft. In particular, nonlinear model predictive control (NMPC) is utilized for computing the optimal control actions required to transfer a spacecraft from a low Earth orbit to a mission orbit. The NMPC methodology presented is able to cope with unmodeled disturbances. The dynamics of the transfer are modeled using a set of modified equinoctial elements, because they do not exhibit singularities for zero inclination and zero eccentricity. The idea behind NMPC is the repeated solution of optimal control problems: at each time step, a new control action is computed. The optimal control problem is solved using a direct method, fully discretizing the equations of motion. The large-scale nonlinear program resulting from the discretization procedure is solved using IPOPT, a primal-dual interior point algorithm. Stability and robustness characteristics of the NMPC algorithm are reviewed. A numerical example, the transfer from low Earth orbit to a Molniya orbit, encourages further development of the proposed methodology.
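
The receding-horizon principle behind NMPC can be shown on a toy scalar system (nothing like the equinoctial dynamics or IPOPT solve in the paper; this is purely a sketch of the loop structure): at every step the finite-horizon problem is re-solved and only the first action is applied, which is what lets the controller absorb unmodeled disturbances.

```python
from itertools import product

def nmpc_step(x, target, controls=(-1.0, 0.0, 1.0), horizon=3):
    """One NMPC iteration on a toy scalar system x' = x + u: enumerate
    control sequences over the horizon, score each by accumulated tracking
    error plus control effort, and return only the FIRST action of the
    best sequence (the receding-horizon principle)."""
    def cost(seq):
        s, total = x, 0.0
        for u in seq:
            s = s + u                       # stand-in for the real dynamics
            total += abs(s - target) + 0.1 * u * u
        return total
    best = min(product(controls, repeat=horizon), key=cost)
    return best[0]

# Closed loop with an unmodeled disturbance at step 2; because the optimal
# control problem is re-solved every step, the controller recovers.
x, target = 0.0, 2.0
for step in range(6):
    x = x + nmpc_step(x, target)
    if step == 2:
        x -= 1.0                            # disturbance
```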

  18. Artificial neural networks as alternative tool for minimizing error predictions in manufacturing ultradeformable nanoliposome formulations.

    PubMed

    León Blanco, José M; González-R, Pedro L; Arroyo García, Carmen Martina; Cózar-Bernal, María José; Calle Suárez, Marcos; Canca Ortiz, David; Rabasco Álvarez, Antonio María; González Rodríguez, María Luisa

    2018-01-01

    This work aimed to determine whether artificial neural networks (ANN) implementing backpropagation algorithms with default settings can generate better predictive models than multiple linear regression (MLR) analysis. The hypothesis was tested on timolol-loaded liposomes, with causal factors used as the training data fed into the ANN. The number of training cycles was tuned to optimize the performance of the ANN, minimizing the error between predicted and real response values in the training step. Training was stopped at 10 000 training cycles with 80% of the pattern values, because at this point the ANN generalizes best. Minimum validation error was achieved with 12 hidden neurons in a single layer. MLR also has considerable prediction ability, with errors between predicted and real values lower than 1% for some of the parameters evaluated; thus, the performance of the ANN was compared to that of the MLR using a factorial design. Optimal formulations were identified by minimizing the distance between measured and theoretical parameters and estimating the prediction errors. The results indicate that the ANN shows much better predictive ability than the MLR model. These findings demonstrate the increased efficiency of combining ANN with design of experiments, compared to conventional MLR modeling techniques.

  19. How to determine an optimal threshold to classify real-time crash-prone traffic conditions?

    PubMed

    Yang, Kui; Yu, Rongjie; Wang, Xuesong; Quddus, Mohammed; Xue, Lifang

    2018-08-01

    One proactive approach to reducing traffic crashes is to identify hazardous traffic conditions that may lead to a crash, known as real-time crash prediction. Threshold selection is an essential step: it provides the cut-off point for the posterior probability used to separate potential crash warnings from normal traffic conditions, after a crash risk evaluation model outputs the probability of a crash occurring given a specific traffic condition. There is, however, a dearth of research on how to effectively determine an optimal threshold; the few studies that discuss the predictive performance of such models choose thresholds subjectively, and subjective methods cannot automatically identify optimal thresholds under varying traffic and weather conditions in real applications. A theoretical method for selecting the threshold is therefore necessary to avoid subjective judgments, and the purpose of this study is to provide one that automatically identifies the optimal threshold. Considering the random effects of variable factors across roadway segments, a mixed logit model was used to develop the crash risk evaluation model and evaluate crash risk. Cross-entropy, between-class variance, and other theories were investigated to empirically identify the optimal threshold, and K-fold cross-validation was used to validate the proposed threshold selection methods against several evaluation criteria. The results indicate that (i) the mixed logit model obtains good performance, and (ii) the classification performance of the threshold selected by the minimum cross-entropy method outperforms the other methods according to the criteria. This method can automatically identify thresholds in crash prediction by minimizing the cross entropy between the original dataset, with its continuous probability of a crash occurring, and the binarized dataset obtained after thresholding. Copyright © 2018 Elsevier Ltd. All rights reserved.
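
The minimum cross-entropy criterion can be sketched in the style of Li's thresholding: for each candidate cut-off, binarize the predicted probabilities into two groups and score the information lost when each value is replaced by its group mean; keep the cut-off with the smallest loss. This is an illustrative simplification, not the authors' exact estimator, and the risk scores below are invented.

```python
import math

def min_cross_entropy_threshold(probs, grid=None):
    """Li-style minimum cross-entropy thresholding on predicted crash
    probabilities: minimize -(sum_lo * log(mu_lo) + sum_hi * log(mu_hi)),
    i.e. the cross entropy between the continuous values and their
    two-level (group-mean) reconstruction; the p*log(p) term is constant
    in the threshold and is dropped."""
    if grid is None:
        grid = sorted(set(probs))[1:]           # candidate cut points
    best_t, best_eta = None, float("inf")
    for t in grid:
        lo = [p for p in probs if p < t]
        hi = [p for p in probs if p >= t]
        if not lo or not hi:
            continue
        mu0 = sum(lo) / len(lo)
        mu1 = sum(hi) / len(hi)
        eta = -(sum(lo) * math.log(mu0) + sum(hi) * math.log(mu1))
        if eta < best_eta:
            best_t, best_eta = t, eta
    return best_t

# Bimodal risk scores: normal conditions near 0.1, crash-prone near 0.8.
scores = [0.05, 0.1, 0.12, 0.15, 0.75, 0.8, 0.85]
t = min_cross_entropy_threshold(scores)        # cut falls between the modes
```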

  20. 'It is Time to Prepare the Next patient' Real-Time Prediction of Procedure Duration in Laparoscopic Cholecystectomies.

    PubMed

    Guédon, Annetje C P; Paalvast, M; Meeuwsen, F C; Tax, D M J; van Dijke, A P; Wauben, L S G L; van der Elst, M; Dankelman, J; van den Dobbelsteen, J J

    2016-12-01

    Operating Room (OR) scheduling is crucial for efficient use of ORs. Currently, the predicted durations of surgical procedures are unreliable, and OR schedulers have to follow the progress of procedures in order to update the daily planning accordingly. The schedulers often acquire the needed information through verbal communication with the OR staff, which causes undesired interruptions of the surgical process. The aim of this study was to develop a system that predicts the remaining procedure duration in real time, and to test this prediction system for reliability and usability in an OR. The prediction system was based on the activation pattern of a single piece of equipment, the electrosurgical device. It was tested during 21 laparoscopic cholecystectomies, in which the activation of the electrosurgical device was recorded and processed in real time using pattern recognition methods. The remaining procedure duration was estimated, and the optimal timing to prepare the next patient for surgery was communicated to the OR staff. The mean absolute error was smaller for the prediction system (14 min) than for the OR staff (19 min). The OR staff doubted whether the prediction system could take all relevant factors into account, but were positive about its potential to shorten waiting times for patients. The prediction system is a promising tool to automatically and objectively predict the remaining procedure duration, and thereby achieve optimal OR scheduling and streamline patient flow from the nursing department to the OR.

  1. Lifting scheme-based method for joint coding 3D stereo digital cinema with luminance correction and optimized prediction

    NASA Astrophysics Data System (ADS)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

    Reproducing natural, real scenes as we see them in the real world every day is becoming more and more popular, and stereoscopic and multi-view techniques are used to this end. However, because more information is displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed in which the original left and right images are jointly coded. The main idea is to optimally exploit the correlation between the two images, by designing an efficient transform that reduces the redundancy in the stereo image pair. This approach was inspired by the Lifting Scheme (LS). The novelty in our work is that the prediction step is replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for both lossless and lossy coding. Experimental results show improvements in performance and complexity compared to recently proposed methods.

  2. Local sharpening and subspace wavefront correction with predictive dynamic digital holography

    NASA Astrophysics Data System (ADS)

    Sulaiman, Sennan; Gibson, Steve

    2017-09-01

    Digital holography holds several advantages over conventional imaging and wavefront sensing, chief among these being significantly fewer and simpler optical components and the retrieval of the complex field. Consequently, many imaging and sensing applications, including microscopy and optical tweezing, have turned to digital holography. A significant obstacle for digital holography in real-time applications, such as wavefront sensing for high energy laser systems and high speed imaging for target tracking, is that it is computationally intensive: it requires iterative virtual wavefront propagation and hill-climbing to optimize some sharpness criterion. It has been shown recently that minimum-variance wavefront prediction can be integrated with digital holography and image sharpening to significantly reduce the large number of costly sharpening iterations required to achieve near-optimal wavefront correction. This paper demonstrates further gains in computational efficiency with localized sharpening in conjunction with predictive dynamic digital holography for real-time applications. The method optimizes the sharpness of local regions in a detector plane by parallel, independent wavefront correction on reduced-dimension subspaces of the complex field in a spectral plane.

  3. Real estate value prediction using multivariate regression models

    NASA Astrophysics Data System (ADS)

    Manjula, R.; Jain, Shubham; Srivastava, Sharad; Rajiv Kher, Pranav

    2017-11-01

    The real estate market is one of the most competitive in terms of pricing, and prices tend to vary significantly based on many factors; hence it is a prime field in which to apply machine learning to optimize and predict prices with high accuracy. In this paper, we therefore present the important features to use when predicting housing prices with good accuracy, and describe regression models using various features chosen to lower the residual sum of squares error. When using features in a regression model, some feature engineering is required for better prediction: often a set of features (multiple regression) or polynomial regression (applying various powers to the features) is used to improve the model fit. Because these models are expected to be susceptible to overfitting, ridge regression is used to reduce it. This paper thus aims at the best application of regression models, in addition to other techniques, to optimize the result.
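
The ridge-regression remedy mentioned above has a simple closed form: the penalty term alpha*I shrinks the coefficients that multiple or polynomial regression would otherwise inflate. A minimal sketch with invented toy housing numbers (area in m^2, price exactly 3x area, so the fit is easy to check):

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X^T X + alpha*I)^(-1) X^T y.
    The alpha*I term shrinks the coefficients, curbing overfitting."""
    X = np.column_stack([np.ones(len(X)), X])    # intercept column
    I = np.eye(X.shape[1])
    I[0, 0] = 0.0                                # do not penalize the intercept
    return np.linalg.solve(X.T @ X + alpha * I, X.T @ y)

# Hypothetical data for illustration: price = 3 * area.
area = np.array([[50.0], [80.0], [110.0], [140.0]])
price = np.array([150.0, 240.0, 330.0, 420.0])
w = ridge_fit(area, price, alpha=0.01)
pred = w[0] + w[1] * 100.0                       # predict a 100 m^2 home, ~300
```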

  4. Real data assimilation for optimization of frictional parameters and prediction of afterslip in the 2003 Tokachi-oki earthquake inferred from slip velocity by an adjoint method

    NASA Astrophysics Data System (ADS)

    Kano, Masayuki; Miyazaki, Shin'ichi; Ishikawa, Yoichi; Hiyoshi, Yoshihisa; Ito, Kosuke; Hirahara, Kazuro

    2015-10-01

Data assimilation is a technique that optimizes the parameters used in a numerical model under the constraint of the model dynamics, achieving a better fit to observations. The optimized parameters can be utilized for subsequent prediction with the numerical model, and the predicted physical variables are presumably closer to observations that will become available in the future, at least compared to those obtained without optimization through data assimilation. In this work, an adjoint data assimilation system is developed for optimizing a relatively large number of spatially inhomogeneous frictional parameters during the afterslip period, in which the physical constraints are a quasi-dynamic equation of motion and a laboratory-derived rate- and state-dependent friction law that describe the temporal evolution of slip velocity at subduction zones. The observed variable is the estimated slip velocity on the plate interface. Before applying this method to real data assimilation for the afterslip of the 2003 Tokachi-oki earthquake, a synthetic data assimilation experiment is conducted to examine the feasibility of optimizing the frictional parameters in the afterslip area. It is confirmed that the current system is capable of optimizing the frictional parameters A-B, A and L by adopting the physical constraint based on a numerical model, provided observations capture the acceleration and decaying phases of slip on the plate interface. On the other hand, it is unlikely that the frictional parameters can be constrained in regions where the amplitude of afterslip is less than 1.0 cm/day. Next, real data assimilation for the 2003 Tokachi-oki earthquake is conducted to incorporate slip velocity data inferred from time-dependent inversion of Global Navigation Satellite System time series. The optimized values of A-B, A and L are O(10 kPa), O(10^2 kPa) and O(10 mm), respectively. 
The optimized frictional parameters yield a better fit to the observations and better prediction skill for slip velocity afterwards. A further experiment also shows the importance of employing a fine-mesh model. This work will contribute to further understanding of the frictional properties on plate interfaces and lead to a forecasting system that provides useful information on the possibility of consequent earthquakes.
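    The adjoint machinery above is specific to the friction law and fault model, but the underlying idea of fitting a model parameter to observed slip velocity can be sketched with a toy decay model. This is a minimal illustration, not the paper's adjoint system: the exponential afterslip model, the parameter names and all values are assumptions for demonstration, and the gradient is taken by finite differences rather than an adjoint.

    ```python
    import math

    def afterslip_velocity(v0, tau, times):
        """Toy afterslip model: slip velocity decaying exponentially in time."""
        return [v0 * math.exp(-t / tau) for t in times]

    def assimilate_tau(observations, times, v0, tau_guess, lr=0.05, steps=2000):
        """Fit the decay parameter tau by gradient descent on the data misfit,
        a simple stand-in for adjoint-based parameter optimization."""
        tau = tau_guess
        eps = 1e-6
        for _ in range(steps):
            def misfit(t_):
                pred = afterslip_velocity(v0, t_, times)
                return sum((p - o) ** 2 for p, o in zip(pred, observations))
            # finite-difference gradient of the squared misfit w.r.t. tau
            grad = (misfit(tau + eps) - misfit(tau - eps)) / (2 * eps)
            tau -= lr * grad
        return tau

    times = [i * 0.5 for i in range(20)]
    true_obs = afterslip_velocity(1.0, 3.0, times)   # synthetic "observations"
    tau_hat = assimilate_tau(true_obs, times, v0=1.0, tau_guess=1.5)
    print(round(tau_hat, 2))
    ```

    With noise-free synthetic observations the recovered parameter matches the true value, which mirrors the synthetic feasibility experiment described in the abstract.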

  5. Reducing usage of the computational resources by event driven approach to model predictive control

    NASA Astrophysics Data System (ADS)

    Misik, Stefan; Bradac, Zdenek; Cela, Arben

    2017-08-01

This paper deals with real-time, optimal control of dynamic systems while also considering the constraints to which these systems may be subject. The main objective of this work is to propose a simple modification of the existing Model Predictive Control approach to better suit the needs of computationally resource-constrained real-time systems. An example using a model of a mechanical system is presented, and the performance of the proposed method is evaluated in a simulated environment.
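    The event-driven idea can be sketched in a few lines: re-solve the finite-horizon problem only when the measured state deviates from the controller's own prediction, otherwise reuse the stored plan. This is a minimal sketch under assumed dynamics (a scalar linear system with quadratic cost, solved by a backward Riccati recursion), not the paper's specific modification; all constants are illustrative.

    ```python
    def riccati_gains(a, b, q, r, horizon):
        """Backward Riccati recursion for the scalar system x[k+1] = a*x + b*u
        with stage cost q*x^2 + r*u^2; returns gains in forward order."""
        p = q
        gains = []
        for _ in range(horizon):
            k = (a * b * p) / (r + b * b * p)
            p = q + a * a * p - k * (a * b * p)
            gains.append(k)
        return gains[::-1]

    def event_driven_mpc(x0, steps, a=1.2, b=1.0, q=1.0, r=0.1,
                         horizon=10, threshold=1e-3):
        """Re-solve only when the state strays from the nominal prediction
        by more than `threshold` (or the stored plan runs out)."""
        x, plan, predicted, solves = x0, [], None, 0
        for _ in range(steps):
            if predicted is None or abs(x - predicted) > threshold or not plan:
                plan = riccati_gains(a, b, q, r, horizon)
                solves += 1
            k = plan.pop(0)
            u = -k * x
            predicted = a * x + b * u   # nominal one-step prediction
            x = predicted               # no disturbance here, so it is exact
        return x, solves

    x_final, n_solves = event_driven_mpc(x0=5.0, steps=30)
    print(abs(x_final) < 1e-3, n_solves)
    ```

    With no disturbance the plan is reused until exhausted, so 30 steps need only 3 solves instead of 30, which is the computational saving the event-driven approach targets.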

  6. Feed-Forward Neural Network Soft-Sensor Modeling of Flotation Process Based on Particle Swarm Optimization and Gravitational Search Algorithm

    PubMed Central

    Wang, Jie-Sheng; Han, Shuang

    2015-01-01

For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, a feed-forward neural network (FNN) based soft-sensor model optimized by a hybrid algorithm combining the particle swarm optimization (PSO) algorithm and the gravitational search algorithm (GSA) is proposed. Although GSA has good optimization capability, it converges slowly and easily falls into local optima. In this paper, the velocity and position vectors of GSA are therefore adjusted by the PSO algorithm in order to improve its convergence speed and prediction accuracy. Finally, the proposed hybrid algorithm is adopted to optimize the parameters of the FNN soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy for the concentrate grade and tailings recovery rate, meeting the online soft-sensor requirements of real-time control in the flotation process. PMID:26583034
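    The core of such a hybrid is the velocity update: a gravitational acceleration term (GSA, with better agents acting as heavier masses) combined with PSO's personal-best and global-best attraction. The sketch below is a generic PSO+GSA hybrid on a test function, not the paper's exact formulation; the constants and the decay schedule of the gravitational constant are illustrative assumptions.

    ```python
    import random
    random.seed(0)

    def sphere(x):
        return sum(v * v for v in x)

    def pso_gsa(obj, dim=2, n=20, iters=200, w=0.5, c1=0.5, c2=1.5):
        """Hybrid velocity update: GSA gravitational pull toward heavier
        (better) agents plus PSO personal/global attraction terms."""
        pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
        vel = [[0.0] * dim for _ in range(n)]
        pbest = [p[:] for p in pos]
        gbest = min(pos, key=obj)[:]
        g0 = 1.0
        for t in range(iters):
            fits = [obj(p) for p in pos]
            worst, best_fit = max(fits), min(fits)
            # GSA masses: better agents are heavier
            masses = [(worst - f) / (worst - best_fit + 1e-12) for f in fits]
            msum = sum(masses) + 1e-12
            masses = [m / msum for m in masses]
            g = g0 * (1 - t / iters)          # decaying gravitational constant
            for i in range(n):
                for d in range(dim):
                    acc = sum(random.random() * g * masses[j] *
                              (pos[j][d] - pos[i][d])
                              for j in range(n) if j != i)
                    vel[i][d] = (w * vel[i][d] + acc
                                 + c1 * random.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * random.random() * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                if obj(pos[i]) < obj(pbest[i]):
                    pbest[i] = pos[i][:]
                if obj(pos[i]) < obj(gbest):
                    gbest = pos[i][:]
        return gbest

    best = pso_gsa(sphere)
    print(sphere(best) < 1e-2)
    ```

    The PSO attraction terms are exactly the "adjustment" mentioned in the abstract: they pull the gravitational search toward remembered good solutions, speeding up convergence.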

  7. Optimal Weight Assignment for a Chinese Signature File.

    ERIC Educational Resources Information Center

    Liang, Tyne; And Others

    1996-01-01

    Investigates the performance of a character-based Chinese text retrieval scheme in which monogram keys and bigram keys are encoded into document signatures. Tests and verifies the theoretical predictions of the optimal weight assignments and the minimal false hit rate in experiments using a real Chinese corpus for disyllabic queries of different…
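    A signature file of this kind superimposes hashed bits for each key into a fixed-width bit vector, with the "weight" being how many bits each key type sets. The sketch below is a generic superimposed-coding scheme with made-up widths and weights, not the paper's optimal assignment; the bit counts and hash choice are illustrative assumptions.

    ```python
    import hashlib

    WIDTH = 256        # signature width in bits (illustrative)
    MONO_BITS = 2      # "weight" = bits set per monogram key (assumed)
    BI_BITS = 3        # "weight" = bits set per bigram key (assumed)

    def keys(text):
        """Monogram and bigram keys of a (space-free) Chinese string."""
        mono = [(c, MONO_BITS) for c in text]
        bi = [(text[i:i + 2], BI_BITS) for i in range(len(text) - 1)]
        return mono + bi

    def signature(text):
        sig = 0
        for key, weight in keys(text):
            for salt in range(weight):
                h = hashlib.md5(f"{salt}:{key}".encode("utf-8")).digest()
                sig |= 1 << (int.from_bytes(h[:4], "big") % WIDTH)
        return sig

    def may_contain(doc_sig, query):
        q = signature(query)
        return doc_sig & q == q    # all query bits present => candidate doc

    doc = "信息检索系统"
    sig = signature(doc)
    print(may_contain(sig, "检索"))   # true hit: always passes the filter
    print(may_contain(sig, "银行"))   # unrelated query: usually filtered out,
                                      # but may occasionally be a false hit
    ```

    False hits arise when an absent key's bits happen to all be set by other keys; the weight assignment (bits per key type) is exactly the knob the paper optimizes to minimize that false-hit rate.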

  8. Optimization of monitoring networks based on uncertainty quantification of model predictions of contaminant transport

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.; Harp, D.

    2010-12-01

The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; and (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, and (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of a newly developed optimization technique based on coupling of Particle Swarm and Levenberg-Marquardt optimization methods, which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. 
The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
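    One of the sampling techniques named above, Latin-Hypercube sampling, is easy to illustrate: each parameter range is cut into as many strata as samples, and each stratum is sampled exactly once, with strata shuffled independently per dimension. This is a basic LHS sketch, not MADS's Improved Distributed Sampling variant.

    ```python
    import random
    random.seed(1)

    def latin_hypercube(n_samples, bounds):
        """Basic Latin-Hypercube sampling: one point per stratum per
        dimension, with strata shuffled to decorrelate the dimensions."""
        dim = len(bounds)
        samples = [[0.0] * dim for _ in range(n_samples)]
        for d, (lo, hi) in enumerate(bounds):
            strata = list(range(n_samples))
            random.shuffle(strata)
            width = (hi - lo) / n_samples
            for i in range(n_samples):
                samples[i][d] = lo + (strata[i] + random.random()) * width
        return samples

    pts = latin_hypercube(10, [(0.0, 1.0), (-5.0, 5.0)])
    cols = list(zip(*pts))
    # exactly one point falls in each of the 10 strata of each dimension
    print(sorted(int(10 * x) for x in cols[0]))
    ```

    Compared with plain Monte Carlo, this stratification guarantees coverage of each parameter's full range even with few model runs, which is why it is favored for expensive uncertainty analyses.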

  9. Optimization and real-time control for laser treatment of heterogeneous soft tissues.

    PubMed

    Feng, Yusheng; Fuentes, David; Hawkins, Andrea; Bass, Jon M; Rylander, Marissa Nichole

    2009-01-01

Predicting the outcome of thermotherapies in cancer treatment requires an accurate characterization of the bioheat transfer processes in soft tissues. Due to the biological and structural complexity of tumor (soft tissue) composition and vasculature, it is often very difficult to obtain reliable tissue properties, which are one of the key factors for accurate prediction of treatment outcome. Efficient algorithms employing in vivo thermal measurements to determine heterogeneous thermal tissue properties, in conjunction with a detailed sensitivity analysis, can produce essential information for model development and optimal control. The goals of this paper are to present a general formulation of the bioheat transfer equation for heterogeneous soft tissues; review models and algorithms developed for cell damage, heat shock proteins, and soft tissues with nanoparticle inclusions; and demonstrate an overall computational strategy for developing a laser treatment framework with the ability to perform real-time robust calibration and optimal control. This computational strategy can be applied to other thermotherapies using heat sources such as radio frequency or high-intensity focused ultrasound.
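    The classical starting point for such models is the Pennes bioheat equation, which adds a blood-perfusion sink term to ordinary heat conduction. The sketch below takes one explicit finite-difference step of a 1-D Pennes equation with a laser-like source; all parameter values are illustrative placeholders, not tissue-calibrated properties, and the real heterogeneous formulation would make them spatially varying.

    ```python
    def pennes_step(T, dt, dx, k=0.5, rho_c=4e6, w_b=5e3, T_a=37.0, q_laser=None):
        """One explicit step of a 1-D Pennes bioheat equation:
        rho*c dT/dt = k d2T/dx2 + w_b*(T_a - T) + q.
        Boundary nodes are held fixed (Dirichlet)."""
        q = q_laser or [0.0] * len(T)
        new = T[:]
        for i in range(1, len(T) - 1):
            diff = k * (T[i - 1] - 2 * T[i] + T[i + 1]) / dx ** 2
            perf = w_b * (T_a - T[i])          # perfusion cools toward arterial T
            new[i] = T[i] + dt * (diff + perf + q[i]) / rho_c
        return new

    # heat the center of a 1 cm strip of tissue with a constant source
    n, dx, dt = 21, 0.0005, 0.05
    T = [37.0] * n
    src = [0.0] * n
    src[n // 2] = 5e6          # W/m^3, illustrative laser deposition
    for _ in range(2000):
        T = pennes_step(T, dt, dx, q_laser=src)
    print(T[n // 2] > 37.0, max(T) == T[n // 2])
    ```

    The perfusion term is what makes parameter calibration matter: an error in `w_b` shifts the predicted peak temperature, and hence the predicted thermal dose, which motivates the in vivo calibration strategy described above.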

  10. Optimization of the resources management in fighting wildfires.

    PubMed

    Martin-Fernández, Susana; Martínez-Falero, Eugenio; Pérez-González, J Manuel

    2002-09-01

    Wildfires lead to important economic, social, and environmental losses, especially in areas of Mediterranean climate where they are of a high intensity and frequency. Over the past 30 years there has been a dramatic surge in the development and use of fire spread models. However, given the chaotic nature of environmental systems, it is very difficult to develop real-time fire-extinguishing models. This article proposes a method of optimizing the performance of wildfire fighting resources such that losses are kept to a minimum. The optimization procedure includes discrete simulation algorithms and Bayesian optimization methods for discrete and continuous problems (simulated annealing and Bayesian global optimization). Fast calculus algorithms are applied to provide optimization outcomes in short periods of time such that the predictions of the model and the real behavior of the fire, combat resources, and meteorological conditions are similar. In addition, adaptive algorithms take into account the chaotic behavior of wildfire so that the system can be updated with data corresponding to the real situation to obtain a new optimum solution. The application of this method to the Northwest Forest of Madrid (Spain) is also described. This application allowed us to check that it is a helpful tool in the decision-making process.
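    One of the discrete methods named above, simulated annealing, can be illustrated on a toy version of the allocation problem: distribute a fixed number of suppression resources over fire fronts to minimize total loss. The loss function, intensities and cooling schedule below are made-up stand-ins for the article's simulation-based objective.

    ```python
    import math, random
    random.seed(2)

    def front_loss(intensity, r):
        """Toy loss for a front of given intensity with r resources assigned."""
        return intensity / (1.0 + r)

    def total_loss(assignment, intensities):
        return sum(front_loss(I, r) for I, r in zip(intensities, assignment))

    def anneal(intensities, n_resources, iters=20000, t0=5.0):
        """Simulated annealing over discrete assignments: move one resource
        between fronts, accepting uphill moves with Boltzmann probability."""
        n = len(intensities)
        cur = [n_resources // n] * n
        cur[0] += n_resources - sum(cur)
        best = cur[:]
        for step in range(iters):
            t = t0 * (1 - step / iters) + 1e-9    # linear cooling
            i, j = random.sample(range(n), 2)
            if cur[i] == 0:
                continue
            cand = cur[:]
            cand[i] -= 1
            cand[j] += 1
            delta = total_loss(cand, intensities) - total_loss(cur, intensities)
            if delta < 0 or random.random() < math.exp(-delta / t):
                cur = cand
            if total_loss(cur, intensities) < total_loss(best, intensities):
                best = cur[:]
        return best

    intensities = [10.0, 4.0, 1.0]     # three fronts, illustrative intensities
    best = anneal(intensities, 12)
    print(best, sum(best) == 12)
    ```

    The annealed assignment concentrates resources on the most intense front while never abandoning the others, which matches the qualitative behavior one expects from the loss function.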

  11. Optimization of the Resources Management in Fighting Wildfires

    NASA Astrophysics Data System (ADS)

    Martin-Fernández, Susana; Martínez-Falero, Eugenio; Pérez-González, J. Manuel

    2002-09-01

    Wildfires lead to important economic, social, and environmental losses, especially in areas of Mediterranean climate where they are of a high intensity and frequency. Over the past 30 years there has been a dramatic surge in the development and use of fire spread models. However, given the chaotic nature of environmental systems, it is very difficult to develop real-time fire-extinguishing models. This article proposes a method of optimizing the performance of wildfire fighting resources such that losses are kept to a minimum. The optimization procedure includes discrete simulation algorithms and Bayesian optimization methods for discrete and continuous problems (simulated annealing and Bayesian global optimization). Fast calculus algorithms are applied to provide optimization outcomes in short periods of time such that the predictions of the model and the real behavior of the fire, combat resources, and meteorological conditions are similar. In addition, adaptive algorithms take into account the chaotic behavior of wildfire so that the system can be updated with data corresponding to the real situation to obtain a new optimum solution. The application of this method to the Northwest Forest of Madrid (Spain) is also described. This application allowed us to check that it is a helpful tool in the decision-making process.

  12. Bi-objective integer programming for RNA secondary structure prediction with pseudoknots.

    PubMed

    Legendre, Audrey; Angel, Eric; Tahi, Fariza

    2018-01-15

RNA structure prediction is an important field in bioinformatics, and numerous methods and tools have been proposed. Pseudoknots are specific motifs of RNA secondary structures that are difficult to predict. Almost all existing methods are based on a single model and return one solution, often missing the real structure. An alternative approach would be to combine different models and return a (small) set of solutions, maximizing its quality and diversity in order to increase the probability that it contains the real structure. We propose here an original method for predicting RNA secondary structures with pseudoknots, based on integer programming. We developed a generic bi-objective integer programming algorithm that returns optimal and sub-optimal solutions while optimizing two models simultaneously. This algorithm was then applied to the combination of two known models of RNA secondary structure prediction, namely MEA and MFE. The resulting tool, called BiokoP, is compared with the other methods in the literature. The results show that the best solution (the structure with the highest F1-score) is, in most cases, given by BiokoP. Moreover, the results of BiokoP are homogeneous, regardless of the pseudoknot type or the presence or absence of pseudoknots. Indeed, the F1-scores are always higher than 70% for any number of solutions returned. The results obtained by BiokoP show that combining the MEA and MFE models, as well as returning several optimal and several sub-optimal solutions, improves the prediction of secondary structures. One perspective of our work is to combine better mono-criterion models, in particular a model based on the comparative approach with the MEA and MFE models. This leads to the future development of a new multi-objective algorithm to combine more than two models. BiokoP is available on the EvryRNA platform: https://EvryRNA.ibisc.univ-evry.fr .
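    The key bi-objective notion is Pareto optimality: a candidate structure is kept if no other candidate scores at least as well under both models and strictly better under one. The sketch below computes a Pareto front over an explicit candidate list with two made-up scoring functions standing in for MEA and MFE; the real method solves this over an implicit, exponentially large set via integer programming.

    ```python
    def pareto_front(candidates, f1, f2):
        """Non-dominated candidates when maximizing both objectives."""
        front = []
        for c in candidates:
            dominated = any(
                f1(o) >= f1(c) and f2(o) >= f2(c) and
                (f1(o) > f1(c) or f2(o) > f2(c))
                for o in candidates
            )
            if not dominated:
                front.append(c)
        return front

    # toy "structures" scored by two stand-ins for MEA and MFE
    structures = {"s1": (0.9, -5.0), "s2": (0.7, -9.0),
                  "s3": (0.8, -8.0), "s4": (0.6, -6.0)}
    mea = lambda s: structures[s][0]
    mfe = lambda s: -structures[s][1]      # lower free energy = better
    front = pareto_front(list(structures), mea, mfe)
    print(sorted(front))                   # → ['s1', 's2', 's3']
    ```

    Returning the whole front (plus sub-optimal layers behind it) is what yields a small, diverse solution set rather than a single guess.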

  13. Smart EV Energy Management System to Support Grid Services

    NASA Astrophysics Data System (ADS)

    Wang, Bin

Under smart grid scenarios, advanced sensing and metering technologies have been applied to the legacy power grid to improve system observability and real-time situational awareness. Meanwhile, an increasing amount of distributed energy resources (DERs), such as renewable generation, electric vehicles (EVs) and battery energy storage systems (BESS), is being integrated into the power system. However, the integration of EVs, which can be modeled as controllable mobile energy devices, brings both challenges and opportunities to grid planning and energy management, due to the intermittency of renewable generation, uncertainties of EV driver behaviors, etc. This dissertation aims to solve the real-time EV energy management problem in order to improve overall grid efficiency, reliability and economics, using online and predictive optimization strategies. Most previous research on EV energy management strategies and algorithms is based on simplified models with the unrealistic assumption that EV charging behaviors are perfectly known or follow known distributions, including arrival time, departure time and energy consumption. These approaches fail to obtain optimal solutions in real time because of the system uncertainties. Moreover, there is a lack of data-driven strategies that perform online and predictive scheduling of EV charging behaviors under microgrid scenarios. Therefore, we develop an online predictive EV scheduling framework, considering uncertainties of renewable generation, building load and EV driver behaviors, based on real-world data. A kernel-based estimator is developed to predict charging session parameters in real time with improved estimation accuracy. The efficacy of various optimization strategies supported by this framework, including valley-filling, cost reduction and event-based control, has been demonstrated. 
In addition, the existing simulation-based approaches do not consider a variety of practical concerns of implementing such a smart EV energy management system, including the driver preferences, communication protocols, data models, and customized integration of existing standards to provide grid services. Therefore, this dissertation also solves these issues by designing and implementing a scalable system architecture to capture the user preferences, enable multi-layer communication and control, and finally improve the system reliability and interoperability.
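    A kernel-based estimator of session parameters can be illustrated with Nadaraya-Watson kernel regression: predict a session quantity from similar past sessions, weighted by a Gaussian kernel over a feature such as arrival hour. This is a generic sketch, not the dissertation's estimator; the feature, bandwidth and history values are made-up assumptions.

    ```python
    import math

    def kernel_estimate(history, x, bandwidth=1.0):
        """Nadaraya-Watson kernel regression over (feature, value) pairs."""
        num = den = 0.0
        for xi, yi in history:
            w = math.exp(-((x - xi) ** 2) / (2 * bandwidth ** 2))
            num += w * yi
            den += w
        return num / den if den else None

    # (arrival hour, kWh delivered) from past sessions -- made-up numbers
    history = [(8, 6.0), (9, 7.0), (9, 8.0), (18, 3.0), (19, 2.5)]
    print(round(kernel_estimate(history, 9), 2))      # morning arrivals
    print(round(kernel_estimate(history, 18.5), 2))   # evening arrivals
    ```

    Sessions with similar arrival times dominate the weighted average, so morning and evening arrivals get distinct energy predictions without assuming any parametric distribution of driver behavior.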

  14. Real-World Application of Robust Design Optimization Assisted by Response Surface Approximation and Visual Data-Mining

    NASA Astrophysics Data System (ADS)

    Shimoyama, Koji; Jeong, Shinkyu; Obayashi, Shigeru

    A new approach for multi-objective robust design optimization was proposed and applied to a real-world design problem with a large number of objective functions. The present approach is assisted by response surface approximation and visual data-mining, and resulted in two major gains regarding computational time and data interpretation. The Kriging model for response surface approximation can markedly reduce the computational time for predictions of robustness. In addition, the use of self-organizing maps as a data-mining technique allows visualization of complicated design information between optimality and robustness in a comprehensible two-dimensional form. Therefore, the extraction and interpretation of trade-off relations between optimality and robustness of design, and also the location of sweet spots in the design space, can be performed in a comprehensive manner.

  15. Popularity versus similarity in growing networks.

    PubMed

    Papadopoulos, Fragkiskos; Kitsak, Maksim; Serrano, M Ángeles; Boguñá, Marián; Krioukov, Dmitri

    2012-09-27

    The principle that 'popularity is attractive' underlies preferential attachment, which is a common explanation for the emergence of scaling in growing networks. If new connections are made preferentially to more popular nodes, then the resulting distribution of the number of connections possessed by nodes follows power laws, as observed in many real networks. Preferential attachment has been directly validated for some real networks (including the Internet), and can be a consequence of different underlying processes based on node fitness, ranking, optimization, random walks or duplication. Here we show that popularity is just one dimension of attractiveness; another dimension is similarity. We develop a framework in which new connections optimize certain trade-offs between popularity and similarity, instead of simply preferring popular nodes. The framework has a geometric interpretation in which popularity preference emerges from local optimization. As opposed to preferential attachment, our optimization framework accurately describes the large-scale evolution of technological (the Internet), social (trust relationships between people) and biological (Escherichia coli metabolic) networks, predicting the probability of new links with high precision. The framework that we have developed can thus be used for predicting new links in evolving networks, and provides a different perspective on preferential attachment as an emergent phenomenon.
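    The trade-off can be sketched in a toy growing network: place each node at a random angle (its "similarity" coordinate), and let each new node link to the existing nodes minimizing the product of birth rank (popularity: older = more popular) and angular distance (similarity). This is a rough caricature of the optimization framework, with arbitrary constants, not the paper's calibrated model.

    ```python
    import math, random
    random.seed(3)

    def grow_network(n, m=2):
        """Each new node t links to the m existing nodes s minimizing
        (birth rank of s) x (angular distance to s)."""
        angles, edges = [], []
        for t in range(n):
            theta = random.uniform(0, 2 * math.pi)
            if t > 0:
                def cost(s):
                    d = math.pi - abs(math.pi - abs(theta - angles[s]))
                    return (s + 1) * max(d, 1e-9)   # rank * angular distance
                targets = sorted(range(t), key=cost)[:m]
                edges += [(t, s) for s in targets]
            angles.append(theta)
        return edges

    edges = grow_network(200)
    deg = {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    # older (lower-rank) nodes accumulate more links on average,
    # so popularity preference emerges from the local optimization
    old = sum(deg.get(i, 0) for i in range(20)) / 20
    young = sum(deg.get(i, 0) for i in range(180, 200)) / 20
    print(old > young)
    ```

    Even though no node ever "prefers popular nodes" explicitly, old nodes win many trade-offs simply by having low rank, which is the emergent preferential attachment the paper describes.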

  16. Predictive Scheduling for Electric Vehicles Considering Uncertainty of Load and User Behaviors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Bin; Huang, Rui; Wang, Yubo

    2016-05-02

Uncoordinated Electric Vehicle (EV) charging can create unexpected load in the local distribution grid, which may degrade power quality and system reliability. The uncertainty of EV load, user behaviors and other baseload in the distribution grid is one of the challenges that impede optimal control of EV charging. Previous research did not fully solve this problem due to a lack of real-world EV charging data and of proper stochastic models to describe these behaviors. In this paper, we propose a new predictive EV scheduling algorithm (PESA) inspired by Model Predictive Control (MPC), which includes a dynamic load estimation module and a predictive optimization module. The user-related EV load and base load are dynamically estimated based on historical data. At each time interval, the predictive optimization program is solved for optimal schedules given the estimated parameters. Only the first element of the algorithm's output is implemented, following the MPC paradigm. The current-multiplexing function of each Electric Vehicle Supply Equipment (EVSE) unit is considered, and accordingly a virtual load is modeled to handle the uncertainties of future EV energy demands. The system is validated with real-world EV charging data collected on the UCLA campus, and the experimental results indicate that our proposed model not only reduces load variation by up to 40% but also maintains a high level of robustness. Finally, the IEC 61850 standard is utilized to standardize the data models involved, which is significant for more reliable and large-scale implementations.
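    The load-variation reduction reported above comes from flattening the aggregate load profile. A greedy valley-filling heuristic shows the idea: push each unit of EV energy into the interval with the lowest current total load, subject to a per-interval charger limit. This is a simplified stand-in for the scheduling optimization, ignoring deadlines and per-EVSE multiplexing; the baseload numbers are illustrative.

    ```python
    def valley_fill(base_load, ev_energy, max_rate=1.0):
        """Greedy valley-filling: allocate EV energy in small steps to the
        currently cheapest interval that still has charger capacity."""
        load = base_load[:]
        ev = [0.0] * len(load)
        step = 0.1
        for _ in range(int(ev_energy / step)):
            i = min((i for i in range(len(load)) if ev[i] + step <= max_rate),
                    key=lambda i: load[i])
            ev[i] += step
            load[i] += step
        return ev, load

    base = [5.0, 3.0, 1.0, 1.5, 4.0]      # illustrative baseload per interval
    ev, total = valley_fill(base, 3.0)
    print([round(v, 1) for v in ev])
    print(max(total) - min(total) < max(base) - min(base))
    ```

    All EV energy lands in the off-peak intervals, so the peak-to-valley spread of the total load shrinks relative to the baseload alone — the same flattening effect the full optimization achieves.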

  17. Missile Guidance Law Based on Robust Model Predictive Control Using Neural-Network Optimization.

    PubMed

    Li, Zhijun; Xia, Yuanqing; Su, Chun-Yi; Deng, Jun; Fu, Jun; He, Wei

    2015-08-01

In this brief, the utilization of robust model-based predictive control is investigated for the problem of missile interception. Treating the target acceleration as a bounded disturbance, a novel guidance law using model predictive control is developed by incorporating the missile's internal constraints. The combined model predictive approach can be transformed into a constrained quadratic programming (QP) problem, which may be solved using a linear variational inequality-based primal-dual neural network over a finite receding horizon. Online solutions to multiple parametric QP problems are used so that constrained optimal control decisions can be made in real time. Simulation studies are conducted to illustrate the effectiveness and performance of the proposed guidance control law for missile interception.

  18. Prediction-Correction Algorithms for Time-Varying Constrained Optimization

    DOE PAGES

    Simonetto, Andrea; Dall'Anese, Emiliano

    2017-07-26

This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
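    The prediction-correction idea is easy to see on a scalar time-varying problem: predict where the optimizer is heading (here by simply extrapolating its last drift), then correct with a first-order gradient step on the newly revealed cost. This is a minimal unconstrained sketch, not the paper's constrained algorithm; the cost, step size and drift model are illustrative assumptions.

    ```python
    import math

    def track(r, t0=0.0, dt=0.1, steps=100, alpha=0.3):
        """Track argmin_x (x - r(t))^2 as t advances:
        prediction = extrapolate the optimizer's last drift,
        correction = one gradient step on the time-t cost (no Hessian inverse)."""
        x = r(t0)
        prev_x = x
        errs = []
        for k in range(1, steps + 1):
            t = t0 + k * dt
            x_pred = x + (x - prev_x)        # prediction step
            prev_x = x
            grad = 2 * (x_pred - r(t))       # correction on the new cost
            x = x_pred - alpha * grad
            errs.append(abs(x - r(t)))
        return errs

    errs = track(lambda t: math.sin(t))
    print(max(errs[50:]) < 0.05)
    ```

    Without the prediction step the tracking error would be limited by how far the optimum moves per interval; predicting the drift cancels the first-order motion, leaving only a small second-order residual, which is the asymptotic improvement such methods prove.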

  19. Scalable Prediction of Energy Consumption using Incremental Time Series Clustering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simmhan, Yogesh; Noor, Muhammad Usman

    2013-10-09

Time series datasets are a canonical form of high-velocity Big Data, often generated by pervasive sensors such as those found in smart infrastructure. Performing predictive analytics on time series data can be computationally complex and requires approximation techniques. In this paper, we motivate this problem using a real application from the smart grid domain. We propose an incremental clustering technique, along with a novel affinity score for determining cluster similarity, which helps reduce the prediction error for cumulative time series within a cluster. We evaluate this technique, along with optimizations, using real datasets from smart meters totaling ~700,000 data points, and show the efficacy of our techniques in improving the prediction error of time series data within polynomial time.
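    Incremental clustering of streaming series works by comparing each incoming series against current cluster centroids and updating the winning centroid in place, opening a new cluster when no centroid is similar enough. The affinity score and threshold below are made-up stand-ins for the paper's novel score, purely to show the incremental mechanics.

    ```python
    def affinity(a, b):
        """Toy affinity: inverse of the mean absolute gap between two series."""
        gap = sum(abs(x - y) for x, y in zip(a, b)) / len(a)
        return 1.0 / (1.0 + gap)

    def incremental_cluster(streams, threshold=0.5):
        """Assign each series to its most affine centroid (running-mean update),
        or open a new cluster when the best affinity falls below threshold."""
        centroids, counts, labels = [], [], []
        for s in streams:
            scores = [affinity(s, c) for c in centroids]
            if scores and max(scores) >= threshold:
                k = scores.index(max(scores))
                counts[k] += 1
                centroids[k] = [c + (x - c) / counts[k]
                                for c, x in zip(centroids[k], s)]
            else:
                centroids.append(list(s))
                counts.append(1)
                k = len(centroids) - 1
            labels.append(k)
        return labels, centroids

    daily = [[1, 2, 3], [1.1, 2.1, 3.2], [10, 11, 12], [0.9, 2.0, 2.9]]
    labels, cents = incremental_cluster(daily)
    print(labels)   # → [0, 0, 1, 0]
    ```

    Because each series is touched once and compared only against the current centroids, the cost stays polynomial in the stream length, which is what makes the approach viable at smart-meter scale.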

  20. Model Predictive Control application for real time operation of controlled structures for the Water Authority Noorderzijlvest, The Netherlands

    NASA Astrophysics Data System (ADS)

    van Heeringen, Klaas-Jan; Gooijer, Jan; Knot, Floris; Talsma, Jan

    2015-04-01

In the Netherlands, flood protection has always been a key issue for protecting settlements against storm surges and riverine floods. Whereas flood protection traditionally focused on structural measures, nowadays the availability of meteorological and hydrological forecasts enables the application of more advanced real-time control techniques for operating the existing hydraulic infrastructure in an anticipatory and more efficient way. Model Predictive Control (MPC) is a powerful technique for deriving optimal control variables with the help of model-based predictions evaluated against a control objective. In a project for the regional water authority Noorderzijlvest in the north of the Netherlands, it has been shown that MPC can increase the safety level of the system during flood events by an anticipatory pre-release of water. Furthermore, the energy costs of pumps can be reduced by making tactical use of the water storage and shifting pump activities during normal operating conditions to off-peak hours. In this way cheap energy is used, in combination with gravity flow through gates during low-tide periods. MPC has now been implemented for daily operational use across the whole water system of the water authority Noorderzijlvest. The system developed into a real-time decision support system which not only supports daily operation but is able to directly implement the optimal control settings at the structures. We explain how we set up and calibrated a prediction model (RTC-Tools) that is accurate and fast enough for optimization purposes, and how we integrated it into the operational flood early warning system (Delft-FEWS). Besides the prediction model, the weights and factors of the objective function are an important element of MPC, since they shape the control objective. We developed special features in Delft-FEWS to allow operators to adjust the objective function in order to meet changing requirements and to evaluate different control strategies.

  1. Novel hyperspectral prediction method and apparatus

    NASA Astrophysics Data System (ADS)

    Kemeny, Gabor J.; Crothers, Natalie A.; Groth, Gard A.; Speck, Kathy A.; Marbach, Ralf

    2009-05-01

Both the power and the challenge of hyperspectral technologies lie in the very large amount of data produced by spectral cameras. While off-line methodologies allow the collection of gigabytes of data, extended data analysis sessions are required to convert the data into useful information. In contrast, real-time monitoring, such as on-line process control, requires that compression of spectral data and analysis occur at a sustained full camera data rate. Efficient, high-speed practical methods for calibration and prediction are therefore sought to optimize the value of hyperspectral imaging. A novel method of matched filtering known as science-based multivariate calibration (SBC) was developed for hyperspectral calibration. Classical (MLR) and inverse (PLS, PCR) methods are combined by spectroscopically measuring the spectral "signal" and by statistically estimating the spectral "noise." The accuracy of the inverse model is thus combined with the easy interpretability of the classical model. The SBC method is optimized for hyperspectral data in the Hyper-Cal™ software used for the present work. The prediction algorithms can then be downloaded into a dedicated FPGA-based High-Speed Prediction Engine™ module. Spectral pretreatments and calibration coefficients are stored on interchangeable SD memory cards, and predicted compositions are produced on a USB interface at real-time camera output rates. Applications include minerals, pharmaceuticals, food processing and remote sensing.

  2. Adaptive two-degree-of-freedom PI for speed control of permanent magnet synchronous motor based on fractional order GPC.

    PubMed

    Qiao, Wenjun; Tang, Xiaoqi; Zheng, Shiqi; Xie, Yuanlong; Song, Bao

    2016-09-01

    In this paper, an adaptive two-degree-of-freedom (2Dof) proportional-integral (PI) controller is proposed for the speed control of permanent magnet synchronous motor (PMSM). Firstly, an enhanced just-in-time learning technique consisting of two novel searching engines is presented to identify the model of the speed control system in a real-time manner. Secondly, a general formula is given to predict the future speed reference which is unavailable at the interval of two bus-communication cycles. Thirdly, the fractional order generalized predictive control (FOGPC) is introduced to improve the control performance of the servo drive system. Based on the identified model parameters and predicted speed reference, the optimal control law of FOGPC is derived. Finally, the designed 2Dof PI controller is auto-tuned by matching with the optimal control law. Simulations and real-time experimental results on the servo drive system of PMSM are provided to illustrate the effectiveness of the proposed strategy. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Diffusion algorithms and data reduction routine for onsite real-time launch predictions for the transport of Delta-Thor exhaust effluents

    NASA Technical Reports Server (NTRS)

    Stephens, J. B.

    1976-01-01

    The National Aeronautics and Space Administration/Marshall Space Flight Center multilayer diffusion algorithms have been specialized for the prediction of the surface impact for the dispersive transport of the exhaust effluents from the launch of a Delta-Thor vehicle. This specialization permits these transport predictions to be made at the launch range in real time so that the effluent monitoring teams can optimize their monitoring grids. Basically, the data reduction routine requires only the meteorology profiles for the thermodynamics and kinematics of the atmosphere as an input. These profiles are graphed along with the resulting exhaust cloud rise history, the centerline concentrations and dosages, and the hydrogen chloride isopleths.
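    A standard building block for such effluent transport predictions is the ground-reflecting Gaussian plume formula, which gives the concentration downwind of an elevated release. The sketch below is the textbook single-layer formula, standing in for the multilayer diffusion algorithms; the source strength, wind speed and dispersion parameters are illustrative, and in practice the sigmas would be taken from stability-class curves as functions of downwind distance.

    ```python
    import math

    def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
        """Ground-reflecting Gaussian plume concentration at crosswind offset y
        and height z, for source strength q (g/s) at effective height h (m)
        in wind speed u (m/s).  sigma_y, sigma_z are dispersion lengths (m)."""
        lateral = math.exp(-y ** 2 / (2 * sigma_y ** 2))
        vertical = (math.exp(-(z - h) ** 2 / (2 * sigma_z ** 2)) +
                    math.exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))  # ground image
        return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

    # ground-level concentration on and off the plume centerline
    c_center = gaussian_plume(q=100.0, u=5.0, y=0.0, z=0.0,
                              h=50.0, sigma_y=80.0, sigma_z=40.0)
    c_off = gaussian_plume(100.0, 5.0, 200.0, 0.0, 50.0, 80.0, 40.0)
    print(c_center > c_off)
    ```

    Evaluating this on a grid of (y, z) points is how centerline concentrations and isopleths like the hydrogen chloride contours mentioned above are produced from the meteorological profiles.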

  4. Measuring the value of accurate link prediction for network seeding.

    PubMed

    Wei, Yijin; Spencer, Gwen

    2017-01-01

    The influence-maximization literature seeks small sets of individuals whose structural placement in the social network can drive large cascades of behavior. Optimization efforts to find the best seed set often assume perfect knowledge of the network topology. Unfortunately, social network links are rarely known in an exact way. When do seeding strategies based on less-than-accurate link prediction provide valuable insight? We introduce optimized-against-a-sample ([Formula: see text]) performance to measure the value of optimizing seeding based on a noisy observation of a network. Our computational study investigates [Formula: see text] under several threshold-spread models in synthetic and real-world networks. Our focus is on measuring the value of imprecise link information. The level of investment in link prediction that is strategic appears to depend closely on spread model: in some parameter ranges investments in improving link prediction can pay substantial premiums in cascade size. For other ranges, such investments would be wasted. Several trends were remarkably consistent across topologies.
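    The threshold-spread models mentioned above are simple to simulate: a node activates once the fraction of its active neighbors reaches a threshold, and the cascade iterates to a fixed point. The sketch below runs a deterministic linear-threshold cascade on a small two-community toy graph to show how seed placement changes cascade size; the graph and threshold are illustrative, and the paper's OAS measure would additionally compare seeds chosen on a noisy copy of the graph.

    ```python
    def threshold_cascade(adj, seeds, theta=0.5):
        """Deterministic threshold spread: a node activates when the fraction
        of its active neighbors reaches theta; iterate until stable."""
        active = set(seeds)
        changed = True
        while changed:
            changed = False
            for v, nbrs in adj.items():
                if v not in active and nbrs:
                    if sum(n in active for n in nbrs) / len(nbrs) >= theta:
                        active.add(v)
                        changed = True
        return active

    # a triangle community bridged into a short chain (undirected)
    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
           3: [2, 4], 4: [3, 5], 5: [4]}
    spread_good = threshold_cascade(adj, {0, 1})   # well-placed seed pair
    spread_poor = threshold_cascade(adj, {4})      # peripheral single seed
    print(len(spread_good), len(spread_poor))
    ```

    The well-placed pair tips the triangle and then the whole chain, while the peripheral seed stalls partway, illustrating why the value of better link information (and hence better seed placement) depends so strongly on the spread model's parameters.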

  5. A Robust Adaptive Autonomous Approach to Optimal Experimental Design

    NASA Astrophysics Data System (ADS)

    Gu, Hairong

Experimentation is the fundamental tool of scientific inquiry for understanding the laws governing nature and human behavior. Many complex real-world experimental scenarios, particularly those in quest of prediction accuracy, encounter difficulties in conducting experiments using existing experimental procedures, for the following two reasons. First, the existing experimental procedures require a parametric model to serve as the proxy of the latent data structure or data-generating mechanism at the beginning of an experiment. However, for the experimental scenarios of concern, a sound model is often unavailable before an experiment. Second, those experimental scenarios usually contain a large number of design variables, which potentially leads to a lengthy and costly data collection cycle. Moreover, the existing experimental procedures are unable to optimize large-scale experiments so as to minimize experimental length and cost. Facing these two challenges, the aim of the present study is to develop a new experimental procedure that allows an experiment to be conducted without the assumption of a parametric model while still achieving satisfactory prediction, and that performs optimization of experimental designs to improve the efficiency of an experiment. The new experimental procedure developed in the present study is named the robust adaptive autonomous system (RAAS). RAAS is a procedure for sequential experiments composed of multiple experimental trials, which performs function estimation, variable selection, reverse prediction and design optimization on each trial. 
Directly addressing the challenges in those experimental scenarios of concern, function estimation and variable selection are performed by data-driven modeling methods to generate a predictive model from data collected during the course of an experiment, thus exempting the requirement of a parametric model at the beginning of an experiment; design optimization is performed to select experimental designs on the fly of an experiment based on their usefulness so that fewest designs are needed to reach useful inferential conclusions. Technically, function estimation is realized by Bayesian P-splines, variable selection is realized by Bayesian spike-and-slab prior, reverse prediction is realized by grid-search and design optimization is realized by the concepts of active learning. The present study demonstrated that RAAS achieves statistical robustness by making accurate predictions without the assumption of a parametric model serving as the proxy of latent data structure while the existing procedures can draw poor statistical inferences if a misspecified model is assumed; RAAS also achieves inferential efficiency by taking fewer designs to acquire useful statistical inferences than non-optimal procedures. Thus, RAAS is expected to be a principled solution to real-world experimental scenarios pursuing robust prediction and efficient experimentation.
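    The design-optimization step that the abstract attributes to active learning can be illustrated with a query-by-disagreement sketch. The ensemble-variance criterion below is a generic stand-in, not necessarily the authors' exact acquisition rule.

```python
def next_design(models, candidates):
    """Active-learning design choice: query the candidate design where an
    ensemble of fitted models disagrees most (maximum predictive variance)."""
    def variance(x):
        preds = [m(x) for m in models]
        mu = sum(preds) / len(preds)
        return sum((p - mu) ** 2 for p in preds) / len(preds)
    return max(candidates, key=variance)
```

    In a sequential experiment, each new observation at the chosen design is fed back to refit the models before selecting the next trial.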

  6. Visual anticipation biases conscious decision making but not bottom-up visual processing.

    PubMed

    Mathews, Zenon; Cetnarski, Ryszard; Verschure, Paul F M J

    2014-01-01

    Prediction plays a key role in the control of attention, but it is not clear which aspects of prediction are most prominent in conscious experience. An evolving view of the brain is that it can be seen as a prediction machine that optimizes its ability to predict states of the world and the self through the top-down propagation of predictions and the bottom-up presentation of prediction errors. There are competing views, though, on whether predictions or prediction errors dominate the formation of conscious experience. Yet the dynamic effects of prediction on perception, decision making, and consciousness have been difficult to assess and to model. We propose a novel mathematical framework and a psychophysical paradigm that allow us to assess the hierarchical structuring of perceptual consciousness, its content, and the impact of predictions and/or errors on conscious experience, attention, and decision making. Using a displacement detection task combined with reverse correlation, we reveal signatures of the use of prediction at three different levels of perceptual processing: bottom-up fast saccades, top-down driven slow saccades, and conscious decisions. Our results suggest that the brain employs multiple parallel mechanisms at different levels of perceptual processing in order to shape effective sensory consciousness within a predicted perceptual scene. We further observe that bottom-up sensory and top-down predictive processes can be dissociated through cognitive load. We propose a probabilistic data association model from dynamical systems theory to model the predictive multi-scale bias in perceptual processing that we observe and its role in the formation of conscious experience. We propose that these results support the hypothesis that consciousness provides a time-delayed description of a task that is used to prospectively optimize real-time control structures, rather than being engaged in the real-time control of behavior itself.

  7. Support Vector Machines for Differential Prediction

    PubMed Central

    Kuusisto, Finn; Santos Costa, Vitor; Nassif, Houssam; Burnside, Elizabeth; Page, David; Shavlik, Jude

    2015-01-01

    Machine learning is continually being applied to a growing set of fields, including the social sciences, business, and medicine. Some fields present problems that are not easily addressed using standard machine learning approaches and, in particular, there is growing interest in differential prediction. In this type of task we are interested in producing a classifier that specifically characterizes a subgroup of interest by maximizing the difference in predictive performance for some outcome between subgroups in a population. We discuss adapting maximum margin classifiers for differential prediction. We first introduce multiple approaches that do not affect the key properties of maximum margin classifiers, but which also do not directly attempt to optimize a standard measure of differential prediction. We next propose a model that directly optimizes a standard measure in this field, the uplift measure. We evaluate our models on real data from two medical applications and show excellent results. PMID:26158123
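    The uplift measure the authors optimize compares predictive performance between subgroups. A minimal sketch of one common uplift proxy, the difference in cumulative positive rates between two score-ranked subgroups, is shown below; the exact definition used in the paper may differ.

```python
def uplift_curve(scores_a, labels_a, scores_b, labels_b, fractions):
    """Difference in cumulative positive rate between two subgroups when
    each is ranked by the classifier's score and cut at a given fraction."""
    def cum_pos_rate(scores, labels, frac):
        order = sorted(range(len(scores)), key=lambda i: -scores[i])
        k = max(1, int(round(frac * len(scores))))
        top = order[:k]
        return sum(labels[i] for i in top) / k
    return [cum_pos_rate(scores_a, labels_a, f) - cum_pos_rate(scores_b, labels_b, f)
            for f in fractions]
```

    A differential-prediction model is one whose scores push this curve as high as possible for the subgroup of interest.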

  8. Support Vector Machines for Differential Prediction.

    PubMed

    Kuusisto, Finn; Santos Costa, Vitor; Nassif, Houssam; Burnside, Elizabeth; Page, David; Shavlik, Jude

    Machine learning is continually being applied to a growing set of fields, including the social sciences, business, and medicine. Some fields present problems that are not easily addressed using standard machine learning approaches and, in particular, there is growing interest in differential prediction. In this type of task we are interested in producing a classifier that specifically characterizes a subgroup of interest by maximizing the difference in predictive performance for some outcome between subgroups in a population. We discuss adapting maximum margin classifiers for differential prediction. We first introduce multiple approaches that do not affect the key properties of maximum margin classifiers, but which also do not directly attempt to optimize a standard measure of differential prediction. We next propose a model that directly optimizes a standard measure in this field, the uplift measure. We evaluate our models on real data from two medical applications and show excellent results.

  9. Robust model predictive control for satellite formation keeping with eccentricity/inclination vector separation

    NASA Astrophysics Data System (ADS)

    Lim, Yeerang; Jung, Youeyun; Bang, Hyochoong

    2018-05-01

    This study presents model predictive formation control based on an eccentricity/inclination vector separation strategy. Alternative collision avoidance can be accomplished by using eccentricity/inclination vectors and adding a simple goal-function term to the optimization process. Real-time control is also achievable with a model predictive controller based on a convex formulation. A constraint-tightening approach is addressed as well to improve the robustness of the controller, and simulation results are presented to verify the performance enhancement of the proposed approach.

  10. Tire Changes, Fresh Air, and Yellow Flags: Challenges in Predictive Analytics for Professional Racing.

    PubMed

    Tulabandhula, Theja; Rudin, Cynthia

    2014-06-01

    Our goal is to design a prediction and decision system for real-time use during a professional car race. In designing a knowledge discovery process for racing, we faced several challenges that were overcome only when domain knowledge of racing was carefully infused within statistical modeling techniques. In this article, we describe how we leveraged expert knowledge of the domain to produce a real-time decision system for tire changes within a race. Our forecasts have the potential to impact how racing teams can optimize strategy by making tire-change decisions to benefit their rank position. Our work significantly expands previous research on sports analytics, as it is the only work on analytical methods for within-race prediction and decision making for professional car racing.

  11. Neural Generalized Predictive Control: A Newton-Raphson Implementation

    NASA Technical Reports Server (NTRS)

    Soloway, Donald; Haley, Pamela J.

    1997-01-01

    An efficient implementation of Generalized Predictive Control using a multi-layer feedforward neural network as the plant's nonlinear model is presented. In using Newton-Raphson as the optimization algorithm, the number of iterations needed for convergence is significantly reduced from other techniques. The main cost of the Newton-Raphson algorithm is in the calculation of the Hessian, but even with this overhead the low iteration numbers make Newton-Raphson faster than other techniques and a viable algorithm for real-time control. This paper presents a detailed derivation of the Neural Generalized Predictive Control algorithm with Newton-Raphson as the minimization algorithm. Simulation results show convergence to a good solution within two iterations and timing data show that real-time control is possible. Comments about the algorithm's implementation are also included.
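    The core Newton-Raphson step, u ← u − J′(u)/J″(u), can be sketched on a toy scalar cost. The tanh plant and the finite-difference derivatives below are illustrative assumptions; the paper uses a multi-layer neural network plant model over a control horizon and computes the Hessian explicitly.

```python
import math

def newton_minimize(cost, u0, iters=5, h=1e-4):
    """Newton-Raphson minimization of a scalar control cost:
    u <- u - J'(u)/J''(u), with finite-difference derivatives."""
    u = u0
    for _ in range(iters):
        g = (cost(u + h) - cost(u - h)) / (2 * h)                 # J'(u)
        H = (cost(u + h) - 2 * cost(u) + cost(u - h)) / (h * h)   # J''(u)
        if abs(H) < 1e-12:
            break
        u -= g / H
    return u

# Toy GPC-style cost: track setpoint 1.0 through a static plant y = tanh(u),
# with a small control-effort penalty.
cost = lambda u: (1.0 - math.tanh(u)) ** 2 + 0.01 * u ** 2
u_opt = newton_minimize(cost, 0.5)
```

    In the full algorithm the Hessian is the dominant cost per iteration, but, as the abstract notes, the small number of iterations can still make the overall scheme fast enough for real-time control.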

  12. A Fast Neural Network Approach to Predict Lung Tumor Motion during Respiration for Radiation Therapy Applications

    PubMed Central

    Bukovsky, Ivo; Homma, Noriyasu; Ichiji, Kei; Cejnek, Matous; Slama, Matous; Benes, Peter M.; Bila, Jiri

    2015-01-01

    During radiotherapy treatment for thoracic and abdominal cancers, for example lung cancers, respiratory motion moves the target tumor and thus degrades the accuracy of radiation dose delivery into the target. A real-time image-guided technique can be used to monitor such lung tumor motion for accurate dose delivery, but system latency of up to several hundred milliseconds for repositioning the radiation beam also affects the accuracy. To compensate for this latency, a neural network prediction technique with real-time retraining can be used. We have investigated real-time prediction of 3D time series of lung tumor motion with a classical linear model, a perceptron model, and a class of higher-order neural network models that has more attractive attributes regarding its optimization convergence and computational efficiency. The implemented static feed-forward neural architectures are compared when using gradient descent adaptation and, primarily, the Levenberg-Marquardt batch algorithm, as two of the most common and most comprehensible learning algorithms. The proposed technique resulted in fast real-time retraining, so the total computational time on a PC platform was equal to or even less than the real treatment time. For a one-second prediction horizon, the proposed techniques achieved a 3D mean absolute error of less than one millimeter over one hundred seconds of total treatment time. PMID:25893194

  13. A fast neural network approach to predict lung tumor motion during respiration for radiation therapy applications.

    PubMed

    Bukovsky, Ivo; Homma, Noriyasu; Ichiji, Kei; Cejnek, Matous; Slama, Matous; Benes, Peter M; Bila, Jiri

    2015-01-01

    During radiotherapy treatment for thoracic and abdominal cancers, for example lung cancers, respiratory motion moves the target tumor and thus degrades the accuracy of radiation dose delivery into the target. A real-time image-guided technique can be used to monitor such lung tumor motion for accurate dose delivery, but system latency of up to several hundred milliseconds for repositioning the radiation beam also affects the accuracy. To compensate for this latency, a neural network prediction technique with real-time retraining can be used. We have investigated real-time prediction of 3D time series of lung tumor motion with a classical linear model, a perceptron model, and a class of higher-order neural network models that has more attractive attributes regarding its optimization convergence and computational efficiency. The implemented static feed-forward neural architectures are compared when using gradient descent adaptation and, primarily, the Levenberg-Marquardt batch algorithm, as two of the most common and most comprehensible learning algorithms. The proposed technique resulted in fast real-time retraining, so the total computational time on a PC platform was equal to or even less than the real treatment time. For a one-second prediction horizon, the proposed techniques achieved a 3D mean absolute error of less than one millimeter over one hundred seconds of total treatment time.

  14. Real-time parameter optimization based on neural network for smart injection molding

    NASA Astrophysics Data System (ADS)

    Lee, H.; Liau, Y.; Ryu, K.

    2018-03-01

    The manufacturing industry faces several challenges, including sustainability and the performance and quality of production. Manufacturers attempt to enhance their competitiveness by implementing CPS (Cyber-Physical Systems) through the convergence of IoT (Internet of Things) and ICT (Information & Communication Technology) at the manufacturing-process level. The injection molding process has a short cycle time and high productivity, features that make it suitable for mass production; it is also used to produce precise parts in various industries such as automobiles, optics, and medical devices. Injection molding involves a mixture of discrete and continuous variables, and to optimize quality, the variables generated during the process must be considered. Furthermore, optimal parameter setting is time-consuming work for predicting the optimum quality of the product, since process parameters cannot be easily corrected during process execution. In this research, we propose a neural-network-based real-time process parameter optimization methodology that sets optimal process parameters using mold data, molding machine data, and response data. This paper is expected to make an academic contribution as a novel study of parameter optimization during production, compared with the pre-production parameter optimization of typical studies.

  15. Optimal Sparse Upstream Sensor Placement for Hydrokinetic Turbines

    NASA Astrophysics Data System (ADS)

    Cavagnaro, Robert; Strom, Benjamin; Ross, Hannah; Hill, Craig; Polagye, Brian

    2016-11-01

    Accurate measurement of the flow field incident upon a hydrokinetic turbine is critical for performance evaluation during testing and setting boundary conditions in simulation. Additionally, turbine controllers may leverage real-time flow measurements. Particle image velocimetry (PIV) is capable of rendering a flow field over a wide spatial domain in a controlled, laboratory environment. However, PIV's lack of suitability for natural marine environments, high cost, and intensive post-processing diminish its potential for control applications. Conversely, sensors such as acoustic Doppler velocimeters (ADVs), are designed for field deployment and real-time measurement, but over a small spatial domain. Sparsity-promoting regression analysis such as LASSO is utilized to improve the efficacy of point measurements for real-time applications by determining optimal spatial placement for a small number of ADVs using a training set of PIV velocity fields and turbine data. The study is conducted in a flume (0.8 m2 cross-sectional area, 1 m/s flow) with laboratory-scale axial and cross-flow turbines. Predicted turbine performance utilizing the optimal sparse sensor network and associated regression model is compared to actual performance with corresponding PIV measurements.
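    The sparsity-promoting regression step can be sketched with a plain LASSO solved by ISTA (proximal gradient descent). The synthetic data and the ISTA solver are assumptions for illustration, not the authors' PIV-trained pipeline; the idea is that near-zero weights mark measurement locations that contribute little to the prediction.

```python
import numpy as np

def lasso_ista(X, y, lam, iters=500):
    """Sparsity-promoting regression via ISTA: minimize
    ||X w - y||^2 / 2n + lam * ||w||_1. Columns of X are candidate
    sensor signals; zero weights flag sensors that can be dropped."""
    n, p = X.shape
    w = np.zeros(p)
    L = np.linalg.norm(X, 2) ** 2 / n  # Lipschitz constant of the gradient
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / n
        z = w - grad / L
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return w
```

    Fitting on training PIV fields and keeping only the columns with nonzero weight yields the small ADV network the abstract describes.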

  16. Real-Time Dynamic Modeling - Data Information Requirements and Flight Test Results

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Smith, Mark S.

    2010-01-01

    Practical aspects of identifying dynamic models for aircraft in real time were studied. Topics include formulation of an equation-error method in the frequency domain to estimate non-dimensional stability and control derivatives in real time, data information content for accurate modeling results, and data information management techniques such as data forgetting, incorporating prior information, and optimized excitation. Real-time dynamic modeling was applied to simulation data and flight test data from a modified F-15B fighter aircraft, and to operational flight data from a subscale jet transport aircraft. Estimated parameter standard errors, prediction cases, and comparisons with results from a batch output-error method in the time domain were used to demonstrate the accuracy of the identified real-time models.

  17. An assessment of Gallistel's (2012) rationalistic account of extinction phenomena.

    PubMed

    Miller, Ralph R

    2012-05-01

    Gallistel (2012) asserts that animals use rationalistic reasoning (i.e., information theory and Bayesian inference) to make decisions that underlie select extinction phenomena. Rational processes are presumed to lead to evolutionarily optimal behavior. Thus, Gallistel's model is a type of optimality theory. But optimality theory is only a theory, a theory about an ideal organism, and its predictions frequently deviate appreciably from observed behavior of animals in the laboratory and the real world. That is, behavior of animals is often far from optimal, as is evident in many behavioral phenomena. Hence, appeals to optimality theory to explain, rather than illuminate, actual behavior are misguided. Copyright © 2012 Elsevier B.V. All rights reserved.

  18. Mechanisms of deterioration of nutrients. [retention of flavor during freeze drying

    NASA Technical Reports Server (NTRS)

    Karel, M.; Flink, J. M.

    1975-01-01

    The retention of flavor during freeze drying was studied with model systems. Mechanisms explaining flavor retention phenomena were developed, and process conditions were specified so that flavor retention is optimized. The literature is reviewed, and results of studies of the flavor retention behavior of a number of real food products, including both liquid and solid foods, are evaluated. The process parameters predicted by the mechanisms to be of greatest significance are freezing rate, initial solids content, and conditions that result in maintenance of sample structure. Flavor quality for the real foods showed the same behavior relative to process conditions as predicted by the mechanisms based on model-system studies.

  19. Optimal trading from minimizing the period of bankruptcy risk

    NASA Astrophysics Data System (ADS)

    Liehr, S.; Pawelzik, K.

    2001-04-01

    Assuming that financial markets behave similarly to random-walk processes, we derive a trading strategy with variable investment which is based on the equivalence of the period of bankruptcy risk and the risk-to-profit ratio. We define a state-dependent predictability measure which can be attributed to the deterministic and stochastic components of the price dynamics. The influence of predictability variations, and especially of short-term inefficiency structures, on the optimal amount of investment is analyzed in the given context, and a method for adapting a trading system to the proposed objective function is presented. Finally, we show the performance of our trading strategy on the DAX and S&P 500 as examples of real-world data, using different types of prediction models in comparison.

  20. Real-time reservoir operation considering non-stationary inflow prediction

    NASA Astrophysics Data System (ADS)

    Zhao, J.; Xu, W.; Cai, X.; Wang, Z.

    2011-12-01

    Stationarity of inflow has been a basic assumption in reservoir operation rule design, an assumption now facing challenges due to climate change and human interference. This paper proposes a modeling framework to incorporate non-stationary inflow prediction for optimizing the hedging operation rule of large reservoirs with multi-year flow regulation capacity. A multi-stage optimization model is formulated, and a solution algorithm based on the optimality conditions is developed to incorporate non-stationary annual inflow prediction through a rolling, dynamic framework that updates the prediction from period to period and adopts the updated prediction in reservoir operation decisions. The prediction model is ARIMA(4,1,0), in which 4 is the autoregressive order, 1 is the degree of differencing (capturing a linear trend), and 0 is the moving-average order. The modeling framework and solution algorithm are applied to the Miyun reservoir in China, determining a yearly operating schedule for the period from 1996 to 2009, during which there was a significant declining trend in reservoir inflow. Different operation policy scenarios are modeled, including the standard operation policy (SOP, matching the current demand as much as possible), a hedging rule (i.e., leaving a certain amount of water for the future to avoid a large risk of water deficit) with forecasts from ARIMA (HR-1), and hedging with a perfect forecast (HR-2). Comparing the results of these scenarios to those of the actual reservoir operation (AO), the utility of reservoir operation under HR-1 is 3.0% lower than under HR-2, but 3.7% higher than under AO and 14.4% higher than under SOP. Note that the utility under AO is 10.3% higher than that under SOP, which shows that a certain level of hedging under some inflow prediction or forecast was used in real-world operation. Moreover, the impacts of discount rate and forecast uncertainty level on the operation are discussed.
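    Since ARIMA(4,1,0) amounts to an AR(4) model on first differences, a one-step rolling forecast can be sketched as below. Plain least-squares AR fitting is a simplification of a full ARIMA estimator, and the data here are synthetic.

```python
import numpy as np

def arima_p10_forecast(series, p=4):
    """One-step ARIMA(p,1,0) forecast: fit AR(p) on the first differences
    by least squares, predict the next difference, then undifference."""
    d = np.diff(series)
    # each row holds the p most recent differences (most recent first)
    rows = [d[i:i + p][::-1] for i in range(len(d) - p)]
    X, y = np.array(rows), d[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    next_diff = coef @ d[-1:-p - 1:-1]
    return series[-1] + next_diff
```

    In the rolling framework of the paper, the model would be refit each period as new inflow data arrive, and the updated forecast fed into the operation decision.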

  1. The lucky image-motion prediction for simple scene observation based soft-sensor technology

    NASA Astrophysics Data System (ADS)

    Li, Yan; Su, Yun; Hu, Bin

    2015-08-01

    High resolution is important for Earth remote sensors, while vibration of the sensor platforms is a major factor restricting high-resolution imaging. Image-motion prediction and real-time compensation are key technologies for solving this problem. Because the traditional autocorrelation image algorithm cannot meet the demands of simple-scene image stabilization, this paper proposes utilizing soft-sensor technology for image-motion prediction, focusing on algorithm optimization for imaging image-motion prediction. Simulation results indicate that an improved lucky image-motion stabilization algorithm combining a back-propagation neural network (BP NN) and a support vector machine (SVM) is the most suitable for simple-scene image stabilization. The relative error of the soft-sensor-based image-motion prediction is below 5%, and the training and computing speed of the mathematical prediction model is fast enough for real-time image stabilization in aerial photography.

  2. Online optimal obstacle avoidance for rotary-wing autonomous unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Kang, Keeryun

    This thesis presents an integrated framework for online obstacle avoidance of rotary-wing unmanned aerial vehicles (UAVs), which can provide UAVs an obstacle field navigation capability in a partially or completely unknown obstacle-rich environment. The framework is composed of a LIDAR interface, a local obstacle grid generation, a receding horizon (RH) trajectory optimizer, a global shortest path search algorithm, and a climb rate limit detection logic. The key feature of the framework is the use of an optimization-based trajectory generation in which the obstacle avoidance problem is formulated as a nonlinear trajectory optimization problem with state and input constraints over the finite range of the sensor. This local trajectory optimization is combined with a global path search algorithm which provides a useful initial guess to the nonlinear optimization solver. Optimization is the natural process of finding the best trajectory that is dynamically feasible, safe within the vehicle's flight envelope, and collision-free at the same time. The optimal trajectory is continuously updated in real time by the numerical optimization solver, Nonlinear Trajectory Generation (NTG), which is a direct solver based on the spline approximation of trajectory for dynamically flat systems. In fact, the overall approach of this thesis to finding the optimal trajectory is similar to the model predictive control (MPC) or the receding horizon control (RHC), except that this thesis followed a two-layer design; thus, the optimal solution works as a guidance command to be followed by the controller of the vehicle. The framework is implemented in a real-time simulation environment, the Georgia Tech UAV Simulation Tool (GUST), and integrated in the onboard software of the rotary-wing UAV test-bed at Georgia Tech. Initially, the 2D vertical avoidance capability of real obstacles was tested in flight. 
The flight test evaluations were extended to the benchmark tests for 3D avoidance capability over the virtual obstacles, and finally it was demonstrated on real obstacles located at the McKenna MOUT site in Fort Benning, Georgia. Simulations and flight test evaluations demonstrate the feasibility of the developed framework for UAV applications involving low-altitude flight in an urban area.

  3. Asynchronous machine rotor speed estimation using a tabulated numerical approach

    NASA Astrophysics Data System (ADS)

    Nguyen, Huu Phuc; De Miras, Jérôme; Charara, Ali; Eltabach, Mario; Bonnet, Stéphane

    2017-12-01

    This paper proposes a new method to estimate the rotor speed of the asynchronous machine by looking at the estimation problem as a nonlinear optimal control problem. The behavior of the nonlinear plant model is approximated off-line as a prediction map using a numerical one-step time discretization obtained from simulations. At each time-step, the speed of the induction machine is selected satisfying the dynamic fitting problem between the plant output and the predicted output, leading the system to adopt its dynamical behavior. Thanks to the limitation of the prediction horizon to a single time-step, the execution time of the algorithm can be completely bounded. It can thus easily be implemented and embedded into a real-time system to observe the speed of the real induction motor. Simulation results show the performance and robustness of the proposed estimator.
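    The tabulated one-step approach reduces, in spirit, to a nearest-match search over a precomputed prediction map. A toy sketch follows; the map of affine predictors is purely illustrative, standing in for the simulation-derived discretization of the plant model.

```python
def estimate_speed(pred_map, state, measured_next):
    """Select the tabulated speed whose one-step predicted plant output
    best matches the measured output (grid-search over the map)."""
    return min(pred_map, key=lambda s: abs(pred_map[s](state) - measured_next))
```

    Because the map is built offline and the prediction horizon is a single time-step, the per-sample work is a bounded table scan, which is what makes the estimator easy to embed in a real-time system.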

  4. Conflict Resolution for Wind-Optimal Aircraft Trajectories in North Atlantic Oceanic Airspace with Wind Uncertainties

    NASA Technical Reports Server (NTRS)

    Rodionova, Olga; Sridhar, Banavar; Ng, Hok K.

    2016-01-01

    Air traffic in the North Atlantic oceanic airspace (NAT) experiences very strong winds caused by jet streams. Flying wind-optimal trajectories increases individual flight efficiency, which is advantageous when operating in the NAT. However, as the NAT is highly congested during peak hours, a large number of potential conflicts between flights are detected for the sets of wind-optimal trajectories. Conflict resolution performed at the strategic level of flight planning can significantly reduce the airspace congestion. However, being completed far in advance, strategic planning can only use predicted environmental conditions that may significantly differ from the real conditions experienced further by aircraft. The forecast uncertainties result in uncertainties in conflict prediction, and thus, conflict resolution becomes less efficient. This work considers wind uncertainties in order to improve the robustness of conflict resolution in the NAT. First, the influence of wind uncertainties on conflict prediction is investigated. Then, conflict resolution methods accounting for wind uncertainties are proposed.

  5. Multidisciplinary design optimization for sonic boom mitigation

    NASA Astrophysics Data System (ADS)

    Ozcer, Isik A.

    Automated, parallelized, time-efficient surface definition and grid generation and flow simulation methods are developed for sharp and accurate sonic boom signal computation in three dimensions in the near and mid-field of an aircraft using Euler/Full-Potential unstructured/structured computational fluid dynamics. The full-potential mid-field sonic boom prediction code is an accurate and efficient solver featuring automated grid generation, grid adaptation and shock fitting, and parallel processing. This program quickly marches the solution using a single nonlinear equation over large distances that cannot be covered with Euler solvers due to large memory and long computational time requirements. The solver takes into account variations in temperature and pressure with altitude. The far-field signal prediction is handled using the classical linear Thomas Waveform Parameter Method, where the switching altitude from the nonlinear to the linear prediction is determined by convergence of the ground signal pressure impulse value. This altitude is determined as r/L ≈ 10 from the source for a simple lifting wing, and r/L ≈ 40 for a real complex aircraft. The unstructured grid adaptation and shock fitting methodology developed for the near-field analysis employs a Hessian-based anisotropic grid adaptation based on error equidistribution. A special field scalar is formulated for use in the computation of the Hessian-based error metric, which significantly enhances the adaptation scheme for shocks. The entire cross-flow of a complex aircraft is resolved with high fidelity using only 500,000 grid nodes after only about 10 solution/adaptation cycles. Shock fitting is accomplished using Roe's Flux-Difference Splitting scheme, an approximate Riemann-type solver, and by proper alignment of the cell faces with respect to shock surfaces. 
Simple to complex real aircraft geometries are handled with no user interference required, making the simulation methods suitable tools for product design. The simulation tools are used to optimize three geometries for sonic boom mitigation. The first is a simple axisymmetric shape to be used as a generic nose component, the second is a delta wing with lift, and the third is a real aircraft with nose and wing optimization. The objectives are to minimize the pressure impulse or the peak pressure in the sonic boom signal, while keeping the drag penalty under feasible limits. The design parameters for the meridian profile of the nose shape are the lengths and the half-cone angles of the linear segments that make up the profile. The design parameters for the lifting wing are the dihedral angle, angle of attack, and non-linear span-wise twist and camber distribution. The test-bed aircraft is the modified F-5E aircraft built by Northrop Grumman, designated the Shaped Sonic Boom Demonstrator. This aircraft is fitted with an optimized axisymmetric nose, and the wings are optimized to demonstrate optimization for sonic boom mitigation for a real aircraft. The final results predict a 42% reduction in bow shock strength, 17% reduction in peak Δp, 22% reduction in pressure impulse, 10% reduction in footprint size, 24% reduction in inviscid drag, and no loss in lift for the optimized aircraft. Optimization is carried out using response surface methodology, and the design matrices are determined using standard DoE techniques for quadratic response modeling.

  6. Selection of Hidden Layer Neurons and Best Training Method for FFNN in Application of Long Term Load Forecasting

    NASA Astrophysics Data System (ADS)

    Singh, Navneet K.; Singh, Asheesh K.; Tripathy, Manoj

    2012-05-01

    For power industries, electricity load forecasting plays an important role in real-time control, security, optimal unit commitment, economic scheduling, maintenance, energy management, and plant structure planning. A new technique for long-term load forecasting (LTLF) using an optimized feed-forward artificial neural network (FFNN) architecture is presented in this paper, which selects the optimal number of neurons in the hidden layer as well as the best training method for the case study. The prediction performance of the proposed technique is evaluated using the mean absolute percentage error (MAPE) between Thailand's private electricity consumption and the forecasted data. The results obtained are compared with those of classical auto-regressive (AR) and moving average (MA) methods. It is observed that, in general, the proposed method is more accurate in prediction.
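    The selection loop, with MAPE as the criterion, can be sketched generically. The `fit_predict` callable below is a hypothetical stand-in for training an FFNN with a given hidden-layer size or training method; only the MAPE-based comparison is taken from the abstract.

```python
def mape(actual, forecast):
    """Mean absolute percentage error, the selection criterion used here."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def select_best(candidates, fit_predict, train, valid_in, valid_out):
    """Pick the candidate setting (e.g. hidden-layer size or training
    method) whose fitted model has the lowest validation MAPE."""
    scored = {c: mape(valid_out, fit_predict(c, train, valid_in)) for c in candidates}
    return min(scored, key=scored.get), scored
```

    The same loop serves both selection problems in the paper: once over hidden-layer sizes, once over training methods.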

  7. Antireflective coatings for multijunction solar cells under wide-angle ray bundles.

    PubMed

    Victoria, Marta; Domínguez, César; Antón, Ignacio; Sala, Gabriel

    2012-03-26

Two important aspects must be considered when optimizing antireflection coatings (ARCs) for multijunction solar cells to be used in concentrators: the angular light distribution over the cell created by the particular concentration system and the wide spectral bandwidth the solar cell is sensitive to. In this article, a numerical optimization procedure and its results are presented. The potential efficiency enhancement by means of ARC optimization is calculated for several concentrating PV systems. In addition, two methods for the direct characterization of ARCs are presented; their results show that real ARCs slightly underperform theoretical predictions.

  8. PSO-Assisted Development of New Transferable Coarse-Grained Water Models.

    PubMed

    Bejagam, Karteek K; Singh, Samrendra; An, Yaxin; Berry, Carter; Deshmukh, Sanket A

    2018-02-15

We have employed a two-to-one mapping scheme to develop three coarse-grained (CG) water models, namely, 1-, 2-, and 3-site CG models. Here, for the first time, particle swarm optimization (PSO) and gradient descent methods were coupled to optimize the force-field parameters of the CG models to reproduce the density, self-diffusion coefficient, and dielectric constant of real water at 300 K. The CG MD simulations of these new models, conducted with various timesteps, for different system sizes, and at a range of different temperatures, are able to predict the density, self-diffusion coefficient, dielectric constant, surface tension, heat of vaporization, hydration free energy, and isothermal compressibility of real water with excellent accuracy. The 1-site model is ∼3 and ∼4.5 times computationally more efficient than the 2- and 3-site models, respectively. To utilize the speed of the 1-site model and the electrostatic interactions offered by the 2- and 3-site models, CG MD simulations of a 1:1 combination of the 1- and 2-/3-site models were performed at 300 K. These mixture simulations could also predict the properties of real water with good accuracy. Two new CG models of benzene, consisting of beads with and without partial charges, were developed. All three water models showed good capacity to solvate these benzene models.

  9. Improved hybrid optimization algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for dealing with three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), the genetic algorithm (GA), and tabu search (TS). Several improvement strategies are also adopted: a stochastic disturbance factor is introduced into the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are replaced with a random linear method; and the tabu search algorithm is improved by appending a mutation operator. Through the combination of a variety of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be treated as a global optimization problem with multiple extrema and multiple parameters; this is the theoretical principle behind the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of any single algorithm and gives full play to the advantages of each. The method is validated on the standard benchmark Fibonacci sequences and on real protein sequences. Experiments show that the proposed method outperforms the single algorithms in the accuracy of the calculated protein sequence energy values, proving it to be an effective way to predict the structure of proteins.
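A minimal sketch of the PSO component with a stochastic disturbance factor added to the velocity update, as the abstract describes, run here on a toy sphere objective rather than the off-lattice protein energy. The parameter values and the Gaussian form of the disturbance are assumptions, and the GA and tabu-search stages of PGATS are omitted.

```python
import random

def pso_disturbed(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5,
                  disturb=0.01, seed=1):
    """Particle swarm optimization minimizing f, with a small Gaussian
    'stochastic disturbance' term in the velocity update to help the
    swarm escape local minima (an assumed form of the paper's factor)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d])
                             + disturb * rng.gauss(0, 1))  # stochastic disturbance
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:               # update personal best
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:              # and global best
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val
```

In the full hybrid, each swarm iteration would be interleaved with the modified GA crossover/mutation and the mutation-augmented tabu search.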

  10. Brain-computer interface analysis of a dynamic visuo-motor task.

    PubMed

    Logar, Vito; Belič, Aleš

    2011-01-01

The area of brain-computer interfaces (BCIs) represents one of the more interesting fields in neurophysiological research, since it investigates the development of machines that perform different transformations of the brain's "thoughts" into certain pre-defined actions. Experimental studies have reported some successful implementations of BCIs; however, much of the field still remains unexplored. According to some recent reports the phase coding of informational content is an important mechanism in the brain's function and cognition, and has the potential to explain various mechanisms of the brain's data transfer, but it has yet to be scrutinized in the context of brain-computer interfaces. Therefore, if the mechanism of phase coding is plausible, one should be able to extract the phase-coded content carried by brain signals using appropriate signal-processing methods. In our previous studies we have shown that by using a phase-demodulation-based signal-processing approach it is possible to decode some relevant information on the current motor action in the brain from electroencephalographic (EEG) data. In this paper the authors present a continuation of their previous work on the brain-information-decoding analysis of visuo-motor (VM) tasks. The present study shows that EEG data measured during more complex, dynamic visuo-motor (dVM) tasks carries enough information about the currently performed motor action to be successfully extracted by using the appropriate signal-processing and identification methods. The aim of this paper is therefore to present a mathematical model which, by means of the EEG measurements as its inputs, predicts the course of the wrist movements as applied by each subject during the task in simulated or real time (BCI analysis). However, several modifications to the existing methodology are needed to achieve optimal decoding results and a real-time, data-processing ability.
The information extracted from the EEG could, therefore, be further used for the development of a closed-loop, non-invasive, brain-computer interface. For the case of this study two types of measurements were performed, i.e., the electroencephalographic (EEG) signals and the wrist movements were measured simultaneously, during the subject's performance of a dynamic visuo-motor task. Wrist-movement predictions were computed by using the EEG data-processing methodology of double brain-rhythm filtering, double phase demodulation and double principal component analyses (PCA), each with a separate set of parameters. For the movement-prediction model a fuzzy inference system was used. The results have shown that the EEG signals measured during the dVM tasks carry enough information about the subjects' wrist movements for them to be successfully decoded using the presented methodology. Reasonably high values of the correlation coefficients suggest that the validation of the proposed approach is satisfactory. Moreover, since the causality of the rhythm filtering and the PCA transformation has been achieved, we have shown that these methods can also be used in a real-time, brain-computer interface. The study revealed that using non-causal, optimized methods yields better prediction results in comparison with the causal, non-optimized methodology; however, taking into account that the causality of these methods allows real-time processing, the minor decrease in prediction quality is acceptable. The study suggests that the methodology that was proposed in our previous studies is also valid for identifying the EEG-coded content during dVM tasks, albeit with various modifications, which allow better prediction results and real-time data processing. The results have shown that wrist movements can be predicted in simulated or real time; however, the results of the non-causal, optimized methodology (simulated) are slightly better. 
Nevertheless, the study has revealed that these methods should be suitable for use in the development of a non-invasive, brain-computer interface. Copyright © 2010 Elsevier B.V. All rights reserved.

  11. The trade-off between morphology and control in the co-optimized design of robots.

    PubMed

    Rosendo, Andre; von Atzigen, Marco; Iida, Fumiya

    2017-01-01

Conventionally, robot morphologies are developed through simulations and calculations, and different control methods are applied afterwards. Assuming that simulations and predictions are simplified representations of our reality, how sure can roboticists be that the chosen morphology is the most adequate for the possible control choices in the real world? Here we study the influence of the design parameters in the creation of a robot with a Bayesian morphology-control (MC) co-optimization process. A robot autonomously creates child robots from a set of possible design parameters and uses Bayesian Optimization (BO) to infer the best locomotion behavior from real-world experiments. Then, we systematically change from an MC co-optimization to a control-only (C) optimization, which better represents the traditional way that robots are developed, to explore the trade-off between these two methods. We show that although C processes can greatly improve the behavior of poor morphologies, such agents are still outperformed by MC co-optimization results with as few as 25 iterations. Our findings, on one hand, suggest that BO should be used in the design process of robots for both morphological and control parameters to reach optimal performance, and on the other hand, point to the shortcomings of current design methods in the face of new search techniques.

  12. Sequential quantum cloning under real-life conditions

    NASA Astrophysics Data System (ADS)

    Saberi, Hamed; Mardoukhi, Yousof

    2012-05-01

We consider a sequential implementation of the optimal quantum cloning machine of Gisin and Massar and propose optimization protocols for experimental realization of such a quantum cloner subject to real-life restrictions. We demonstrate how exploiting the matrix-product state (MPS) formalism and the ensuing variational optimization techniques reveals the intriguing algebraic structure of the Gisin-Massar output of the cloning procedure and brings about significant improvements to the optimality of the sequential cloning prescription of Delgado [Phys. Rev. Lett. 98, 150502 (2007)]. Our numerical results show that the orthodox paradigm of optimal quantum cloning can in practice be realized in a much more economical manner by utilizing considerably fewer informational and numerical resources than hitherto estimated. Instead of the previously predicted linear scaling of the required ancilla dimension D with the number of qubits n, our recipe allows a realization of such a sequential cloning setup with an experimentally manageable ancilla of dimension at most D=3 up to n=15 qubits. We also address satisfactorily the possibility of providing an optimal range of sequential ancilla-qubit interactions for optimal cloning of arbitrary states under realistic experimental circumstances when only a restricted class of such bipartite interactions can be engineered in practice.

  13. The trade-off between morphology and control in the co-optimized design of robots

    PubMed Central

    Iida, Fumiya

    2017-01-01

Conventionally, robot morphologies are developed through simulations and calculations, and different control methods are applied afterwards. Assuming that simulations and predictions are simplified representations of our reality, how sure can roboticists be that the chosen morphology is the most adequate for the possible control choices in the real world? Here we study the influence of the design parameters in the creation of a robot with a Bayesian morphology-control (MC) co-optimization process. A robot autonomously creates child robots from a set of possible design parameters and uses Bayesian Optimization (BO) to infer the best locomotion behavior from real-world experiments. Then, we systematically change from an MC co-optimization to a control-only (C) optimization, which better represents the traditional way that robots are developed, to explore the trade-off between these two methods. We show that although C processes can greatly improve the behavior of poor morphologies, such agents are still outperformed by MC co-optimization results with as few as 25 iterations. Our findings, on one hand, suggest that BO should be used in the design process of robots for both morphological and control parameters to reach optimal performance, and on the other hand, point to the shortcomings of current design methods in the face of new search techniques. PMID:29023482

  14. Visual anticipation biases conscious decision making but not bottom-up visual processing

    PubMed Central

    Mathews, Zenon; Cetnarski, Ryszard; Verschure, Paul F. M. J.

    2015-01-01

Prediction plays a key role in control of attention but it is not clear which aspects of prediction are most prominent in conscious experience. An evolving view on the brain is that it can be seen as a prediction machine that optimizes its ability to predict states of the world and the self through the top-down propagation of predictions and the bottom-up presentation of prediction errors. There are competing views though on whether prediction or prediction errors dominate the formation of conscious experience. Yet, the dynamic effects of prediction on perception, decision making and consciousness have been difficult to assess and to model. We propose a novel mathematical framework and a psychophysical paradigm that allow us to assess the hierarchical structuring of perceptual consciousness, its content, and the impact of predictions and/or errors on conscious experience, attention and decision-making. Using a displacement detection task combined with reverse correlation, we reveal signatures of the usage of prediction at three different levels of perceptual processing: bottom-up fast saccades, top-down driven slow saccades and conscious decisions. Our results suggest that the brain employs multiple parallel mechanisms at different levels of perceptual processing in order to shape effective sensory consciousness within a predicted perceptual scene. We further observe that bottom-up sensory and top-down predictive processes can be dissociated through cognitive load. We propose a probabilistic data association model from dynamical systems theory to model the predictive multi-scale bias in perceptual processing that we observe and its role in the formation of conscious experience. We propose that these results support the hypothesis that consciousness provides a time-delayed description of a task that is used to prospectively optimize real time control structures, rather than being engaged in the real-time control of behavior itself. PMID:25741290

  15. Modeling Stationary Lithium-Ion Batteries for Optimization and Predictive Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri A; Shi, Ying; Christensen, Dane T

Accurately modeling stationary battery storage behavior is crucial to understand and predict its limitations in demand-side management scenarios. In this paper, a lithium-ion battery model was derived to estimate lifetime and state-of-charge for building-integrated use cases. The proposed battery model aims to balance speed and accuracy when modeling battery behavior for real-time predictive control and optimization. In order to achieve these goals, a mixed modeling approach was taken, which incorporates regression fits to experimental data and an equivalent circuit to model battery behavior. A comparison of the proposed battery model output to actual data from the manufacturer validates the modeling approach taken in the paper. Additionally, a dynamic test case demonstrates the effects of using regression models to represent internal resistance and capacity fading.

  16. Modeling Stationary Lithium-Ion Batteries for Optimization and Predictive Control: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raszmann, Emma; Baker, Kyri; Shi, Ying

Accurately modeling stationary battery storage behavior is crucial to understand and predict its limitations in demand-side management scenarios. In this paper, a lithium-ion battery model was derived to estimate lifetime and state-of-charge for building-integrated use cases. The proposed battery model aims to balance speed and accuracy when modeling battery behavior for real-time predictive control and optimization. In order to achieve these goals, a mixed modeling approach was taken, which incorporates regression fits to experimental data and an equivalent circuit to model battery behavior. A comparison of the proposed battery model output to actual data from the manufacturer validates the modeling approach taken in the paper. Additionally, a dynamic test case demonstrates the effects of using regression models to represent internal resistance and capacity fading.
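The mixed modeling approach described above (regression fits plus an equivalent circuit) can be sketched roughly as follows. The polynomial fits and the simple series-resistance circuit are illustrative assumptions, not the paper's fitted model, and the coefficients in the example are invented.

```python
def terminal_voltage(soc, current, ocv_coeffs, r_coeffs):
    """Equivalent-circuit terminal voltage: OCV(soc) - I*R(soc), with
    open-circuit voltage and internal resistance given by polynomial
    regression fits in state of charge (discharge current is positive)."""
    ocv = sum(c * soc ** k for k, c in enumerate(ocv_coeffs))
    r = sum(c * soc ** k for k, c in enumerate(r_coeffs))
    return ocv - current * r

def step_soc(soc, current, dt_s, capacity_ah):
    """Coulomb-counting state-of-charge update over dt_s seconds."""
    return soc - current * dt_s / (3600.0 * capacity_ah)
```

Capacity fading would enter such a sketch by making `capacity_ah` itself a regression function of throughput, which is the dynamic effect the paper's test case examines.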

  17. Numerical simulation of the casting process of titanium tooth crowns and bridges.

    PubMed

    Wu, M; Augthun, M; Wagner, I; Sahm, P R; Spiekermann, H

    2001-06-01

The objectives of this paper were to simulate the casting process of titanium tooth crowns and bridges, and to predict and control porosity defects. A casting simulation software package, MAGMASOFT, was used. The geometry of the crowns, with fine details of the occlusal surface, was digitized by means of a laser measuring technique, then converted and read into the simulation software. Both mold filling and solidification were simulated; the shrinkage porosity was predicted by a "feeding criterion", and the gas pore sensitivity was studied based on the mold filling and solidification simulations. Two types of dental prostheses (a single-crown casting and a three-unit bridge) with various sprue designs were numerically "poured", and only one optimal design for each prosthesis was recommended for a real casting trial. With the numerically optimized design, real titanium dental prostheses (five replicas of each) were made on a centrifugal casting machine. All the castings underwent radiographic examination, and no porosity was detected in the cast prostheses. This indicates that numerical simulation is an efficient tool for dental casting design and porosity control. Copyright 2001 Kluwer Academic Publishers.

  18. Detecting an atomic clock frequency anomaly using an adaptive Kalman filter algorithm

    NASA Astrophysics Data System (ADS)

    Song, Huijie; Dong, Shaowu; Wu, Wenjun; Jiang, Meng; Wang, Weixiong

    2018-06-01

The abnormal frequencies of an atomic clock mainly include frequency jumps and frequency drift jumps. Atomic clock frequency anomaly detection is a key technique in time-keeping. The Kalman filter algorithm, as a linear optimal algorithm, has been widely used in real-time detection of abnormal frequency. In order to obtain an optimal state estimation, the observation model and dynamic model of the Kalman filter algorithm should satisfy Gaussian white noise conditions; the detection performance is degraded if anomalies affect either model. The idea of the adaptive Kalman filter algorithm, applied to clock frequency anomaly detection, is to use the residuals given by the prediction to build an 'adaptive factor'; the predicted state covariance matrix is corrected in real time by the adaptive factor. The results show that the model error is reduced and the detection performance is improved. The effectiveness of the algorithm is verified by frequency jump simulations, frequency drift jump simulations, and measured atomic clock data, using the chi-square test.
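A simplified scalar sketch of the residual-driven adaptation: when the normalized innovation is large, the predicted covariance is inflated by an adaptive factor and the sample is flagged as a possible anomaly. The identity dynamics, noise levels, threshold, and inflation rule are assumptions standing in for the paper's construction.

```python
def adaptive_kalman(z, q=1e-6, r=1e-4, threshold=3.0):
    """Scalar adaptive Kalman filter over measurements z.
    Returns the final state estimate and a per-step anomaly flag list
    (flags[i] refers to measurement z[i+1])."""
    x, p = z[0], 1.0
    flags = []
    for zk in z[1:]:
        xp, pp = x, p + q               # prediction (random-walk dynamics)
        resid = zk - xp                  # innovation (residual)
        s = pp + r                       # innovation variance
        norm = abs(resid) / s ** 0.5
        if norm > threshold:             # residual-driven adaptive factor:
            pp *= norm / threshold       # inflate predicted covariance
            s = pp + r
            flags.append(True)
        else:
            flags.append(False)
        k = pp / s                       # Kalman gain
        x = xp + k * resid               # measurement update
        p = (1 - k) * pp
    return x, flags
```

Inflating the covariance raises the gain on flagged samples, so the filter recovers quickly after a genuine frequency jump instead of trusting a stale model.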

  19. Improved GSO Optimized ESN Soft-Sensor Model of Flotation Process Based on Multisource Heterogeneous Information Fusion

    PubMed Central

    Wang, Jie-sheng; Han, Shuang; Shen, Na-na

    2014-01-01

For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, an echo state network (ESN) based fusion soft-sensor model optimized by the improved glowworm swarm optimization (GSO) algorithm is proposed. Firstly, the color features (saturation and brightness) and texture features (angular second moment, sum entropy, inertia moment, etc.) based on the grey-level co-occurrence matrix (GLCM) are adopted to describe the visual characteristics of the flotation froth image. Then the kernel principal component analysis (KPCA) method is used to reduce the dimensionality of the high-dimensional input vector composed of the flotation froth image characteristics and process data, and to extract the nonlinear principal components, in order to reduce the ESN dimension and network complexity. The ESN soft-sensor model of the flotation process is optimized by the GSO algorithm with a congestion factor. Simulation results show that the model has better generalization and prediction accuracy, meeting the online soft-sensor requirements of real-time control in the flotation process. PMID:24982935

  20. Grey-Markov prediction model based on background value optimization and central-point triangular whitenization weight function

    NASA Astrophysics Data System (ADS)

    Ye, Jing; Dang, Yaoguo; Li, Bingjun

    2018-01-01

The Grey-Markov forecasting model is a combination of the grey prediction model and the Markov chain, and it shows clear optimization effects for data sequences that are non-stationary and volatile. However, the state division in the traditional Grey-Markov forecasting model is mostly based on subjective real numbers, which directly affects the accuracy of the forecast values. To address this, this paper introduces the central-point triangular whitenization weight function into the state division to calculate the possibility of the research values lying in each state, reflecting the preference degrees of the different states in an objective way. On the other hand, background value optimization is applied to the traditional grey model to generate better-fitting data. By these means, the improved Grey-Markov forecasting model is built. Finally, taking grain production in Henan Province as an example, the model's validity is verified by comparison with GM(1,1) based on background value optimization and with the traditional Grey-Markov forecasting model.
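For orientation, a minimal GM(1,1) grey model with an adjustable background-value weight, the quantity that "background value optimization" tunes (the classical choice is 0.5). This is a generic textbook sketch, not the paper's optimized model, and it omits the Markov-chain state division entirely.

```python
import math

def gm11_forecast(x0, steps=1, lam=0.5):
    """GM(1,1) grey forecasting of the next `steps` values of sequence x0.
    `lam` is the background-value weight: z1[k] = lam*x1[k] + (1-lam)*x1[k-1]."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]            # accumulated series
    z1 = [lam * x1[i] + (1 - lam) * x1[i - 1] for i in range(1, n)]
    # least-squares fit of the grey equation x0[k] = -a*z1[k] + b
    m = n - 1
    sz = sum(z1)
    szz = sum(z * z for z in z1)
    sy = sum(x0[1:])
    szy = sum(z * y for z, y in zip(z1, x0[1:]))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det

    def x1_hat(k):  # time response of the accumulated series (0-based k)
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    # forecasts are first differences of the restored accumulated series
    return [x1_hat(n + i) - x1_hat(n + i - 1) for i in range(steps)]
```

Background value optimization would search over `lam` (rather than fixing 0.5) to minimize the fitting error on the historical data before the Markov correction is applied.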

  1. Real-time evaluation of polyphenol oxidase (PPO) activity in lychee pericarp based on weighted combination of spectral data and image features as determined by fuzzy neural network.

    PubMed

    Yang, Yi-Chao; Sun, Da-Wen; Wang, Nan-Nan; Xie, Anguo

    2015-07-01

A novel method of using the hyperspectral imaging technique with a weighted combination of spectral data and image features by fuzzy neural network (FNN) was proposed for real-time prediction of polyphenol oxidase (PPO) activity in lychee pericarp. Lychee images were obtained by a hyperspectral reflectance imaging system operating in the range of 400-1000 nm. A support vector machine-recursive feature elimination (SVM-RFE) algorithm was applied to eliminate variables with no or little information for the prediction from all bands, resulting in a reduced set of optimal wavelengths. Spectral information at the optimal wavelengths and image color features were then used respectively to develop calibration models for the prediction of PPO in pericarp during storage, and the results of the two models were compared. In order to improve the prediction accuracy, a decision strategy was developed based on a weighted combination of spectral data and image features, in which the weights were determined by FNN for a better estimation of PPO activity. The results showed that the combined decision model was the best among all of the calibration models, with high R(2) values of 0.9117 and 0.9072 and low RMSEs of 0.45% and 0.459% for calibration and prediction, respectively. These results demonstrate that the proposed weighted combined decision method has great potential for improving model performance. The proposed technique could be used for a better prediction of other internal and external quality attributes of fruits. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Continuous piecewise-linear, reduced-order electrochemical model for lithium-ion batteries in real-time applications

    NASA Astrophysics Data System (ADS)

    Farag, Mohammed; Fleckenstein, Matthias; Habibi, Saeid

    2017-02-01

    Model-order reduction and minimization of the CPU run-time while maintaining the model accuracy are critical requirements for real-time implementation of lithium-ion electrochemical battery models. In this paper, an isothermal, continuous, piecewise-linear, electrode-average model is developed by using an optimal knot placement technique. The proposed model reduces the univariate nonlinear function of the electrode's open circuit potential dependence on the state of charge to continuous piecewise regions. The parameterization experiments were chosen to provide a trade-off between extensive experimental characterization techniques and purely identifying all parameters using optimization techniques. The model is then parameterized in each continuous, piecewise-linear, region. Applying the proposed technique cuts down the CPU run-time by around 20%, compared to the reduced-order, electrode-average model. Finally, the model validation against real-time driving profiles (FTP-72, WLTP) demonstrates the ability of the model to predict the cell voltage accurately with less than 2% error.
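The continuous piecewise-linear reduction of the open-circuit-potential-vs-SOC curve can be sketched as interpolation between placed knots; the knot locations and voltages used in the example are illustrative, not the paper's optimally placed knots.

```python
from bisect import bisect_right

def piecewise_linear_ocv(soc, knots, values):
    """Continuous piecewise-linear approximation of open-circuit
    potential as a function of state of charge, evaluated between
    knots (sorted SOC breakpoints) with flat extrapolation outside."""
    if soc <= knots[0]:
        return values[0]
    if soc >= knots[-1]:
        return values[-1]
    i = bisect_right(knots, soc) - 1            # segment containing soc
    t = (soc - knots[i]) / (knots[i + 1] - knots[i])
    return values[i] + t * (values[i + 1] - values[i])
```

In the paper the knots are chosen by an optimal placement technique so that few linear regions approximate the nonlinear curve; each region then yields a linear sub-model that is cheap to evaluate in real time.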

  3. A Real-Time Brain-Machine Interface Combining Motor Target and Trajectory Intent Using an Optimal Feedback Control Design

    PubMed Central

    Shanechi, Maryam M.; Williams, Ziv M.; Wornell, Gregory W.; Hu, Rollin C.; Powers, Marissa; Brown, Emery N.

    2013-01-01

    Real-time brain-machine interfaces (BMI) have focused on either estimating the continuous movement trajectory or target intent. However, natural movement often incorporates both. Additionally, BMIs can be modeled as a feedback control system in which the subject modulates the neural activity to move the prosthetic device towards a desired target while receiving real-time sensory feedback of the state of the movement. We develop a novel real-time BMI using an optimal feedback control design that jointly estimates the movement target and trajectory of monkeys in two stages. First, the target is decoded from neural spiking activity before movement initiation. Second, the trajectory is decoded by combining the decoded target with the peri-movement spiking activity using an optimal feedback control design. This design exploits a recursive Bayesian decoder that uses an optimal feedback control model of the sensorimotor system to take into account the intended target location and the sensory feedback in its trajectory estimation from spiking activity. The real-time BMI processes the spiking activity directly using point process modeling. We implement the BMI in experiments consisting of an instructed-delay center-out task in which monkeys are presented with a target location on the screen during a delay period and then have to move a cursor to it without touching the incorrect targets. We show that the two-stage BMI performs more accurately than either stage alone. Correct target prediction can compensate for inaccurate trajectory estimation and vice versa. The optimal feedback control design also results in trajectories that are smoother and have lower estimation error. The two-stage decoder also performs better than linear regression approaches in offline cross-validation analyses. Our results demonstrate the advantage of a BMI design that jointly estimates the target and trajectory of movement and more closely mimics the sensorimotor control system. PMID:23593130

  4. Analytical design and evaluation of an active control system for helicopter vibration reduction and gust response alleviation

    NASA Technical Reports Server (NTRS)

    Taylor, R. B.; Zwicke, P. E.; Gold, P.; Miao, W.

    1980-01-01

An analytical study was conducted to define the basic configuration of an active control system for helicopter vibration and gust response alleviation. The study culminated in a control system design with two separate loops: a narrow band loop for vibration reduction and a wider band loop for gust response alleviation. The narrow band vibration loop utilizes the standard swashplate control configuration. The controller for the vibration loop is based on adaptive optimal control theory and is designed to adapt to any flight condition, including maneuvers and transients. The prime characteristic of the vibration control system is its real-time capability. The gust alleviation control system studied consists of optimal sampled data feedback gains together with an optimal one-step-ahead prediction. The prediction permits the estimation of the gust disturbance, which can then be used to minimize the gust effects on the helicopter.

  5. Top-down modulation of visual processing and knowledge after 250 ms supports object constancy of category decisions

    PubMed Central

    Schendan, Haline E.; Ganis, Giorgio

    2015-01-01

    People categorize objects more slowly when visual input is highly impoverished instead of optimal. While bottom-up models may explain a decision with optimal input, perceptual hypothesis testing (PHT) theories implicate top-down processes with impoverished input. Brain mechanisms and the time course of PHT are largely unknown. This event-related potential study used a neuroimaging paradigm that implicated prefrontal cortex in top-down modulation of occipitotemporal cortex. Subjects categorized more impoverished and less impoverished real and pseudo objects. PHT theories predict larger impoverishment effects for real than pseudo objects because top-down processes modulate knowledge only for real objects, but different PHT variants predict different timing. Consistent with parietal-prefrontal PHT variants, around 250 ms, the earliest impoverished real object interaction started on an N3 complex, which reflects interactive cortical activity for object cognition. N3 impoverishment effects localized to both prefrontal and occipitotemporal cortex for real objects only. The N3 also showed knowledge effects by 230 ms that localized to occipitotemporal cortex. Later effects reflected (a) word meaning in temporal cortex during the N400, (b) internal evaluation of prior decision and memory processes and secondary higher-order memory involving anterotemporal parts of a default mode network during posterior positivity (P600), and (c) response related activity in posterior cingulate during an anterior slow wave (SW) after 700 ms. Finally, response activity in supplementary motor area during a posterior SW after 900 ms showed impoverishment effects that correlated with RTs. Convergent evidence from studies of vision, memory, and mental imagery which reflects purely top-down inputs, indicates that the N3 reflects the critical top-down processes of PHT. A hybrid multiple-state interactive, PHT and decision theory best explains the visual constancy of object cognition. 
PMID:26441701

  6. Optimization-based power management of hybrid power systems with applications in advanced hybrid electric vehicles and wind farms with battery storage

    NASA Astrophysics Data System (ADS)

    Borhan, Hoseinali

    Modern hybrid electric vehicles and many stationary renewable power generation systems combine multiple power generating and energy storage devices to achieve an overall system-level efficiency and flexibility which is higher than their individual components. The power or energy management control, "brain" of these "hybrid" systems, determines adaptively and based on the power demand the power split between multiple subsystems and plays a critical role in overall system-level efficiency. This dissertation proposes that a receding horizon optimal control (aka Model Predictive Control) approach can be a natural and systematic framework for formulating this type of power management controls. More importantly the dissertation develops new results based on the classical theory of optimal control that allow solving the resulting optimal control problem in real-time, in spite of the complexities that arise due to several system nonlinearities and constraints. The dissertation focus is on two classes of hybrid systems: hybrid electric vehicles in the first part and wind farms with battery storage in the second part. The first part of the dissertation proposes and fully develops a real-time optimization-based power management strategy for hybrid electric vehicles. Current industry practice uses rule-based control techniques with "else-then-if" logic and look-up maps and tables in the power management of production hybrid vehicles. These algorithms are not guaranteed to result in the best possible fuel economy and there exists a gap between their performance and a minimum possible fuel economy benchmark. Furthermore, considerable time and effort are spent calibrating the control system in the vehicle development phase, and there is little flexibility in real-time handling of constraints and re-optimization of the system operation in the event of changing operating conditions and varying parameters. 
In addition, a proliferation of different powertrain configurations may result in the need for repeated control system redesign. To address these shortcomings, we formulate the power management problem as a nonlinear and constrained optimal control problem. Solving this optimal control problem in real time on chronometric- and memory-constrained automotive microcontrollers is quite challenging; the computational complexity is due to the highly nonlinear dynamics of the powertrain subsystems, the mixed-integer switching modes of their operation, and the time-varying and nonlinear hard constraints that system variables must satisfy. The main contribution of the first part of the dissertation is that it establishes methods for systematic, step-by-step improvements in fuel economy while keeping the algorithm's computational requirements within a real-time implementable framework. More specifically, a linear time-varying model predictive control approach is employed first, which uses sequential quadratic programming to find sub-optimal solutions to the power management problem. Next, the objective function is further refined and broken into a short-horizon and a long-horizon segment; the latter is approximated as a function of the state using the connection between the Pontryagin minimum principle and the Hamilton-Jacobi-Bellman equation. The power management problem is then solved using a nonlinear MPC framework with a dynamic programming solver, and the fuel economy is further improved. Typical simplifying academic assumptions are kept to a minimum throughout this work, thanks to close collaboration with research scientists at Ford research labs and their stringent requirement that the proposed solutions be tested on high-fidelity production models. Simulation results on a high-fidelity model of a hybrid electric vehicle over multiple standard driving cycles reveal the potential for substantial fuel economy gains.
To address the control calibration challenges, we also present a novel and fast calibration technique utilizing parallel computing techniques. The second part of this dissertation presents an optimization-based control strategy for the power management of a wind farm with battery storage. The strategy seeks to minimize the error between the power delivered by the wind farm with battery storage and the power demand from an operator. In addition, the strategy attempts to maximize battery life. The control strategy has two main stages. The first stage produces a family of control solutions that minimize the power error subject to the battery constraints over an optimization horizon. These solutions are parameterized by a given value for the state of charge at the end of the optimization horizon. The second stage screens the family of control solutions to select one attaining an optimal balance between power error and battery life. The battery life model used in this stage is a weighted Amp-hour (Ah) throughput model. The control strategy is modular, allowing for more sophisticated optimization models in the first stage, or more elaborate battery life models in the second stage. The strategy is implemented in real-time in the framework of Model Predictive Control (MPC).
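
    The receding-horizon idea above can be sketched in a few lines: at each control instant, optimize the power split over a short look-ahead window subject to state-of-charge limits, apply only the first decision, and let the horizon recede. The fuel model, power levels, and battery parameters below are invented for illustration and are far simpler than the dissertation's production-model setup.

```python
# Hypothetical receding-horizon power split between an engine and a battery.
# Each candidate battery power is held constant over a short horizon; the
# cheapest feasible candidate (by a toy fuel model, under SOC limits) wins,
# and only this first decision would be applied before re-planning.

def fuel_rate(engine_kw):
    """Toy convex fuel model: idle burn plus load-dependent terms (illustrative)."""
    return 0.5 + 0.01 * engine_kw + 0.0004 * engine_kw ** 2

def plan_step(soc, demand, horizon_demands,
              soc_min=0.3, soc_max=0.9, cap_kwh=5.0, dt_h=0.01):
    """Return the first battery-power decision (kW, positive = discharge)."""
    best = None
    for batt in range(-20, 21, 5):            # candidate battery powers, kW
        s, cost, feasible = soc, 0.0, True
        for d in [demand] + list(horizon_demands):
            engine = max(d - batt, 0.0)       # engine covers the remainder
            s -= batt * dt_h / cap_kwh        # SOC bookkeeping
            if not soc_min <= s <= soc_max:
                feasible = False
                break
            cost += fuel_rate(engine) * dt_h  # accumulated fuel over horizon
        if feasible and (best is None or cost < best[0]):
            best = (cost, batt)
    return best[1]

decision = plan_step(soc=0.6, demand=40.0, horizon_demands=[40.0, 40.0, 40.0])
```

    A real MPC optimizes a full control sequence rather than a constant candidate (e.g., with the dynamic programming solver mentioned above), but the plan/apply/recede pattern is the same.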

  7. Taxi Time Prediction at Charlotte Airport Using Fast-Time Simulation and Machine Learning Techniques

    NASA Technical Reports Server (NTRS)

    Lee, Hanbong

    2016-01-01

    Accurate taxi time prediction is required for enabling efficient runway scheduling that can increase runway throughput and reduce taxi times and fuel consumption on the airport surface. Currently, NASA and American Airlines are jointly developing a decision-support tool called Spot and Runway Departure Advisor (SARDA) that assists airport ramp controllers in making gate pushback decisions and improving the overall efficiency of airport surface traffic. In this presentation, we propose to use Linear Optimized Sequencing (LINOS), a discrete-event fast-time simulation tool, to predict taxi times and provide the estimates to the runway scheduler in real-time airport operations. To assess its prediction accuracy, we also introduce a data-driven analytical method using machine learning techniques. These two taxi time prediction methods are evaluated with actual taxi time data obtained from the SARDA human-in-the-loop (HITL) simulation for Charlotte Douglas International Airport (CLT) using various performance measurement metrics. Based on the taxi time prediction results, we also discuss how the prediction accuracy can be affected by the operational complexity at this airport and how we can improve the fast-time simulation model before implementing it with an airport scheduling algorithm in a real-time environment.

  8. Real-Time Optimization and Control of Next-Generation Distribution

    Science.gov Websites

    Infrastructure | Grid Modernization | NREL. This project develops innovative, real-time optimization and control methods for next-generation distribution infrastructure.

  9. MO-G-18C-05: Real-Time Prediction in Free-Breathing Perfusion MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, H; Liu, W; Ruan, D

    Purpose: The aim is to minimize frame-wise difference errors caused by respiratory motion and eliminate the need for breath-holds in magnetic resonance imaging (MRI) sequences with long acquisition and repetition times (TRs). The technique is being applied to perfusion MRI using arterial spin labeling (ASL). Methods: Respiratory motion prediction (RMP) using navigator echoes was implemented in ASL. A least-squares method was used to extract the respiratory motion information from the 1D navigator. A generalized artificial neural network (ANN) with three layers was developed to simultaneously predict 10 time points forward in time and correct for respiratory motion during MRI acquisition. During the training phase, the parameters of the ANN were optimized to minimize the aggregated prediction error based on acquired navigator data. During real-time prediction, the trained ANN was applied to the most recent estimated displacement trajectory to determine, in real time, the amount of spatial correction. Results: The respiratory motion information extracted from the least-squares method can accurately represent the navigator profiles, with a normalized chi-square value of 0.037±0.015 across the training phase. During the 60-second training phase, the ANN successfully learned the respiratory motion pattern from the navigator training data. During real-time prediction, the ANN received displacement estimates and predicted the motion in the continuum of a 1.0 s prediction window. The ANN prediction was able to provide corrections for different respiratory states (i.e., inhalation/exhalation) during real-time scanning with a mean absolute error of < 1.8 mm. Conclusion: A new technique enabling free-breathing acquisition during MRI is being developed. A generalized ANN has demonstrated its efficacy in predicting a continuum of motion profiles for volumetric imaging based on navigator inputs.
Future work will enhance the robustness of the ANN and verify its effectiveness with human subjects. Research supported by National Institutes of Health National Cancer Institute Grant R01 CA159471-01.
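
    As a rough sketch of the prediction step, with a plain autoregressive model standing in for the paper's three-layer ANN and an idealized sinusoid standing in for navigator-derived displacement:

```python
import math

# Fit a linear AR(2) predictor x[t] = a*x[t-1] + b*x[t-2] to a synthetic
# breathing trace, then roll it forward 10 samples, mirroring the abstract's
# 10-point-ahead prediction window. (The study uses an ANN; AR(2) is a
# simplified stand-in, chosen because it is exact for a noiseless sinusoid.)

def fit_ar2(x):
    """Least-squares AR(2) coefficients via the 2x2 normal equations."""
    s11 = s12 = s22 = r1 = r2 = 0.0
    for t in range(2, len(x)):
        s11 += x[t-1] * x[t-1]
        s12 += x[t-1] * x[t-2]
        s22 += x[t-2] * x[t-2]
        r1 += x[t] * x[t-1]
        r2 += x[t] * x[t-2]
    det = s11 * s22 - s12 * s12
    return (r1 * s22 - r2 * s12) / det, (s11 * r2 - s12 * r1) / det

def predict_ahead(history, a, b, steps=10):
    """Iterate the fitted recursion to forecast several samples ahead."""
    buf = list(history[-2:])
    for _ in range(steps):
        buf.append(a * buf[-1] + b * buf[-2])
    return buf[2:]

trace = [math.sin(0.2 * t) for t in range(100)]   # idealized displacement trace
a, b = fit_ar2(trace)
forecast = predict_ahead(trace, a, b, steps=10)
```

    For a pure sinusoid the recursion x[t] = 2cos(w)x[t-1] - x[t-2] holds exactly, so the fitted coefficients recover it and the 10-step forecast matches the continuation of the trace.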

  10. An Arrival and Departure Time Predictor for Scheduling Communication in Opportunistic IoT

    PubMed Central

    Pozza, Riccardo; Georgoulas, Stylianos; Moessner, Klaus; Nati, Michele; Gluhak, Alexander; Krco, Srdjan

    2016-01-01

    In this article, an Arrival and Departure Time Predictor (ADTP) for scheduling communication in opportunistic Internet of Things (IoT) is presented. The proposed algorithm learns temporal patterns of encounters between IoT devices and predicts future arrival and departure times, and therefore future contact durations. By relying on such predictions, a neighbour discovery scheduler is proposed, capable of jointly optimizing discovery latency and power consumption in order to maximize communication time when contacts are expected with high probability and, at the same time, to save power when contacts are expected with low probability. A comprehensive performance evaluation with different sets of synthetic and real-world traces shows that ADTP performs favourably with respect to the previous state of the art. This prediction framework opens opportunities for transmission planners and schedulers optimizing not only neighbour discovery, but the entire communication process. PMID:27827909

  11. An Arrival and Departure Time Predictor for Scheduling Communication in Opportunistic IoT.

    PubMed

    Pozza, Riccardo; Georgoulas, Stylianos; Moessner, Klaus; Nati, Michele; Gluhak, Alexander; Krco, Srdjan

    2016-11-04

    In this article, an Arrival and Departure Time Predictor (ADTP) for scheduling communication in opportunistic Internet of Things (IoT) is presented. The proposed algorithm learns temporal patterns of encounters between IoT devices and predicts future arrival and departure times, and therefore future contact durations. By relying on such predictions, a neighbour discovery scheduler is proposed, capable of jointly optimizing discovery latency and power consumption in order to maximize communication time when contacts are expected with high probability and, at the same time, to save power when contacts are expected with low probability. A comprehensive performance evaluation with different sets of synthetic and real-world traces shows that ADTP performs favourably with respect to the previous state of the art. This prediction framework opens opportunities for transmission planners and schedulers optimizing not only neighbour discovery, but the entire communication process.

  12. Real time groove characterization combining partial least squares and SVR strategies: application to eddy current testing

    NASA Astrophysics Data System (ADS)

    Ahmed, S.; Salucci, M.; Miorelli, R.; Anselmi, N.; Oliveri, G.; Calmon, P.; Reboud, C.; Massa, A.

    2017-10-01

    A quasi real-time inversion strategy is presented for groove characterization of a conductive, non-ferromagnetic tube structure by exploiting eddy current testing (ECT) signals. The inversion problem is formulated within a non-iterative Learning-by-Examples (LBE) strategy. Within the framework of LBE, an efficient training strategy is adopted that combines feature extraction with a customized version of output space filling (OSF) adaptive sampling, in order to obtain an optimal training set during the offline phase. Partial Least Squares (PLS) and Support Vector Regression (SVR) are exploited for feature extraction and prediction, respectively, to achieve robust and accurate real-time inversion during the online phase.
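
    A minimal sketch of the offline/online split, with a one-component partial least squares projection for the feature-extraction stage and a plain least-squares fit standing in for the SVR (all data below is synthetic; the real pipeline operates on simulated ECT signals):

```python
# Offline phase: learn a PLS direction and a regression coefficient from
# training "signals". Online phase: project a new measurement and predict.
# One-component PLS1: project X onto the direction most covariant with y.

def pls1_fit(X, y):
    n, p = len(X), len(X[0])
    w = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]                                       # loading direction
    t = [sum(X[i][j] * w[j] for j in range(p)) for i in range(n)]   # latent scores
    b = sum(t[i] * y[i] for i in range(n)) / sum(v * v for v in t)  # regress y on score
    return w, b

def pls1_predict(x, w, b):
    return b * sum(xj * wj for xj, wj in zip(x, w))

# Synthetic training set whose target is linear in one latent direction.
X = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0], [-1.0, -2.0]]
y = [xi[0] + 2.0 * xi[1] for xi in X]       # "groove parameter" to invert for
w, b = pls1_fit(X, y)

estimate = pls1_predict([2.0, 4.0], w, b)   # quasi real-time online prediction
```

    The offline cost (fitting) is paid once; the online prediction is a single dot product, which is what makes the quasi real-time claim plausible.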

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simonetto, Andrea; Dall'Anese, Emiliano

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
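
    The prediction-correction pattern is easy to see on a scalar time-varying problem, minimizing (x - a(t))^2: predict by following the estimated drift of the optimum, then correct with a plain first-order gradient step (no Hessian inverse involved). This is a toy instance, not the paper's constrained algorithm.

```python
import math

def track(a_of_t, steps=200, dt=0.05, lr=0.3):
    """Track the minimizer of f(x, t) = (x - a(t))^2 as a(t) drifts."""
    x = 3.0                      # deliberately poor initial guess
    a_prev = a_of_t(0.0)
    for k in range(1, steps + 1):
        a = a_of_t(k * dt)
        x += a - a_prev          # prediction: follow the optimum's estimated drift
        a_prev = a
        x -= lr * 2 * (x - a)    # correction: gradient step on (x - a)^2
    return x, a_prev

x, a = track(lambda t: math.sin(t))
tracking_error = abs(x - a)
```

    With the prediction step compensating the drift, the residual error contracts geometrically at each correction, so the iterate locks onto the moving optimum.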

  14. Actinobacteria consortium as an efficient biotechnological tool for mixed polluted soil reclamation: Experimental factorial design for bioremediation process optimization.

    PubMed

    Aparicio, Juan Daniel; Raimondo, Enzo Emanuel; Gil, Raúl Andrés; Benimeli, Claudia Susana; Polti, Marta Alejandra

    2018-01-15

    The objective of the present work was to establish optimal biological and physicochemical parameters in order to remove simultaneously lindane and Cr(VI) at high and/or low pollutants concentrations from the soil by an actinobacteria consortium formed by Streptomyces sp. M7, MC1, A5, and Amycolatopsis tucumanensis AB0. Also, the final aim was to treat real soils contaminated with Cr(VI) and/or lindane from the Northwest of Argentina employing the optimal biological and physicochemical conditions. In this sense, after determining the optimal inoculum concentration (2 g kg-1), an experimental design model with four factors (temperature, moisture, initial concentration of Cr(VI) and lindane) was employed for predicting the system behavior during bioremediation process. According to response optimizer, the optimal moisture level was 30% for all bioremediation processes. However, the optimal temperature was different for each situation: for low initial concentrations of both pollutants, the optimal temperature was 25°C; for low initial concentrations of Cr(VI) and high initial concentrations of lindane, the optimal temperature was 30°C; and for high initial concentrations of Cr(VI), the optimal temperature was 35°C. In order to confirm the model adequacy and the validity of the optimization procedure, experiments were performed in six real contaminated soils samples. The defined actinobacteria consortium reduced the contaminants concentrations in five of the six samples, by working at laboratory scale and employing the optimal conditions obtained through the factorial design. Copyright © 2017 Elsevier B.V. All rights reserved.
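
    The factorial-design machinery can be illustrated with a small enumeration: sweep the factor-level combinations against a response model and pick the best-predicted condition. The response function below is purely hypothetical, not the fitted model from the study.

```python
from itertools import product

temperatures = [25, 30, 35]        # degrees C
moistures = [20, 30, 40]           # percent
cr_levels = ["low", "high"]        # initial Cr(VI) concentration
lindane_levels = ["low", "high"]   # initial lindane concentration

def predicted_removal(temp, moisture, cr, lindane):
    """Mock response surface: peaks at 30% moisture; the best temperature
    shifts with pollutant load, echoing the pattern reported above
    (invented numbers, for illustration only)."""
    score = 100 - abs(moisture - 30)
    target_temp = 35 if cr == "high" else (30 if lindane == "high" else 25)
    return score - abs(temp - target_temp)

# Full factorial sweep over all factor-level combinations.
design = list(product(temperatures, moistures, cr_levels, lindane_levels))
best = max(design, key=lambda run: predicted_removal(*run))
```

    In practice the response surface is fitted from the designed experiments rather than assumed, and the "response optimizer" searches it the same way.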

  15. Real-Time Station Grouping under Dynamic Traffic for IEEE 802.11ah

    PubMed Central

    Tian, Le; Latré, Steven

    2017-01-01

    IEEE 802.11ah, marketed as Wi-Fi HaLow, extends Wi-Fi to the sub-1 GHz spectrum. Through a number of physical layer (PHY) and media access control (MAC) optimizations, it aims to bring greatly increased range, energy-efficiency, and scalability. This makes 802.11ah the perfect candidate for providing connectivity to Internet of Things (IoT) devices. One of these new features, referred to as the Restricted Access Window (RAW), focuses on improving scalability in highly dense deployments. RAW divides stations into groups and reduces contention and collisions by only allowing channel access to one group at a time. However, the standard does not dictate how to determine the optimal RAW grouping parameters. The optimal parameters depend on the current network conditions, and it has been shown that incorrect configuration severely impacts throughput, latency and energy efficiency. In this paper, we propose a traffic-adaptive RAW optimization algorithm (TAROA) to adapt the RAW parameters in real time based on the current traffic conditions, optimized for sensor networks in which each sensor transmits packets with a certain (predictable) frequency and may change the transmission frequency over time. The TAROA algorithm is executed at each target beacon transmission time (TBTT), and it first estimates the packet transmission interval of each station based only on packet transmission information obtained by the access point (AP) during the last beacon interval. Then, TAROA determines the RAW parameters and assigns stations to RAW slots based on this estimated transmission frequency. The simulation results show that, compared to enhanced distributed channel access/distributed coordination function (EDCA/DCF), the TAROA algorithm can significantly improve the performance of IEEE 802.11ah dense networks in terms of throughput, especially when hidden nodes exist, although it does not always achieve better latency performance. 
This paper contributes a practical approach to optimizing RAW grouping under dynamic traffic in real time, which is a major leap towards applying the RAW mechanism in real-life IoT networks. PMID:28677617

  16. Real-Time Station Grouping under Dynamic Traffic for IEEE 802.11ah.

    PubMed

    Tian, Le; Khorov, Evgeny; Latré, Steven; Famaey, Jeroen

    2017-07-04

    IEEE 802.11ah, marketed as Wi-Fi HaLow, extends Wi-Fi to the sub-1 GHz spectrum. Through a number of physical layer (PHY) and media access control (MAC) optimizations, it aims to bring greatly increased range, energy-efficiency, and scalability. This makes 802.11ah the perfect candidate for providing connectivity to Internet of Things (IoT) devices. One of these new features, referred to as the Restricted Access Window (RAW), focuses on improving scalability in highly dense deployments. RAW divides stations into groups and reduces contention and collisions by only allowing channel access to one group at a time. However, the standard does not dictate how to determine the optimal RAW grouping parameters. The optimal parameters depend on the current network conditions, and it has been shown that incorrect configuration severely impacts throughput, latency and energy efficiency. In this paper, we propose a traffic-adaptive RAW optimization algorithm (TAROA) to adapt the RAW parameters in real time based on the current traffic conditions, optimized for sensor networks in which each sensor transmits packets with a certain (predictable) frequency and may change the transmission frequency over time. The TAROA algorithm is executed at each target beacon transmission time (TBTT), and it first estimates the packet transmission interval of each station based only on packet transmission information obtained by the access point (AP) during the last beacon interval. Then, TAROA determines the RAW parameters and assigns stations to RAW slots based on this estimated transmission frequency. The simulation results show that, compared to enhanced distributed channel access/distributed coordination function (EDCA/DCF), the TAROA algorithm can significantly improve the performance of IEEE 802.11ah dense networks in terms of throughput, especially when hidden nodes exist, although it does not always achieve better latency performance. 
This paper contributes a practical approach to optimizing RAW grouping under dynamic traffic in real time, which is a major leap towards applying the RAW mechanism in real-life IoT networks.
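
    The two steps of TAROA described above can be caricatured in a few lines: estimate each station's transmission interval from the packets seen during the last beacon interval, then assign stations to RAW slots. The balancing heuristic below is an illustrative stand-in, not the paper's exact assignment rule.

```python
def estimate_interval(timestamps):
    """Mean inter-arrival time of one station's packets in a beacon interval."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return sum(gaps) / len(gaps)

def assign_slots(intervals, n_slots):
    """Greedy balancing: next most frequent station goes to the least loaded slot."""
    loads = [0.0] * n_slots
    slots = {}
    for station, interval in sorted(intervals.items(), key=lambda kv: kv[1]):
        slot = loads.index(min(loads))
        slots[station] = slot
        loads[slot] += 1.0 / interval     # load measured as packet rate
    return slots

# Packet timestamps observed by the AP during the last beacon interval.
observed = {
    "sta1": [0.0, 0.1, 0.2, 0.3],
    "sta2": [0.0, 0.5, 1.0],
    "sta3": [0.0, 0.25, 0.5],
}
intervals = {s: estimate_interval(ts) for s, ts in observed.items()}
grouping = assign_slots(intervals, n_slots=2)
```

    Spreading the heaviest transmitters across slots is one simple way to equalize contention within each RAW group.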

  17. Drug delivery optimization through Bayesian networks.

    PubMed Central

    Bellazzi, R.

    1992-01-01

    This paper describes how Bayesian Networks can be used in combination with compartmental models to plan Recombinant Human Erythropoietin (r-HuEPO) delivery in the treatment of anemia of chronic uremic patients. Past measurements of hematocrit or hemoglobin concentration in a patient during the therapy can be exploited to adjust the parameters of a compartmental model of the erythropoiesis. This adaptive process allows more accurate patient-specific predictions, and hence a more rational dosage planning. We describe a drug delivery optimization protocol, based on our approach. Some results obtained on real data are presented. PMID:1482938
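
    The adaptive loop above can be caricatured with a scalar Bayesian update: refine a patient-specific response parameter as measurements arrive, then plan the dose toward a target. The one-parameter "model" and all numbers are invented for illustration; the paper couples Bayesian networks to a compartmental model of erythropoiesis.

```python
def bayes_update(prior_mean, prior_var, measurement, noise_var):
    """Conjugate normal update: precision-weighted blend of prior and evidence."""
    k = prior_var / (prior_var + noise_var)        # Kalman-style gain
    return prior_mean + k * (measurement - prior_mean), (1 - k) * prior_var

# Prior belief about response (hypothetical units: g/dL rise per 1000 IU r-HuEPO).
mean, var = 0.5, 0.04
for observed_response in [0.30, 0.34, 0.32]:       # inferred from past visits
    mean, var = bayes_update(mean, var, observed_response, noise_var=0.02)

target_rise = 1.0                                  # desired hemoglobin increase
planned_dose = target_rise / mean                  # dose units for the next cycle
```

    Each measurement shrinks the posterior variance, so the dosage plan becomes increasingly patient-specific, which is the essence of the adaptive protocol described above.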

  18. Control and Optimization of Electric Ship Propulsion Systems with Hybrid Energy Storage

    NASA Astrophysics Data System (ADS)

    Hou, Jun

    Electric ships experience large propulsion-load fluctuations on their drive shaft due to encountered waves and the rotational motion of the propeller, affecting the reliability of the shipboard power network and causing wear and tear. This dissertation explores new solutions to address these fluctuations by integrating a hybrid energy storage system (HESS) and developing energy management strategies (EMS). Advanced electric propulsion drive concepts are developed to improve energy efficiency, performance and system reliability by integrating HESS, developing advanced control solutions and system integration strategies, and creating tools (including models and testbed) for design and optimization of hybrid electric drive systems. A ship dynamics model which captures the underlying physical behavior of the electric ship propulsion system is developed to support control development and system optimization. To evaluate the effectiveness of the proposed control approaches, a state-of-the-art testbed has been constructed which includes a system controller, Li-Ion battery and ultra-capacitor (UC) modules, a high-speed flywheel, electric motors with their power electronic drives, DC/DC converters, and rectifiers. The feasibility and effectiveness of HESS are investigated and analyzed. Two different HESS configurations, namely battery/UC (B/UC) and battery/flywheel (B/FW), are studied and analyzed to provide insights into the advantages and limitations of each configuration. Battery usage, loss analysis, and sensitivity to battery aging are also analyzed for each configuration. In order to enable real-time application and achieve desired performance, a model predictive control (MPC) approach is developed, where a state-of-charge (SOC) reference for the flywheel (in B/FW) or the UC (in B/UC) is used to address the limitations imposed by short predictive horizons, because short predictive horizons ignore the benefits of operating the flywheel and UC in their high-efficiency ranges. 
Given the multi-frequency characteristics of load fluctuations, a filter-based control strategy is developed to illustrate the importance of coordination within the HESS. Without proper control strategies, the HESS solution could be worse than a single energy storage system solution. The proposed HESS, when introduced into an existing shipboard electrical propulsion system, will interact with the power generation systems. A model-based analysis is performed to evaluate the interactions of the multiple power sources when a hybrid energy storage system is introduced. The study has revealed undesirable interactions when the controls are not coordinated properly, and led to the conclusion that a proper EMS is needed. Knowledge of the propulsion-load torque is essential for the proposed system-level EMS, but this load torque is immeasurable in most marine applications. To address this issue, a model-based approach is developed so that load torque estimation and prediction can be incorporated into the MPC. In order to evaluate the effectiveness of the proposed approach, an input observer with linear prediction is developed as an alternative approach to obtain the load estimation and prediction. Comparative studies are performed to illustrate the importance of load torque estimation and prediction, and demonstrate the effectiveness of the proposed approach in terms of improved efficiency, enhanced reliability, and reduced wear and tear. Finally, the real-time MPC algorithm has been implemented on a physical testbed. Three different efforts have been made to enable real-time implementation: a specially tailored problem formulation, an efficient optimization algorithm and a multi-core hardware implementation. Compared to the filter-based strategy, the proposed real-time MPC achieves superior performance, in terms of the enhanced system reliability, improved HESS efficiency, and extended battery life.
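
    The filter-based coordination mentioned above reduces, in its simplest form, to frequency separation: a low-pass filter routes the slow component of the propulsion load to the battery and the fast wave-induced fluctuations to the ultra-capacitor. The cutoff and load profile below are illustrative, not taken from the dissertation's testbed.

```python
import math

def split_load(load, alpha=0.1):
    """Battery follows an exponentially smoothed load; the UC takes the rest."""
    battery, uc, smooth = [], [], load[0]
    for p in load:
        smooth += alpha * (p - smooth)   # low-pass filtered demand
        battery.append(smooth)
        uc.append(p - smooth)            # high-frequency remainder
    return battery, uc

# Mean propulsion load with a wave-induced ripple on top (made-up profile).
load = [100.0 + 20.0 * math.sin(0.8 * k) for k in range(200)]
battery, uc = split_load(load)
```

    The split is exact by construction (battery + UC = load at every step), while the battery sees a much smaller power swing than the raw load, which is the point of the coordination.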

  19. Effect of experimental design on the prediction performance of calibration models based on near-infrared spectroscopy for pharmaceutical applications.

    PubMed

    Bondi, Robert W; Igne, Benoît; Drennen, James K; Anderson, Carl A

    2012-12-01

    Near-infrared spectroscopy (NIRS) is a valuable tool in the pharmaceutical industry, presenting opportunities for online analyses to achieve real-time assessment of intermediates and finished dosage forms. The purpose of this work was to investigate the effect of experimental designs on prediction performance of quantitative models based on NIRS using a five-component formulation as a model system. The following experimental designs were evaluated: five-level, full factorial (5-L FF); three-level, full factorial (3-L FF); central composite; I-optimal; and D-optimal. The factors for all designs were acetaminophen content and the ratio of microcrystalline cellulose to lactose monohydrate. Other constituents included croscarmellose sodium and magnesium stearate (content remained constant). Partial least squares-based models were generated using data from individual experimental designs that related acetaminophen content to spectral data. The effect of each experimental design was evaluated by determining the statistical significance of the difference in bias and standard error of the prediction for that model's prediction performance. The calibration model derived from the I-optimal design had similar prediction performance as did the model derived from the 5-L FF design, despite containing 16 fewer design points. It also outperformed all other models estimated from designs with similar or fewer numbers of samples. This suggested that experimental-design selection for calibration-model development is critical, and optimum performance can be achieved with efficient experimental designs (i.e., optimal designs).
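
    Why an optimal design can rival a full factorial with fewer runs is easy to demonstrate with the D-optimality criterion: greedily choose runs that maximize det(X'X) for a two-factor linear model with an intercept. This is a generic textbook heuristic, not the design software used in the study.

```python
from itertools import product

# Greedy D-optimal run selection on a 3x3 grid of coded factor levels
# (think: API content and filler ratio), each run prefixed by an intercept.

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def xtx(rows):
    """X'X for rows of the model matrix (intercept, factor A, factor B)."""
    return [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]

def d_score(rows, ridge=1e-6):
    """det(X'X + ridge*I); the tiny ridge keeps early rank-deficient picks sane."""
    m = xtx(rows)
    for i in range(3):
        m[i][i] += ridge
    return det3(m)

candidates = [(1.0, a, b) for a, b in product([-1.0, 0.0, 1.0], repeat=2)]

chosen = []
for _ in range(4):                       # build a 4-run design greedily
    best = max(candidates, key=lambda c: d_score(chosen + [c]))
    chosen.append(best)
```

    For this toy model the greedy search lands on the four corner runs, i.e., a 2^2 factorial embedded in the 3x3 candidate grid; richer models and candidate sets are where optimal designs start saving runs over full factorials.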

  20. Data assimilation for real-time prediction and reanalysis

    NASA Astrophysics Data System (ADS)

    Shprits, Y.; Kellerman, A. C.; Podladchikova, T.; Kondrashov, D. A.; Ghil, M.

    2015-12-01

    We discuss how data assimilation can be used for the analysis of individual satellite anomalies, for the development of long-term reconstructions that can be used for specification models, and to improve now-casting and forecasting of the radiation belts. We also discuss advanced data assimilation methods such as parameter estimation and smoothing. The 3D data-assimilative VERB code allows us to blend together data from GOES, RBSP A and RBSP B. A real-time prediction framework operating on our web site, based on GOES, RBSP A, RBSP B and ACE data and the 3D VERB code, is presented and discussed. In this paper we present a number of applications of data assimilation with the VERB 3D code. 1) The model with data assimilation allows us to propagate data to different pitch angles, energies, and L-shells and blends them together with the physics-based VERB code in an optimal way. We illustrate how we use this capability for the analysis of previous events and for obtaining a global and statistical view of the system. 2) The model predictions strongly depend on the initial conditions that are set up for the model; the model is therefore only as good as the initial conditions it uses. To produce the best possible initial condition, data from different sources (GOES, RBSP A and B, and our empirical model predictions based on ACE) are blended together in an optimal way by means of data assimilation as described above. The resulting initial condition does not have gaps, which allows us to make more accurate predictions.
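
    The blending step at the heart of the assimilation can be illustrated with a scalar Kalman filter: fuse a biased model forecast with noisy observations so that the analysis tracks the true state better than the model alone. Everything below is a one-dimensional toy; the actual system assimilates multi-satellite data into the 3D VERB code.

```python
def run_filter(steps=50, true_inc=1.0, model_inc=1.2, q=0.1, r=0.5):
    """Scalar Kalman filter: biased model forecast corrected by noisy data."""
    truth, est, var, model_only = 0.0, 0.0, 1.0, 0.0
    noise = [0.3, -0.2, 0.1, -0.3, 0.2]   # fixed pseudo-noise, for repeatability
    for k in range(steps):
        truth += true_inc                 # true state evolution
        model_only += model_inc           # model alone accumulates its bias
        forecast = est + model_inc        # forecast step with the imperfect model
        var += q                          # forecast uncertainty grows
        obs = truth + noise[k % len(noise)]
        gain = var / (var + r)            # relative trust in the observation
        est = forecast + gain * (obs - forecast)
        var *= 1 - gain                   # analysis uncertainty shrinks
    return truth, est, model_only

truth, analysis, model_only = run_filter()
```

    After 50 steps the free-running model has drifted far from the truth, while the assimilated analysis stays close, which is exactly the gap-free, optimally blended initial condition argued for above.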

  1. Predicting Power Output of Upper Body using the OMNI-RES Scale.

    PubMed

    Bautista, Iker J; Chirosa, Ignacio J; Tamayo, Ignacio Martín; González, Andrés; Robinson, Joseph E; Chirosa, Luis J; Robertson, Robert J

    2014-12-09

    The main aim of this study was to determine the optimal training zone for maximum power output. This was to be achieved through estimating mean bar velocity of the concentric phase of a bench press using a prediction equation. The values for the prediction equation would be obtained using OMNI-RES scale values of different loads of the bench press exercise. Sixty males (age 23.61 ± 2.81 years; body height 176.29 ± 6.73 cm; body mass 73.28 ± 4.75 kg) voluntarily participated in the study and were tested using an incremental protocol on a Smith machine to determine one repetition maximum (1RM) in the bench press exercise. A linear regression analysis produced a strong correlation (r = -0.94) between rating of perceived exertion (RPE) and mean bar velocity (Velmean). The Pearson correlation analysis between real power output (PotReal) and estimated power (PotEst) showed a strong correlation coefficient of r = 0.77, significant at a level of p = 0.01. Therefore, the OMNI-RES scale can be used to predict Velmean in the bench press exercise to control the intensity of the exercise. The positive relationship between PotReal and PotEst allowed for the identification of a maximum power-training zone.

  2. Predicting Power Output of Upper Body using the OMNI-RES Scale

    PubMed Central

    Bautista, Iker J.; Chirosa, Ignacio J.; Tamayo, Ignacio Martín; González, Andrés; Robinson, Joseph E.; Chirosa, Luis J.; Robertson, Robert J.

    2014-01-01

    The main aim of this study was to determine the optimal training zone for maximum power output. This was to be achieved through estimating mean bar velocity of the concentric phase of a bench press using a prediction equation. The values for the prediction equation would be obtained using OMNI–RES scale values of different loads of the bench press exercise. Sixty males (age 23.61 ± 2.81 years; body height 176.29 ± 6.73 cm; body mass 73.28 ± 4.75 kg) voluntarily participated in the study and were tested using an incremental protocol on a Smith machine to determine one repetition maximum (1RM) in the bench press exercise. A linear regression analysis produced a strong correlation (r = −0.94) between rating of perceived exertion (RPE) and mean bar velocity (Velmean). The Pearson correlation analysis between real power output (PotReal) and estimated power (PotEst) showed a strong correlation coefficient of r = 0.77, significant at a level of p = 0.01. Therefore, the OMNI–RES scale can be used to predict Velmean in the bench press exercise to control the intensity of the exercise. The positive relationship between PotReal and PotEst allowed for the identification of a maximum power-training zone. PMID:25713677

  3. RBSURFpred: Modeling protein accessible surface area in real and binary space using regularized and optimized regression.

    PubMed

    Tarafder, Sumit; Toukir Ahmed, Md; Iqbal, Sumaiya; Tamjidul Hoque, Md; Sohel Rahman, M

    2018-03-14

    Accessible surface area (ASA) of a protein residue is an effective feature for protein structure prediction, binding region identification, fold recognition, etc. Improving the prediction of ASA through the application of effective feature variables is a challenging but explorable task, especially in the field of machine learning. Among the existing predictors of ASA, REGAd3p is a highly accurate ASA predictor based on regularized exact regression with a polynomial kernel of degree 3. In this work, we present a new predictor, RBSURFpred, which extends REGAd3p on several dimensions by incorporating 58 physicochemical, evolutionary and structural properties into 9-tuple peptides via Chou's general PseAAC, which allowed us to obtain higher accuracies in predicting both real-valued and binary ASA. We have compared RBSURFpred for both real and binary space predictions with state-of-the-art predictors such as REGAd3p and SPIDER2. We have also carried out a rigorous analysis of the performance of RBSURFpred in terms of different amino acids and their properties, along with biologically relevant case studies. Its performance establishes RBSURFpred as a useful tool for the community. Copyright © 2018 Elsevier Ltd. All rights reserved.
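
    The flavor of "regularized exact regression with a polynomial kernel of degree 3" can be conveyed on a one-dimensional toy problem: ridge-regularized least squares on cubic features, solved by Gaussian elimination. The data and ridge strength are invented; the real predictor operates on high-dimensional peptide feature vectors.

```python
# Ridge-regularized cubic regression: solve (X'X + lam*I) c = X'y directly.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def ridge_poly3_fit(xs, ys, lam=1e-9):
    """Fit y ~ c0 + c1*x + c2*x^2 + c3*x^3 with a small ridge penalty."""
    X = [[1.0, x, x * x, x ** 3] for x in xs]
    A = [[sum(row[i] * row[j] for row in X) + (lam if i == j else 0.0)
          for j in range(4)] for i in range(4)]
    b = [sum(row[i] * yk for row, yk in zip(X, ys)) for i in range(4)]
    return solve(A, b)

xs = [-2.0, -1.0, 0.0, 1.0, 2.0, 3.0]
ys = [x ** 3 - 2 * x + 1 for x in xs]        # target generated by a cubic
coef = ridge_poly3_fit(xs, ys)
```

    With a tiny ridge the fit recovers the generating cubic almost exactly; larger ridge values trade a little bias for stability, which matters once the features are high-dimensional and correlated.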

  4. Peak-Seeking Optimization of Trim for Reduced Fuel Consumption: Architecture and Performance Predictions

    NASA Technical Reports Server (NTRS)

    Schaefer, Jacob; Brown, Nelson

    2013-01-01

    A peak-seeking control approach for real-time trim configuration optimization for reduced fuel consumption has been developed by researchers at the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center to address the goals of the NASA Environmentally Responsible Aviation project to reduce fuel burn and emissions. The peak-seeking control approach is based on a steepest-descent algorithm using a time-varying Kalman filter to estimate the gradient of a performance function of fuel flow versus control surface positions. In real-time operation, deflections of symmetric ailerons, trailing-edge flaps, and leading-edge flaps of an F/A-18 airplane (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) are controlled for optimization of fuel flow. This presentation describes the design and integration of this peak-seeking controller on a modified NASA F/A-18 airplane with research flight control computers. A research flight was performed to collect data to build a realistic model of the performance function and characterize measurement noise. This model was then implemented into a nonlinear six-degree-of-freedom F/A-18 simulation along with the peak-seeking control algorithm. With the goal of eventual flight tests, the algorithm was first evaluated in the improved simulation environment. Results from the simulation predict good convergence on minimum fuel flow with a 2.5-percent reduction in fuel flow relative to the baseline trim of the aircraft.

  5. Peak-Seeking Optimization of Trim for Reduced Fuel Consumption: Architecture and Performance Predictions

    NASA Technical Reports Server (NTRS)

    Schaefer, Jacob; Brown, Nelson A.

    2013-01-01

    A peak-seeking control approach for real-time trim configuration optimization for reduced fuel consumption has been developed by researchers at the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center to address the goals of the NASA Environmentally Responsible Aviation project to reduce fuel burn and emissions. The peak-seeking control approach is based on a steepest-descent algorithm using a time-varying Kalman filter to estimate the gradient of a performance function of fuel flow versus control surface positions. In real-time operation, deflections of symmetric ailerons, trailing-edge flaps, and leading-edge flaps of an F/A-18 airplane (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) are controlled for optimization of fuel flow. This paper presents the design and integration of this peak-seeking controller on a modified NASA F/A-18 airplane with research flight control computers. A research flight was performed to collect data to build a realistic model of the performance function and characterize measurement noise. This model was then implemented into a nonlinear six-degree-of-freedom F/A-18 simulation along with the peak-seeking control algorithm. With the goal of eventual flight tests, the algorithm was first evaluated in the improved simulation environment. Results from the simulation predict good convergence on minimum fuel flow with a 2.5-percent reduction in fuel flow relative to the baseline trim of the aircraft.

  6. Real-time identification of indoor pollutant source positions based on neural network locator of contaminant sources and optimized sensor networks.

    PubMed

    Vukovic, Vladimir; Tabares-Velasco, Paulo Cesar; Srebric, Jelena

    2010-09-01

    A growing interest in security and occupant exposure to contaminants has revealed a need for fast and reliable identification of contaminant sources during incidental situations. To determine potential contaminant source positions in outdoor environments, current state-of-the-art modeling methods use computational fluid dynamics simulations on parallel processors. In indoor environments, current tools match accidental contaminant distributions with cases from precomputed databases of possible concentration distributions. These methods require intensive computations in pre- and postprocessing. On the other hand, neural networks have emerged as a tool for rapid concentration forecasting of outdoor environmental contaminants such as nitrogen oxides or sulfur dioxide. All of these modeling methods depend on the type of sensors used for real-time measurements of contaminant concentrations. A review of existing sensor technologies revealed that no perfect sensor exists, but the intensity of work in this area promises better sensors in the near future. The main goal of the presented research study was to extend neural network modeling from outdoor to indoor identification of source positions, making this technology applicable to building indoor environments. The developed neural network Locator of Contaminant Sources was also used to optimize the number and placement of contaminant concentration sensors for real-time prediction of indoor contaminant source positions. Such prediction should take place within seconds after receiving real-time contaminant concentration sensor data. For the purpose of neural network training, a multizone program provided distributions of contaminant concentrations for known source positions throughout a test building. Trained networks had an output indicating contaminant source positions based on measured concentrations in different building zones. A validation case based on a real building layout and experimental data demonstrated the ability of this method to identify contaminant source positions. Future research intentions are focused on integration with real sensor networks and model improvements for more complicated contamination scenarios.
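
    The identification step can be pictured as signature matching: each candidate source position implies a pattern of concentrations at the sensors, and the measured pattern is matched against those signatures. A nearest-signature lookup below stands in for the paper's trained neural network; the zone names and concentration values are invented:

```python
# Per-source concentration signatures at 3 sensors (hypothetical values,
# as a multizone simulation would provide for known source positions)
signatures = {
    "zone_A": [0.9, 0.3, 0.1],
    "zone_B": [0.2, 0.8, 0.3],
    "zone_C": [0.1, 0.2, 0.9],
}

def locate(measured):
    # Pick the source zone whose signature is closest to the measurement
    def dist2(sig):
        return sum((m - s) ** 2 for m, s in zip(measured, sig))
    return min(signatures, key=lambda zone: dist2(signatures[zone]))

source = locate([0.25, 0.75, 0.35])
```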

  7. Real-time position reconstruction with hippocampal place cells.

    PubMed

    Guger, Christoph; Gener, Thomas; Pennartz, Cyriel M A; Brotons-Mas, Jorge R; Edlinger, Günter; Bermúdez I Badia, S; Verschure, Paul; Schaffelhofer, Stefan; Sanchez-Vives, Maria V

    2011-01-01

    Brain-computer interfaces (BCI) use the electroencephalogram, the electrocorticogram and trains of action potentials as inputs to analyze brain activity for communication purposes and/or the control of external devices. Thus far it is not known whether a BCI system can be developed that utilizes the states of brain structures situated well below the cortical surface, such as the hippocampus. In order to address this question we used the activity of hippocampal place cells (PCs) to predict the position of a rodent in real-time. First, spike activity was recorded from the hippocampus during foraging and analyzed off-line to optimize the spike-sorting and position-reconstruction algorithms. Then the spike activity was recorded and analyzed in real-time. The rat was running in a box of 80 cm × 80 cm and its locomotor movement was captured with a video tracking system. Data were acquired to calculate the rat's trajectories and to identify place fields. Then a Bayesian classifier was trained to predict the position of the rat given its neural activity. This information was used in subsequent trials to predict the rat's position in real-time. The real-time experiments were successfully performed and yielded an error between 12.2 and 17.4% using 5-6 neurons. It must be noted that the encoding step was done with data recorded before the real-time experiment, and comparable accuracies between off-line (mean error of 15.9% for three rats) and real-time experiments (mean error of 14.7%) were achieved. The experiment shows proof of principle that position reconstruction can be done in real-time, that PCs were stable, and that spike sorting was robust enough to generalize from the training run to the real-time reconstruction phase of the experiment. Real-time reconstruction may be used for a variety of purposes, including creating behavioral-neuronal feedback loops or implementing neuroprosthetic control.
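
    The Bayesian decoding step admits a compact sketch: given per-cell mean firing rates (tuning curves) for each position bin, the decoded position is the one maximizing the Poisson log-likelihood of the observed spike counts. The rates and counts below are invented for illustration:

```python
import math

# Hypothetical tuning curves: mean spike counts of 3 place cells in each
# of 4 position bins (invented numbers, not the paper's data)
rates = {
    0: [8.0, 1.0, 0.5],
    1: [1.0, 7.0, 1.0],
    2: [0.5, 1.0, 9.0],
    3: [0.2, 0.3, 0.4],
}

def decode(counts):
    # Poisson log-likelihood per position bin, uniform prior over bins
    def loglik(pos):
        return sum(k * math.log(lam) - lam for k, lam in zip(counts, rates[pos]))
    return max(rates, key=loglik)

pos = decode([7, 2, 0])  # spike counts observed in one time window
```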

  8. Real-Time Position Reconstruction with Hippocampal Place Cells

    PubMed Central

    Guger, Christoph; Gener, Thomas; Pennartz, Cyriel M. A.; Brotons-Mas, Jorge R.; Edlinger, Günter; Bermúdez i Badia, S.; Verschure, Paul; Schaffelhofer, Stefan; Sanchez-Vives, Maria V.

    2011-01-01

    Brain–computer interfaces (BCI) use the electroencephalogram, the electrocorticogram and trains of action potentials as inputs to analyze brain activity for communication purposes and/or the control of external devices. Thus far it is not known whether a BCI system can be developed that utilizes the states of brain structures situated well below the cortical surface, such as the hippocampus. In order to address this question we used the activity of hippocampal place cells (PCs) to predict the position of a rodent in real-time. First, spike activity was recorded from the hippocampus during foraging and analyzed off-line to optimize the spike-sorting and position-reconstruction algorithms. Then the spike activity was recorded and analyzed in real-time. The rat was running in a box of 80 cm × 80 cm and its locomotor movement was captured with a video tracking system. Data were acquired to calculate the rat's trajectories and to identify place fields. Then a Bayesian classifier was trained to predict the position of the rat given its neural activity. This information was used in subsequent trials to predict the rat's position in real-time. The real-time experiments were successfully performed and yielded an error between 12.2 and 17.4% using 5–6 neurons. It must be noted that the encoding step was done with data recorded before the real-time experiment, and comparable accuracies between off-line (mean error of 15.9% for three rats) and real-time experiments (mean error of 14.7%) were achieved. The experiment shows proof of principle that position reconstruction can be done in real-time, that PCs were stable, and that spike sorting was robust enough to generalize from the training run to the real-time reconstruction phase of the experiment. Real-time reconstruction may be used for a variety of purposes, including creating behavioral–neuronal feedback loops or implementing neuroprosthetic control. PMID:21808603

  9. Multi-time Scale Joint Scheduling Method Considering the Grid of Renewable Energy

    NASA Astrophysics Data System (ADS)

    Zhijun, E.; Wang, Weichen; Cao, Jin; Wang, Xin; Kong, Xiangyu; Quan, Shuping

    2018-01-01

    Prediction errors in renewable generation such as wind and solar power make power system dispatch difficult. In this paper, a multi-time-scale robust scheduling method is proposed to solve this problem. It reduces the impact of clean-energy prediction bias on the power grid by coordinating, across multiple time scales (day-ahead, intraday, real-time), the dispatched output of various power supplies such as hydropower, thermal power, wind power, and gas power. The method adopts robust scheduling to ensure the robustness of the scheduling scheme. By pricing the cost of wind curtailment and load shedding, it converts robustness into a risk cost and selects the uncertainty set that minimizes the overall cost. The validity of the method is verified by simulation.

  10. Modeling recombination processes and predicting energy conversion efficiency of dye sensitized solar cells from first principles

    NASA Astrophysics Data System (ADS)

    Ma, Wei; Meng, Sheng

    2014-03-01

    We present a set of algorithms, based on first-principles calculations alone, to accurately calculate key properties of a DSC device including sunlight harvest, electron injection, electron-hole recombination, and open-circuit voltage. Two series of D-π-A dyes are adopted as sample dyes. The short-circuit current can be predicted by calculating the dyes' photoabsorption and the electron-injection and recombination lifetimes using real-time time-dependent density functional theory (TDDFT) simulations. Open-circuit voltage can be reproduced by calculating the energy difference between the quasi-Fermi level of electrons in the semiconductor and the electrolyte redox potential, considering the influence of electron recombination. Based on timescales obtained from real-time TDDFT dynamics for excited states, the estimated power conversion efficiency of the DSC agrees well with experiment, with deviations below 1-2%. Light-harvesting efficiency, incident photon-to-electron conversion efficiency, and the current-voltage characteristics can also be well reproduced. The predicted efficiency can serve either as an ideal limit for optimizing the photovoltaic performance of a given dye, or as a virtual device that closely mimics the performance of a real device under different experimental settings.
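
    The final efficiency estimate is standard device bookkeeping: once short-circuit current density, open-circuit voltage, and fill factor are obtained from the simulations, the power conversion efficiency follows from the usual relation. A sketch with illustrative values (not the paper's), assuming the standard AM1.5 input power of 100 mW/cm²:

```python
def pce(jsc_mA_cm2, voc_V, ff, p_in_mW_cm2=100.0):
    # Power conversion efficiency (percent): eta = Jsc * Voc * FF / P_in.
    # Jsc in mA/cm^2 and P_in in mW/cm^2 keep the units consistent.
    return jsc_mA_cm2 * voc_V * ff / p_in_mW_cm2 * 100.0

# Illustrative device numbers, typical order of magnitude for a DSC
eta = pce(jsc_mA_cm2=15.0, voc_V=0.75, ff=0.70)
```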

  11. A general representation scheme for crystalline solids based on Voronoi-tessellation real feature values and atomic property data

    PubMed Central

    Jalem, Randy; Nakayama, Masanobu; Noda, Yusuke; Le, Tam; Takeuchi, Ichiro; Tateyama, Yoshitaka; Yamazaki, Hisatsugu

    2018-01-01

    Abstract Increasing attention has been paid to materials informatics approaches that promise efficient and fast discovery and optimization of functional inorganic materials. Technical breakthrough is urgently requested to advance this field and efforts have been made in the development of materials descriptors to encode or represent characteristics of crystalline solids, such as chemical composition, crystal structure, electronic structure, etc. We propose a general representation scheme for crystalline solids that lifts restrictions on atom ordering, cell periodicity, and system cell size based on structural descriptors of directly binned Voronoi-tessellation real feature values and atomic/chemical descriptors based on the electronegativity of elements in the crystal. Comparison was made vs. radial distribution function (RDF) feature vector, in terms of predictive accuracy on density functional theory (DFT) material properties: cohesive energy (CE), density (d), electronic band gap (BG), and decomposition energy (Ed). It was confirmed that the proposed feature vector from Voronoi real value binning generally outperforms the RDF-based one for the prediction of aforementioned properties. Together with electronegativity-based features, Voronoi-tessellation features from a given crystal structure that are derived from second-nearest neighbor information contribute significantly towards prediction. PMID:29707064

  12. A general representation scheme for crystalline solids based on Voronoi-tessellation real feature values and atomic property data.

    PubMed

    Jalem, Randy; Nakayama, Masanobu; Noda, Yusuke; Le, Tam; Takeuchi, Ichiro; Tateyama, Yoshitaka; Yamazaki, Hisatsugu

    2018-01-01

    Increasing attention has been paid to materials informatics approaches that promise efficient and fast discovery and optimization of functional inorganic materials. Technical breakthrough is urgently requested to advance this field and efforts have been made in the development of materials descriptors to encode or represent characteristics of crystalline solids, such as chemical composition, crystal structure, electronic structure, etc. We propose a general representation scheme for crystalline solids that lifts restrictions on atom ordering, cell periodicity, and system cell size based on structural descriptors of directly binned Voronoi-tessellation real feature values and atomic/chemical descriptors based on the electronegativity of elements in the crystal. Comparison was made vs. radial distribution function (RDF) feature vector, in terms of predictive accuracy on density functional theory (DFT) material properties: cohesive energy (CE), density ( d ), electronic band gap (BG), and decomposition energy (Ed). It was confirmed that the proposed feature vector from Voronoi real value binning generally outperforms the RDF-based one for the prediction of aforementioned properties. Together with electronegativity-based features, Voronoi-tessellation features from a given crystal structure that are derived from second-nearest neighbor information contribute significantly towards prediction.

  13. Using a water-food-energy nexus approach for optimal irrigation management during drought events in Nebraska

    NASA Astrophysics Data System (ADS)

    Campana, P. E.; Zhang, J.; Yao, T.; Melton, F. S.; Yan, J.

    2017-12-01

    Climate change and drought have severe impacts on the agricultural sector, affecting crop yields, water availability, and energy consumption for irrigation. Monitoring, assessing and mitigating the effects of climate change and drought on the agricultural and energy sectors are fundamental challenges that require investigation for water, food, and energy security. Using an integrated water-food-energy nexus approach, this study develops a comprehensive drought management system by integrating real-time drought monitoring with real-time irrigation management. The spatially explicit model developed, GIS-OptiCE, can be used for simulation, multi-criteria optimization and generation of forecasts to support irrigation management. To demonstrate the value of the approach, the model has been applied to one major corn region in Nebraska to study the effects of the 2012 drought on crop yield and irrigation water/energy requirements as compared to a wet year such as 2009. The water-food-energy interrelationships evaluated show that significant water volumes and energy are required to halt the negative effects of drought on the crop yield. The multi-criteria optimization problem applied in this study indicates that the optimal irrigation solutions do not necessarily correspond to those that would produce the maximum crop yields, depending on both water and economic constraints. In particular, crop pricing forecasts are extremely important to define the optimal irrigation management strategy. The model developed shows great potential in precision agriculture by providing near real-time data products including information on evapotranspiration, irrigation volumes, energy requirements, predicted crop growth, and nutrient requirements.

  14. Peak-Seeking Optimization of Spanwise Lift Distribution for Wings in Formation Flight

    NASA Technical Reports Server (NTRS)

    Hanson, Curtis E.; Ryan, Jack

    2012-01-01

    A method is presented for the in-flight optimization of the lift distribution across the wing for minimum drag of an aircraft in formation flight. The usual elliptical distribution that is optimal for a given wing with a given span is no longer optimal for the trailing wing in a formation due to the asymmetric nature of the encountered flow field. Control surfaces along the trailing edge of the wing can be configured to obtain a non-elliptical profile that is more optimal in terms of minimum combined induced and profile drag. Due to the difficult-to-predict nature of formation flight aerodynamics, a Newton-Raphson peak-seeking controller is used to identify in real time the best aileron and flap deployment scheme for minimum total drag. Simulation results show that the peak-seeking controller correctly identifies an optimal trim configuration that provides additional drag savings above those achieved with conventional anti-symmetric aileron trim.
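
    A Newton-Raphson peak-seeker differs from plain steepest descent in that it also estimates curvature, so the step size adapts to the local shape of the drag bucket. A minimal sketch on a synthetic quadratic drag function (the function, gains, and names are invented for illustration):

```python
def total_drag(a):
    # Synthetic drag bucket: minimum combined drag at surface setting a = 2.0
    return 50.0 + 3.0 * (a - 2.0) ** 2

def newton_peak_seek(a0, delta=0.05, iters=10):
    a = a0
    for _ in range(iters):
        f0 = total_drag(a)
        fp = total_drag(a + delta)  # probe deflection up
        fm = total_drag(a - delta)  # probe deflection down
        grad = (fp - fm) / (2 * delta)           # first derivative estimate
        curv = (fp - 2 * f0 + fm) / delta ** 2   # second derivative estimate
        a -= grad / curv  # Newton-Raphson step toward the drag minimum
    return a

a_opt = newton_peak_seek(a0=0.0)
```

    On a quadratic the Newton step lands on the minimum in one iteration; in flight the probe responses are noisy, which is why the paper's controller wraps this update in an estimation scheme.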

  15. The spectral basis of optimal error field correction on DIII-D

    DOE PAGES

    Paz-Soldan, Carlos A.; Buttery, Richard J.; Garofalo, Andrea M.; ...

    2014-04-28

    Here, experimental optimum error field correction (EFC) currents found in a wide breadth of dedicated experiments on DIII-D are shown to be consistent with the currents required to null the poloidal harmonics of the vacuum field which drive the kink mode near the plasma edge. This allows the identification of empirical metrics which predict optimal EFC currents with accuracy comparable to that of first-principles modeling which includes the ideal plasma response. While further metric refinements are desirable, this work suggests optimal EFC currents can be effectively fed-forward based purely on knowledge of the vacuum error field and basic equilibrium properties which are routinely calculated in real-time.

  16. Fusion of Optimized Indicators from Advanced Driver Assistance Systems (ADAS) for Driver Drowsiness Detection

    PubMed Central

    Daza, Iván G.; Bergasa, Luis M.; Bronte, Sebastián; Yebes, J. Javier; Almazán, Javier; Arroyo, Roberto

    2014-01-01

    This paper presents a non-intrusive approach for monitoring driver drowsiness using the fusion of several optimized indicators based on driver physical and driving performance measures, obtained from ADAS (Advanced Driver Assistant Systems) in simulated conditions. The paper is focused on real-time drowsiness detection technology rather than on long-term sleep/awake regulation prediction technology. We have developed our own vision system in order to obtain robust and optimized driver indicators able to be used in simulators and future real environments. These indicators are principally based on driver physical and driving performance skills. The fusion of several indicators, proposed in the literature, is evaluated using a neural network and a stochastic optimization method to obtain the best combination. We propose a new method for ground-truth generation based on a supervised Karolinska Sleepiness Scale (KSS). An extensive evaluation of indicators, derived from trials over a third generation simulator with several test subjects during different driving sessions, was performed. The main conclusions about the performance of single indicators and the best combinations of them are included, as well as the future works derived from this study. PMID:24412904

  17. Automated Dynamic Demand Response Implementation on a Micro-grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuppannagari, Sanmukh R.; Kannan, Rajgopal; Chelmis, Charalampos

    In this paper, we describe a system for real-time automated Dynamic and Sustainable Demand Response with sparse-data consumption prediction implemented on the University of Southern California campus microgrid. Supply-side approaches to resolving energy supply-load imbalance do not work at high levels of renewable energy penetration. Dynamic Demand Response (D²R) is a widely used demand-side technique to dynamically adjust electricity consumption during peak load periods. Our D²R system consists of accurate machine-learning-based energy consumption forecasting models that work with sparse data, coupled with fast and sustainable load curtailment optimization algorithms that provide the ability to dynamically adapt to changing supply-load imbalances in near real-time. Our Sustainable DR (SDR) algorithms attempt to distribute customer curtailment evenly across sub-intervals during a DR event and avoid expensive demand peaks during a few sub-intervals. They also ensure that each customer is penalized fairly in order to achieve the targeted curtailment. We develop near linear-time constant-factor approximation algorithms along with Polynomial Time Approximation Schemes (PTAS) for SDR curtailment that minimize the curtailment error, defined as the difference between the target and achieved curtailment values. Our SDR curtailment problem is formulated as an Integer Linear Program that optimally matches customers to curtailment strategies during a DR event while also explicitly accounting for customer strategy-switching overhead as a constraint. We demonstrate the results of our D²R system using real data from experiments performed on the USC smartgrid and show that 1) our prediction algorithms can very accurately predict energy consumption even with noisy or missing data and 2) our curtailment algorithms deliver DR with extremely low curtailment errors in the 0.01-0.05 kWh range.
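
    The curtailment-matching core can be shown in miniature: pick one strategy per customer so that total curtailment lands as close as possible to the target. Exhaustive search stands in for the paper's ILP/PTAS machinery, and the customers and kWh values are invented:

```python
from itertools import product

# Achievable curtailment (kWh) per customer under each strategy
# (hypothetical buildings and values)
strategies = {
    "bldg_A": [0.0, 1.2, 2.0],
    "bldg_B": [0.0, 0.8, 1.5],
    "bldg_C": [0.0, 0.5, 1.1],
}

def best_assignment(target):
    # Enumerate every strategy combination and keep the one whose total
    # curtailment minimizes |target - achieved| (the curtailment error)
    names = list(strategies)
    best, best_err = None, float("inf")
    for combo in product(*(strategies[n] for n in names)):
        err = abs(sum(combo) - target)
        if err < best_err:
            best, best_err = dict(zip(names, combo)), err
    return best, best_err

assign, err = best_assignment(target=3.3)
```

    The real formulation additionally constrains strategy-switching overhead and fairness across sub-intervals, which is what pushes it from enumeration toward an integer program.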

  18. Short-Term Load Forecasting Based Automatic Distribution Network Reconfiguration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang; Ding, Fei; Zhang, Yingchen

    In a traditional dynamic network reconfiguration study, the optimal topology is determined at every scheduled time point by using the real load data measured at that time. The development of the load forecasting technique can provide an accurate prediction of the load power that will happen in a future time and provide more information about load changes. With the inclusion of load forecasting, the optimal topology can be determined based on the predicted load conditions during a longer time period instead of using a snapshot of the load at the time when the reconfiguration happens; thus, the distribution system operator can use this information to better operate the system reconfiguration and achieve optimal solutions. This paper proposes a short-term load forecasting approach to automatically reconfigure distribution systems in a dynamic and pre-event manner. Specifically, a short-term and high-resolution distribution system load forecasting approach is proposed with a forecaster based on support vector regression and parallel parameters optimization. The network reconfiguration problem is solved by using the forecasted load continuously to determine the optimal network topology with the minimum amount of loss at the future time. The simulation results validate and evaluate the proposed approach.
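
    The forecast-then-reconfigure idea only needs a predictor mapping recent load to near-future load. The sketch below substitutes an ordinary least-squares AR(1) fit for the paper's support vector regression, purely to keep the example dependency-free; the load series is synthetic:

```python
def fit_ar1(series):
    # Fit load[t] ≈ a * load[t-1] + b by least squares (normal equations)
    x, y = series[:-1], series[1:]
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

load = [100.0, 102.0, 104.0, 106.0, 108.0]  # synthetic feeder load, kW
a, b = fit_ar1(load)
forecast = a * load[-1] + b  # one-step-ahead load prediction
```

    The forecast, rather than the instantaneous measurement, then drives the topology optimization over the coming period.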

  19. Short-Term Load Forecasting Based Automatic Distribution Network Reconfiguration: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang; Ding, Fei; Zhang, Yingchen

    In the traditional dynamic network reconfiguration study, the optimal topology is determined at every scheduled time point by using the real load data measured at that time. The development of load forecasting techniques can provide accurate prediction of the load power that will happen in future time and provide more information about load changes. With the inclusion of load forecasting, the optimal topology can be determined based on the predicted load conditions during the longer time period instead of using the snapshot of load at the time when the reconfiguration happens, and thus it can provide information to the distribution system operator (DSO) to better operate the system reconfiguration to achieve optimal solutions. Thus, this paper proposes a short-term load forecasting-based approach for automatically reconfiguring distribution systems in a dynamic and pre-event manner. Specifically, a short-term and high-resolution distribution system load forecasting approach is proposed with a support vector regression (SVR) based forecaster and parallel parameters optimization. The network reconfiguration problem is solved by using the forecasted load continuously to determine the optimal network topology with the minimum loss at the future time. The simulation results validate and evaluate the proposed approach.

  20. Short-Term Load Forecasting-Based Automatic Distribution Network Reconfiguration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang; Ding, Fei; Zhang, Yingchen

    In a traditional dynamic network reconfiguration study, the optimal topology is determined at every scheduled time point by using the real load data measured at that time. The development of the load forecasting technique can provide an accurate prediction of the load power that will happen in a future time and provide more information about load changes. With the inclusion of load forecasting, the optimal topology can be determined based on the predicted load conditions during a longer time period instead of using a snapshot of the load at the time when the reconfiguration happens; thus, the distribution system operator can use this information to better operate the system reconfiguration and achieve optimal solutions. This paper proposes a short-term load forecasting approach to automatically reconfigure distribution systems in a dynamic and pre-event manner. Specifically, a short-term and high-resolution distribution system load forecasting approach is proposed with a forecaster based on support vector regression and parallel parameters optimization. The network reconfiguration problem is solved by using the forecasted load continuously to determine the optimal network topology with the minimum amount of loss at the future time. The simulation results validate and evaluate the proposed approach.

  1. [Research on optimal modeling strategy for licorice extraction process based on near-infrared spectroscopy technology].

    PubMed

    Wang, Hai-Xia; Suo, Tong-Chuan; Yu, He-Shui; Li, Zheng

    2016-10-01

    The manufacture of traditional Chinese medicine (TCM) products always involves processing complex raw materials and real-time monitoring of the manufacturing process. In this study, we investigated different modeling strategies for the extraction process of licorice. Near-infrared spectra associated with the extraction time were used to determine the states of the extraction processes. Three modeling approaches, i.e., principal component analysis (PCA), partial least squares regression (PLSR) and parallel factor analysis-PLSR (PARAFAC-PLSR), were adopted for the prediction of the real-time status of the process. The overall results indicated that PCA, PLSR and PARAFAC-PLSR can effectively detect errors in the extraction procedure and predict the process trajectories, which has important significance for monitoring and controlling the extraction processes. Copyright© by the Chinese Pharmaceutical Association.

  2. Real-time control of combined surface water quantity and quality: polder flushing.

    PubMed

    Xu, M; van Overloop, P J; van de Giesen, N C; Stelling, G S

    2010-01-01

    In open water systems, keeping both water depths and water quality at specified values is critical for maintaining a 'healthy' water system. Many systems still require manual operation, at least for water quality management. When applying real-time control, both quantity and quality standards need to be met. In this paper, an artificial polder flushing case is studied. Model Predictive Control (MPC) is developed to control the system. In addition to MPC, a 'forward estimation' procedure is used to acquire water quality predictions for the simplified model used in MPC optimization. In order to illustrate the advantages of MPC, classical control [Proportional-Integral control (PI)] has been developed for comparison in the test case. The results show that both algorithms are able to control the polder flushing process, but MPC is more efficient in functionality and control flexibility.
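
    The contrast between PI and MPC is that MPC scores candidate control moves against a model over a prediction horizon before committing to the first move. A toy receding-horizon flushing controller for a single salinity state, with the model, horizon, and cost weights all invented for illustration:

```python
def step(s, u):
    # Toy polder model: each unit of flushing flow removes 10% of salinity
    return s * (1.0 - 0.1 * u)

def mpc_action(s, horizon=3, choices=(0.0, 0.5, 1.0), target=1.0, w_u=0.01):
    # Enumerate candidate first moves; roll the model forward over the
    # horizon with a simple greedy continuation, and pick the first move
    # with the lowest predicted tracking-plus-effort cost.
    best_u, best_cost = None, float("inf")
    for u0 in choices:
        cost, state, u = 0.0, s, u0
        for _ in range(horizon):
            state = step(state, u)
            cost += (state - target) ** 2 + w_u * u  # quality target + pumping effort
            u = choices[-1] if state > target else choices[0]
        if cost < best_cost:
            best_u, best_cost = u0, cost
    return best_u

u = mpc_action(s=5.0)  # salinity far above target: expect maximum flushing
```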

  3. How to test validity in orthodontic research: a mixed dentition analysis example.

    PubMed

    Donatelli, Richard E; Lee, Shin-Jae

    2015-02-01

    The data used to test the validity of a prediction method should be different from the data used to generate the prediction model. In this study, we explored whether an independent data set is mandatory for testing the validity of a new prediction method and how validity can be tested without independent new data. Several validation methods were compared in an example using the data from a mixed dentition analysis with a regression model. The validation errors of real mixed dentition analysis data and simulation data were analyzed for increasingly large data sets. The validation results of both the real and the simulation studies demonstrated that the leave-1-out cross-validation method had the smallest errors. The largest errors occurred in the traditional simple validation method. The differences between the validation methods diminished as the sample size increased. The leave-1-out cross-validation method seems to be an optimal validation method for improving the prediction accuracy in a data set with limited sample sizes. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
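
    Leave-one-out cross-validation refits the prediction model n times, each time holding out a single observation and scoring the prediction on it, so no case is used to both build and test the model. A sketch with a simple linear regressor and invented tooth-width pairs:

```python
def fit_line(pairs):
    # Ordinary least-squares fit of y = a*x + b
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def loocv_error(pairs):
    errs = []
    for i, (x, y) in enumerate(pairs):
        train = pairs[:i] + pairs[i + 1:]   # hold out one observation
        a, b = fit_line(train)
        errs.append(abs(a * x + b - y))     # validate on the held-out point
    return sum(errs) / len(errs)

# Hypothetical (predictor, outcome) tooth-width measurements in mm
data = [(21.0, 22.5), (22.0, 23.1), (23.0, 24.0), (24.0, 24.8), (25.0, 25.6)]
err = loocv_error(data)
```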

  4. Multi-Instance Metric Transfer Learning for Genome-Wide Protein Function Prediction.

    PubMed

    Xu, Yonghui; Min, Huaqing; Wu, Qingyao; Song, Hengjie; Ye, Bicui

    2017-02-06

    Multi-Instance (MI) learning has been proven to be effective for genome-wide protein function prediction problems where each training example is associated with multiple instances. Many studies in the literature have attempted to find an appropriate Multi-Instance Learning (MIL) method for genome-wide protein function prediction under a common assumption: the underlying distribution of the testing data (target domain, i.e., TD) is the same as that of the training data (source domain, i.e., SD). However, this assumption may be violated in real practice. To tackle this problem, in this paper, we propose a Multi-Instance Metric Transfer Learning (MIMTL) approach for genome-wide protein function prediction. In MIMTL, we first transfer the source domain distribution to the target domain distribution by utilizing the bag weights. Then, we construct a distance metric learning method with the reweighted bags. Finally, we develop an alternating optimization scheme for MIMTL. Comprehensive experimental evidence on seven real-world organisms verifies the effectiveness and efficiency of the proposed MIMTL approach over several state-of-the-art methods.
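
    The distribution-transfer step can be illustrated with one-dimensional importance weighting: source examples are reweighted by an estimated ratio of target to source frequency, so training emphasizes the regions the target domain actually occupies. Simple binned counting stands in for MIMTL's bag-weight estimation, and the data is synthetic:

```python
from collections import Counter

def importance_weights(source, target):
    # weight(x) ∝ P_target(bin of x) / P_source(bin of x), by counting;
    # inputs here are pre-binned integer features
    cs, ct = Counter(source), Counter(target)
    ns, nt = len(source), len(target)
    return [(ct[x] / nt) / (cs[x] / ns) for x in source]

src = [0, 0, 0, 1, 1, 2]   # source-domain feature bins (synthetic)
tgt = [1, 1, 2, 2, 2, 0]   # target-domain feature bins (synthetic)
w = importance_weights(src, tgt)
```

    Bins over-represented in the source (here bin 0) are down-weighted, while bins the target favors (bin 2) are up-weighted before the metric is learned.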

  5. RECOVERY ACT: DYNAMIC ENERGY CONSUMPTION MANAGEMENT OF ROUTING TELECOM AND DATA CENTERS THROUGH REAL-TIME OPTIMAL CONTROL (RTOC): Final Scientific/Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ron Moon

    This final scientific report documents the Industrial Technology Program (ITP) Stage 2 Concept Development effort on Data Center Energy Reduction and Management Through Real-Time Optimal Control (RTOC). Society is becoming increasingly dependent on information technology systems, driving exponential growth in demand for data center processing and an insatiable appetite for energy. David Raths noted, 'A 50,000-square-foot data center uses approximately 4 megawatts of power, or the equivalent of 57 barrels of oil a day.' The problem has become so severe that in some cases users are giving up raw performance for a better balance between performance and energy efficiency. Historically, power systems for data centers were crudely sized to meet maximum demand. Since many servers operate at 60%-90% of maximum power while utilizing an average of only 5% to 15% of their capability, there are huge inefficiencies in the consumption and delivery of power in these data centers. The goal of the 'Recovery Act: Decreasing Data Center Energy Use through Network and Infrastructure Control' is to develop a state-of-the-art approach for autonomously and intelligently reducing and managing data center power through real-time optimal control. Advances in microelectronics and software are creating the opportunity to realize significant data center power savings through the implementation of autonomous power management control algorithms. The first step toward realizing these savings was addressed in this study through the successful creation of a flexible and scalable mathematical model (equation) for data center behavior and the formulation of an acceptable, low-technical-risk market introduction strategy leveraging commercial hardware and software familiar to the data center market.
Follow-on Stage 3 Concept Development efforts include predictive modeling and simulation of algorithm performance, prototype demonstrations with representative data center equipment to verify requisite performance, and continued formation of commercial partnering agreements to ensure uninterrupted development and deployment of the real-time optimal control algorithm. As a software-implementable technique for reducing power consumption, the RTOC has two very desirable traits supporting rapid prototyping and, ultimately, widespread dissemination. First, very little capital is required for implementation: no major infrastructure modifications are required and there is no need to purchase expensive capital equipment. Second, the RTOC can be rolled out incrementally, so its effectiveness can be proven without a large-scale initial roll-out. Through the use of the Impact Projections Model provided by the DOE, monetary savings in excess of $100M in 2020 and billions by 2040 are predicted. In terms of energy savings, the model predicts a primary energy displacement of 260 trillion BTUs (about 76 billion kWh), or a 50% reduction in server power consumption. The model also predicts a corresponding reduction of pollutants such as SO2 and NOx in excess of 100,000 metric tonnes, assuming the RTOC is fully deployed. While additional development and prototyping are required to validate these predictions, the relatively low cost and ease of implementation compared to large capital projects make the RTOC an ideal candidate for further investigation.

  6. Integrating machine learning to achieve an automatic parameter prediction for practical continuous-variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Liu, Weiqi; Huang, Peng; Peng, Jinye; Fan, Jianping; Zeng, Guihua

    2018-02-01

    For supporting practical quantum key distribution (QKD), it is critical to stabilize the physical parameters of signals, e.g., the intensity, phase, and polarization of the laser signals, so that such QKD systems can achieve better performance and practical security. In this paper, an approach is developed by integrating a support vector regression (SVR) model to optimize the performance and practical security of the QKD system. First, an SVR model is learned to precisely predict the time-along evolutions of the physical parameters of signals. Second, these predicted evolutions are employed as feedback to control the QKD system for achieving optimal performance and practical security. Finally, our proposed approach is exemplified using the intensity evolution of laser light and a local oscillator pulse in the Gaussian-modulated coherent state QKD system. Our experimental results demonstrate three significant benefits of the SVR-based approach: (1) it allows the QKD system to achieve optimal performance and practical security; (2) it does not require any additional resources or a real-time monitoring module to support automatic prediction of the evolutions of the physical parameters of signals; and (3) it is applicable to any measurable physical parameter of signals in a practical QKD system.
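As an illustration of the prediction-as-feedback idea, the sketch below fits a kernel ridge regressor (a closed-form stand-in for the paper's SVR) to a hypothetical intensity drift and predicts it at unobserved times; the signal shape, noise level, and kernel parameters are all invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in for the monitored signal: slow drift of the
# local-oscillator pulse intensity with small measurement noise.
t = np.linspace(0, 10, 200)
intensity = 1.0 + 0.05 * np.sin(0.8 * t) + 0.01 * t + rng.normal(0, 0.005, t.size)

def rbf(a, b, gamma=0.5):
    # Gaussian kernel between two sets of 1-D time points.
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

# Fit on 150 randomly chosen samples, predict the remaining 50.
idx = rng.permutation(t.size)
tr, te = idx[:150], idx[150:]
K = rbf(t[tr], t[tr])
alpha = np.linalg.solve(K + 1e-3 * np.eye(tr.size), intensity[tr])

# Predicted evolution at unobserved times; in the QKD system this
# prediction would drive feedback control of the source set-point.
y_pred = rbf(t[te], t[tr]) @ alpha
rmse = np.sqrt(np.mean((y_pred - intensity[te]) ** 2))
```

Kernel ridge regression is used only because it has a two-line closed-form solution; the paper's SVR plays the same role of learning the parameter evolution from monitored samples.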

  7. Design of a robust baseband LPC coder for speech transmission over 9.6 kbit/s noisy channels

    NASA Astrophysics Data System (ADS)

    Viswanathan, V. R.; Russell, W. H.; Higgins, A. L.

    1982-04-01

    This paper describes the design of a baseband Linear Predictive Coder (LPC) which transmits speech over 9.6 kbit/s synchronous channels with random bit errors of up to 1%. We present the results of our investigation of a number of aspects of the baseband LPC coder, with the goal of maximizing the quality of the transmitted speech. Important among these aspects are: the bandwidth of the baseband, coding of the baseband residual, high-frequency regeneration, and error protection of important transmission parameters. The paper discusses these and other issues, presents the results of speech-quality tests conducted during the various stages of optimization, and describes the details of the optimized speech coder. The optimized speech coding algorithm has been implemented as a real-time full-duplex system on an array processor. Informal listening tests of the real-time coder have shown that it produces good speech quality in the absence of channel bit errors and introduces only a slight degradation in quality for channel bit error rates of up to 1%.

  8. A Decision Support System For The Real-Time Allocation Of The Water Resource Of The Tarim River Basin, China

    NASA Astrophysics Data System (ADS)

    Wei, J.; Wang, G.; Liu, R.

    2008-12-01

    The Tarim River is the longest inland river in China. Due to water scarcity, ecological fragility has become a significant constraint on sustainable development in this region. To effectively manage the limited water resources for both ecological purposes and conventional water utilization, a real-time water resources allocation Decision Support System (DSS) has been developed. Based on the workflows of water resources regulation and a comprehensive analysis of the efficiency and feasibility of water management strategies, the DSS includes information systems that perform data acquisition, management and visualization, and model systems that perform hydrological forecasting, water demand prediction, flow routing simulation and water resources optimization of the hydrological and water utilization process. An optimization and process control strategy is employed to dynamically allocate the water resources among the different stakeholders. Competing targets and constraints are taken into consideration through multi-objective optimization with different priorities. The DSS has been successfully utilized to support the water resources management of the Tarim River Basin since 2005.

  9. An Adaptive Cross-Architecture Combination Method for Graph Traversal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, Yang; Song, Shuaiwen; Kerbyson, Darren J.

    2014-06-18

    Breadth-First Search (BFS) is widely used in many real-world applications including computational biology, social networks, and electronic design automation. The combination method, using both top-down and bottom-up techniques, is the most effective BFS approach. However, current combination methods rely on trial-and-error and exhaustive search to locate the optimal switching point, which may cause significant runtime overhead. To solve this problem, we design an adaptive method based on regression analysis to predict an optimal switching point for the combination method at runtime within less than 0.1% of the BFS execution time.
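The top-down/bottom-up combination can be sketched as follows; note that the fixed frontier-size threshold here stands in for the paper's regression-based runtime prediction of the optimal switching point:

```python
# A sketch of the top-down / bottom-up "combination" BFS. The fixed
# frontier-size threshold (switch_frac) stands in for the regression
# model that the paper uses to predict the switching point at runtime.
def hybrid_bfs(adj, source, switch_frac=0.05):
    n = len(adj)
    dist = {source: 0}
    frontier = {source}
    level = 0
    while frontier:
        level += 1
        if len(frontier) < switch_frac * n:
            # Top-down: expand edges out of the (small) frontier.
            nxt = {v for u in frontier for v in adj[u] if v not in dist}
        else:
            # Bottom-up: every unvisited vertex scans for a frontier
            # parent, which is cheaper when the frontier is large.
            nxt = {v for v in range(n)
                   if v not in dist and any(u in frontier for u in adj[v])}
        for v in nxt:
            dist[v] = level
        frontier = nxt
    return dist
```

For example, `hybrid_bfs([[1], [0, 2], [1, 3], [2]], 0)` returns the BFS level of every vertex of a 4-vertex path graph.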

  10. Optimization of multi-environment trials for genomic selection based on crop models.

    PubMed

    Rincent, R; Kuhn, E; Monod, H; Oury, F-X; Rousset, M; Allard, V; Le Gouis, J

    2017-08-01

    We propose a statistical criterion to optimize multi-environment trials so as to predict genotype × environment interactions more efficiently, by combining crop growth models and genomic selection models. Genotype × environment interactions (GEI) are common in plant multi-environment trials (METs). In this context, models developed for genomic selection (GS), which refers to the use of genome-wide information for predicting the breeding values of selection candidates, need to be adapted. One promising way to increase prediction accuracy in various environments is to combine ecophysiological and genetic modelling through crop growth models (CGM) incorporating genetic parameters. The efficiency of this approach relies on the quality of the parameter estimates, which depends on the environments composing the MET used for calibration. The objective of this study was to determine a method to optimize the set of environments composing the MET for estimating genetic parameters in this context. A criterion called OptiMET was defined for this purpose and evaluated on simulated and real data, using wheat phenology as an example. The MET defined with OptiMET allowed the genetic parameters to be estimated with lower error, leading to higher QTL detection power and higher prediction accuracies. In terms of the quality of the parameter estimates, a MET defined with OptiMET was on average more efficient than a random MET composed of twice as many environments. OptiMET is thus a valuable tool for determining optimal experimental conditions to best exploit METs and the phenotyping tools that are currently being developed.

  11. Optimizing the Anti-VEGF Treatment Strategy for Neovascular Age-Related Macular Degeneration: From Clinical Trials to Real-Life Requirements.

    PubMed

    Mantel, Irmela

    2015-06-01

    This Perspective discusses the pertinence of variable dosing regimens with anti-vascular endothelial growth factor (VEGF) agents for neovascular age-related macular degeneration (nAMD) with regard to real-life requirements. After the initial pivotal trials of anti-VEGF therapy, the variable dosing regimens pro re nata (PRN), Treat-and-Extend, and Observe-and-Plan, a recently introduced regimen, aimed to optimize the anti-VEGF treatment strategy for nAMD. The PRN regimen showed good visual results but requires monthly monitoring visits and can therefore be difficult to implement. Moreover, application of the PRN regimen revealed inferior results in real-life circumstances due to problems with resource allocation. The Treat-and-Extend regimen uses an interval-based approach and has become widely accepted for its ease of preplanning and the reduced number of office visits required. The parallel development of the Observe-and-Plan regimen demonstrated that the future need for retreatment (the interval) could be reliably predicted. Studies investigating the Observe-and-Plan regimen also showed that it could be used in individualized fixed treatment plans, allowing for a dramatically reduced clinical burden and good outcomes, thus meeting real-life requirements. This progressive development of variable dosing regimens is a response to the real-life circumstances of limited human, technical, and financial resources. It includes an individualized treatment approach, optimization of the number of retreatments, a minimal number of monitoring visits, and ease of planning ahead. The Observe-and-Plan regimen achieves this goal with good functional results. Translational Relevance: This Perspective reviews the process from the pivotal clinical trials to the development of treatment regimens adjusted to real-life requirements.
The article discusses this translational process, which, although not translation in the classical sense from fundamental to clinical research but rather a process subsequent to the pivotal clinical trials, represents an important translational step from the clinical proof of efficacy to optimization in terms of patients' and clinics' needs. The related scientific procedure includes exploration of the concept, evaluation of safety, and finally proof of efficacy.

  12. Characterizing the utility of the TMPA real-time product for hydrologic predictions over global river basins across scales

    NASA Astrophysics Data System (ADS)

    Gao, H.; Zhang, S.; Nijssen, B.; Zhou, T.; Voisin, N.; Sheffield, J.; Lee, K.; Shukla, S.; Lettenmaier, D. P.

    2017-12-01

    Despite its errors and uncertainties, the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis real-time product (TMPA-RT) has been widely used for hydrological monitoring and forecasting due to its timely availability for real-time applications. To evaluate the utility of TMPA-RT in hydrologic predictions, many studies have compared modeled streamflows driven by TMPA-RT against gauge data. However, because of the limited availability of streamflow observations in data-sparse regions, there is still a lack of comprehensive comparisons of TMPA-RT based hydrologic predictions at the global scale. Furthermore, its skill is expected to be lower at the subbasin scale than at the basin scale. In this study, we evaluate and characterize the utility of the TMPA-RT product over selected global river basins during the period 1998 to 2015, using the TMPA research product (TMPA-RP) as a reference. The Variable Infiltration Capacity (VIC) model, which was calibrated and validated previously, is adopted to simulate streamflows driven by TMPA-RT and TMPA-RP, respectively. The objective of this study is to analyze the spatial and temporal characteristics of the hydrologic predictions by answering the following questions: (1) How do the precipitation errors associated with the TMPA-RT product transform into streamflow errors with respect to geographical and climatological characteristics? (2) How do streamflow errors vary across scales within a basin?

  13. A prospective development study of software-guided radio-frequency ablation of primary and secondary liver tumors: Clinical intervention modelling, planning and proof for ablation cancer treatment (ClinicIMPPACT).

    PubMed

    Reinhardt, Martin; Brandmaier, Philipp; Seider, Daniel; Kolesnik, Marina; Jenniskens, Sjoerd; Sequeiros, Roberto Blanco; Eibisberger, Martin; Voglreiter, Philip; Flanagan, Ronan; Mariappan, Panchatcharam; Busse, Harald; Moche, Michael

    2017-12-01

    Radio-frequency ablation (RFA) is a promising minimally invasive treatment option for early liver cancer; however, monitoring or predicting the size of the resulting tissue necrosis during the RFA procedure is a challenging task, potentially resulting in a significant rate of under- or overtreatment. Currently there is no reliable lesion-size prediction method commercially available. ClinicIMPPACT is designed as a multicenter, prospective, non-randomized clinical trial to evaluate the accuracy and efficiency of innovative planning and simulation software. Sixty patients with early liver cancer will be included at four European clinical institutions and treated with the same RFA system. The preinterventional imaging datasets will be used for computational planning of the RFA treatment. All ablations will be simulated in parallel with the actual RFA procedure, using the software environment developed in this project. The primary outcome measure is the comparison of the simulated ablation zones with the true lesions shown in follow-up imaging after one month, to assess the accuracy of the lesion prediction. This unique multicenter clinical trial aims at the clinical integration of a dedicated software solution to accurately predict lesion size and shape after radiofrequency ablation of liver tumors. Accelerated and optimized workflow integration, real-time intraoperative image processing, and the inclusion of patient-specific information, e.g. organ perfusion and registration of the real RFA needle position, may make the introduced software a powerful tool for interventional radiologists to optimize patient outcomes.

  14. Real-Time Monitoring of Results During First Year of Dutch Colorectal Cancer Screening Program and Optimization by Altering Fecal Immunochemical Test Cut-Off Levels.

    PubMed

    Toes-Zoutendijk, Esther; van Leerdam, Monique E; Dekker, Evelien; van Hees, Frank; Penning, Corine; Nagtegaal, Iris; van der Meulen, Miriam P; van Vuuren, Anneke J; Kuipers, Ernst J; Bonfrer, Johannes M G; Biermann, Katharina; Thomeer, Maarten G J; van Veldhuizen, Harriët; Kroep, Sonja; van Ballegooijen, Marjolein; Meijer, Gerrit A; de Koning, Harry J; Spaander, Manon C W; Lansdorp-Vogelaar, Iris

    2017-03-01

    After careful pilot studies and planning, the national screening program for colorectal cancer (CRC), with biennial fecal immunochemical tests (FITs), was initiated in The Netherlands in 2014. A national information system for real-time monitoring was developed to allow for timely evaluation. Data were collected from the first year of this screening program to determine the importance of planning and monitoring for optimal screening program performance. The national information system of the CRC screening program kept track of the number of invitations sent in 2014, FIT kits returned, and colonoscopies performed. Age-adjusted rates of participation, the number of positive test results, and positive predictive values (PPVs) for advanced neoplasia were determined weekly, quarterly, and yearly. In 2014, 741,914 persons were invited for FIT screening; of these, 529,056 (71.3%; 95% CI, 71.2%-71.4%) participated. A few months into the program, real-time monitoring showed that rates of participation and positive test results (10.6%; 95% CI, 10.5%-10.8%) were higher than predicted and the PPV was lower (42.1%; 95% CI, 41.3%-42.9%) than predicted based on pilot studies. To reduce the number of unnecessary colonoscopies and ease the burden on colonoscopy capacity, the cut-off level for a positive FIT result was increased from 15 to 47 μg Hb/g feces halfway through 2014. This adjustment decreased the percentage of positive test results to 6.7% (95% CI, 6.6%-6.8%) and increased the PPV to 49.1% (95% CI, 48.3%-49.9%). In total, the first year of the Dutch screening program resulted in the detection of 2483 cancers and 12,030 advanced adenomas. Close monitoring of the implementation of the Dutch national CRC screening program allowed for instant adjustment of the FIT cut-off levels to optimize program performance. Copyright © 2017 AGA Institute. Published by Elsevier Inc. All rights reserved.

  15. Shear wave prediction using committee fuzzy model constrained by lithofacies, Zagros basin, SW Iran

    NASA Astrophysics Data System (ADS)

    Shiroodi, Sadjad Kazem; Ghafoori, Mohammad; Ansari, Hamid Reza; Lashkaripour, Golamreza; Ghanadian, Mostafa

    2017-02-01

    The main purpose of this study is to introduce geological controlling factors to improve an intelligence-based model that estimates shear wave velocity from seismic attributes. The proposed method includes three main steps, framed by the geological events of a complex sedimentary succession located in the Persian Gulf. First, the best attributes were selected from the extracted seismic data. Second, these attributes were transformed into shear wave velocity using fuzzy inference systems (FIS) such as Sugeno's fuzzy inference (SFIS), adaptive neuro-fuzzy inference (ANFIS) and optimized fuzzy inference (OFIS). Finally, a committee fuzzy machine (CFM) based on bat-inspired algorithm (BA) optimization was applied to combine the previous predictions into an enhanced solution. To show the effect of geology on improving the prediction, the main classes of predominant lithofacies in the reservoir of interest, namely shale, sand, and carbonate, were selected, and the proposed algorithm was run with and without the lithofacies constraint. The results showed better agreement between real and predicted shear wave velocity in the lithofacies-constrained model than in the model without lithofacies, especially in sand and carbonate.

  16. Information-theoretic model selection for optimal prediction of stochastic dynamical systems from data

    NASA Astrophysics Data System (ADS)

    Darmon, David

    2018-03-01

    In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.
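A toy version of order selection by negative log-predictive likelihood: for each candidate embedding dimension, a linear-Gaussian one-step predictor (a stand-in for the paper's nonparametric estimator) is fit on the first half of a synthetic AR(2) series and scored on the second half:

```python
import numpy as np

rng = np.random.default_rng(6)

# A stochastic AR(2) system as the illustrative data-generating process.
T = 600
x = np.zeros(T)
for t in range(2, T):
    x[t] = 1.2 * x[t - 1] - 0.5 * x[t - 2] + rng.normal(0, 0.5)

def nll(p, split=300):
    """Held-out negative log-predictive likelihood of a linear-Gaussian
    predictor that uses the last p values as its embedding."""
    X = np.column_stack([x[p - k - 1 : T - k - 1] for k in range(p)])
    y = x[p:]
    cut = split - p
    beta, *_ = np.linalg.lstsq(X[:cut], y[:cut], rcond=None)
    s2 = (y[:cut] - X[:cut] @ beta).var()       # predictive variance
    err = y[cut:] - X[cut:] @ beta              # held-out errors
    return 0.5 * np.mean(np.log(2 * np.pi * s2) + err ** 2 / s2)

scores = {p: nll(p) for p in range(1, 6)}
best_p = min(scores, key=scores.get)            # selected embedding dimension
```

The selected dimension is the one whose held-out predictive distribution scores best; for the AR(2) process above, dimensions below 2 are clearly penalized.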

  17. Control strategy optimization of HVAC plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Facci, Andrea Luigi; Zanfardino, Antonella; Martini, Fabrizio

    In this paper we present a methodology to optimize the operating conditions of heating, ventilation and air conditioning (HVAC) plants to achieve higher energy efficiency in use. Semi-empirical numerical models of the plant components are used to predict their performance as a function of their set-points and the environmental and occupied-space conditions. The optimization is performed through a graph-based algorithm that finds the set-points of the system components that minimize energy consumption and/or energy costs while matching the user energy demands. The resulting model can be used with systems of almost any complexity, featuring both HVAC components and energy systems, and is sufficiently fast to be applicable in real-time settings.
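The set-point search can be illustrated with a brute-force variant; the two components and their (output, power) curves below are invented, and the paper's graph-based algorithm replaces this exhaustive enumeration for larger systems:

```python
import itertools

# Invented semi-empirical maps: each component has a discrete set-point
# grid given as (kW of cooling delivered, kW of electricity consumed).
CHILLER = [(0.0, 0.0), (50.0, 12.0), (100.0, 28.0)]
AHU     = [(0.0, 0.0), (20.0, 3.0),  (40.0, 7.5)]

def optimal_setpoints(demand_kw, price=0.2):
    """Cheapest set-point combination meeting the demand, or None."""
    best = None
    for (q1, p1), (q2, p2) in itertools.product(CHILLER, AHU):
        if q1 + q2 >= demand_kw:            # demand constraint
            cost = price * (p1 + p2)        # energy-cost objective
            if best is None or cost < best[0]:
                best = (cost, (q1, q2))
    return best
```

For a 60 kW demand this picks the mid set-points of both components rather than running the chiller alone at full load, which is the kind of trade-off the graph search exploits at scale.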

  18. Predicting the amount of coke deposition on catalyst pellets through image analysis and soft computing

    NASA Astrophysics Data System (ADS)

    Zhang, Jingqiong; Zhang, Wenbiao; He, Yuting; Yan, Yong

    2016-11-01

    The amount of coke deposition on catalyst pellets is one of the most important indexes of catalytic property and service life. As a result, it is essential to measure this quantity and analyze the active state of the catalysts during a continuous production process. This paper proposes a new method to predict the amount of coke deposition on catalyst pellets based on image analysis and soft computing. An image acquisition system consisting of a flatbed scanner and an opaque cover is used to obtain catalyst images. After image processing and feature extraction, twelve effective features are selected and the two best feature sets are determined by prediction tests. A neural network optimized by a particle swarm optimization algorithm is used to establish the prediction model of the coke amount based on various datasets. The root mean square errors of the predicted values are all below 0.021, and the coefficients of determination (R²) of the models are all above 78.71%. A feasible, effective and precise method is therefore demonstrated, which may be applied to realize real-time measurement of coke deposition based on on-line sampling and fast image analysis.
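A compact sketch of the PSO-trained network, with synthetic features and coke amounts standing in for the scanned-image data; the network size and swarm hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins: 3 image features per pellet, a smooth nonlinear
# mapping to the coke amount (invented).
X = rng.uniform(-1, 1, size=(80, 3))
y = np.tanh(X @ np.array([0.7, -0.4, 0.2]))

H = 4                       # hidden units
DIM = 3 * H + H             # flattened weights: input->hidden, hidden->out

def loss(flat):
    W1 = flat[: 3 * H].reshape(3, H)
    w2 = flat[3 * H :]
    pred = np.tanh(X @ W1) @ w2
    return np.mean((pred - y) ** 2)

# Plain global-best particle swarm optimization over the weight vector.
P = 30
pos = rng.normal(0, 0.5, size=(P, DIM))
vel = np.zeros((P, DIM))
pbest, pbest_f = pos.copy(), np.array([loss(p) for p in pos])
g = pbest[pbest_f.argmin()].copy()
initial = pbest_f.min()
for _ in range(200):
    r1, r2 = rng.random((P, DIM)), rng.random((P, DIM))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = pos + vel
    f = np.array([loss(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    g = pbest[pbest_f.argmin()].copy()
final = pbest_f.min()
```

PSO sidesteps gradient computation entirely, which is why it pairs naturally with small networks whose loss surface is cheap to evaluate.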

  19. Temperature - Emissivity Separation Assessment in a Sub-Urban Scenario

    NASA Astrophysics Data System (ADS)

    Moscadelli, M.; Diani, M.; Corsini, G.

    2017-10-01

    In this paper, a methodology is presented that aims at evaluating the effectiveness of different temperature-emissivity separation (TES) strategies. The methodology takes into account the specific material of interest in the monitored scenario, the sensor characteristics, and errors in the atmospheric compensation step. It is proposed as a way to predict and analyse algorithm performance during the planning of a remote sensing mission aimed at discovering specific materials of interest in the monitored scenario. As a case study, the proposed methodology is applied to a real airborne data set of a suburban scenario. To address the TES problem, three state-of-the-art algorithms and a recently proposed one are investigated: the Temperature-Emissivity Separation '98 (TES-98) algorithm, the Stepwise Refining TES (SRTES) algorithm, the Linear Piecewise TES (LTES) algorithm, and the Optimized Smoothing TES (OSTES) algorithm. Finally, the accuracies obtained with real data and those predicted by means of the proposed methodology are compared and discussed.

  20. Adaptive DFT-Based Fringe Tracking and Prediction at IOTA

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.

    2004-01-01

    An automatic fringe tracking system has been developed and implemented at the Infrared Optical Telescope Array (IOTA). In testing during May 2002, the system successfully minimized the optical path differences (OPDs) for all three baselines at IOTA. Based on sliding-window discrete Fourier transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on off-line data. Implemented in ANSI C on a 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. Preliminary analysis of an extension of this algorithm indicates a potential for predictive tracking, although at present, real-time implementation of this extension would require significantly more computational capacity.
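The sliding-window DFT at the heart of such a tracker can be sketched as follows; when the window advances by one sample, each bin is updated in O(1) instead of recomputing the full transform. The window length and signal are illustrative, not IOTA parameters:

```python
import numpy as np

N = 64                                     # window length (illustrative)
x = np.random.default_rng(4).normal(size=500)

def sliding_dft(x, N):
    """Yield the N-point DFT of each length-N window of x."""
    twiddle = np.exp(2j * np.pi * np.arange(N) / N)
    X = np.fft.fft(x[:N])
    yield X.copy()
    for n in range(N, len(x)):
        # Remove the outgoing sample, add the incoming one, then rotate
        # every bin by one sample's worth of phase.
        X = (X - x[n - N] + x[n]) * twiddle
        yield X.copy()

spectra = list(sliding_dft(x, N))
```

A fringe tracker would read off the peak bin's phase at each update to estimate the optical path difference, at a per-sample cost of O(N) complex multiplies rather than the O(N log N) of a full FFT.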

  1. Prediction of betavoltaic battery output parameters based on SEM measurements and Monte Carlo simulation.

    PubMed

    Yakimov, Eugene B

    2016-06-01

    An approach for predicting the output parameters of a (63)Ni-based betavoltaic battery is described. It consists of a multilayer Monte Carlo simulation to obtain the depth dependence of the excess carrier generation rate inside the semiconductor converter, a determination of the collection probability based on electron beam induced current measurements, a calculation of the current induced in the semiconductor converter by beta-radiation, and SEM measurements of output parameters using the calculated induced current value. This approach makes it possible to predict the betavoltaic battery parameters and optimize the converter design for any real semiconductor structure and any thickness and specific activity of the beta-radiation source. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Prediction in a visual language: real-time sentence processing in American Sign Language across development.

    PubMed

    Lieberman, Amy M; Borovsky, Arielle; Mayberry, Rachel I

    2018-01-01

    Prediction during sign language comprehension may enable signers to integrate linguistic and non-linguistic information within the visual modality. In two eyetracking experiments, we investigated American Sign Language (ASL) semantic prediction in deaf adults and children (aged 4-8 years). Participants viewed ASL sentences in a visual world paradigm in which the sentence-initial verb was either neutral or constrained relative to the sentence-final target noun. Adults and children made anticipatory looks to the target picture before the onset of the target noun in the constrained condition only, showing evidence of semantic prediction. Crucially, signers alternated gaze between the stimulus sign and the target picture only when the sentential object could be predicted from the verb. Signers therefore engage in prediction by optimizing visual attention between divided linguistic and referential signals. These patterns suggest that prediction is a modality-independent process; theoretical implications are discussed.

  3. Dynamic modeling of green algae cultivation in a photobioreactor for sustainable biodiesel production.

    PubMed

    Del Rio-Chanona, Ehecatl A; Liu, Jiao; Wagner, Jonathan L; Zhang, Dongda; Meng, Yingying; Xue, Song; Shah, Nilay

    2018-02-01

    Biodiesel produced from microalgae has been extensively studied due to its potentially outstanding advantages over traditional transportation fuels. In order to facilitate its industrialization and improve process profitability, it is vital to construct highly accurate models capable of predicting the complex behavior of the investigated biosystem for process optimization and control, which forms the current research goal. Three original contributions are described in this paper. Firstly, a dynamic model is constructed to simulate the complicated effects of light intensity, nutrient supply and light attenuation on both biomass growth and biolipid production. Secondly, chlorophyll fluorescence, an instantly measurable variable and indicator of photosynthetic activity, is embedded into the model to monitor and update model accuracy, particularly for the purpose of future optimal process control, and its correlation with intracellular nitrogen content is quantified, which to the best of our knowledge has never been addressed before. Thirdly, a thorough experimental verification is conducted under different scenarios, including both continuous illumination and light/dark cycle conditions, to verify the model's predictive capability particularly for long-term operation; it is concluded that the current model is characterized by a high level of predictive capability. Based on the model, the optimal light intensity for algal biomass growth and lipid synthesis is estimated. This work therefore paves the way for future process design and real-time optimization. © 2017 Wiley Periodicals, Inc.

  4. Improvement of Storm Forecasts Using Gridded Bayesian Linear Regression for Northeast United States

    NASA Astrophysics Data System (ADS)

    Yang, J.; Astitha, M.; Schwartz, C. S.

    2017-12-01

    Bayesian linear regression (BLR) is a post-processing technique in which regression coefficients are derived and used to correct raw forecasts based on pairs of observation-model values. This study presents the development and application of a gridded Bayesian linear regression (GBLR) as a new post-processing technique to improve numerical weather prediction (NWP) of rain and wind storm forecasts over the northeast United States. Ten controlled variables produced from ten ensemble members of the National Center for Atmospheric Research (NCAR) real-time prediction system are used for a GBLR model. In the GBLR framework, leave-one-storm-out cross-validation is utilized to study the performance of the post-processing technique on a database of 92 storms. To estimate the regression coefficients of the GBLR, optimization procedures that minimize the systematic and random errors of predicted atmospheric variables (wind speed, precipitation, etc.) are implemented for the modeled-observed pairs of training storms. The regression coefficients calculated for meteorological stations of the National Weather Service are interpolated back to the model domain. An analysis of forecast improvements based on error reductions during the storms demonstrates the value of the GBLR approach. This presentation will also illustrate how the variances are optimized for the training partition in GBLR and discuss the verification strategy for grid points where no observations are available. The new post-processing technique is successful in improving wind speed and precipitation storm forecasts using past event-based data and has the potential to be implemented in real time.
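A minimal sketch of the BLR correction step at a single grid point, with invented forecast-observation pairs and prior variances; the conjugate Gaussian posterior gives the regression coefficients in closed form:

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented training data: raw NWP wind forecasts paired with observed
# wind speeds at one station (true relation 0.8x + 1.5 plus noise).
raw = rng.uniform(2, 20, size=120)
obs = 0.8 * raw + 1.5 + rng.normal(0, 1.0, 120)

X = np.column_stack([np.ones(raw.size), raw])
sigma2, tau2 = 1.0, 10.0          # assumed noise and prior variances

# Conjugate Gaussian posterior mean: (X'X/sigma2 + I/tau2)^-1 X'y/sigma2
A = X.T @ X / sigma2 + np.eye(2) / tau2
coef = np.linalg.solve(A, X.T @ obs / sigma2)

def corrected(forecast):
    """Apply the posterior-mean coefficients to a new raw forecast."""
    return coef[0] + coef[1] * forecast
```

In the GBLR scheme, coefficients like `coef` are estimated at each station and then interpolated back to the model grid so that every grid point has its own correction.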

  5. Learning to Predict Social Influence in Complex Networks

    DTIC Science & Technology

    2012-03-29

    03/2010 – 17/03/2012 Abstract: First, we addressed the problem of analyzing information diffusion process in a social network using two kinds...algorithm which avoids the inner loop optimization during the search. We tested the performance using the structures of four real world networks, and...result of information diffusion that starts from the node. 2 We use “infected” and “activated” interchangeably. Efficient Discovery of Influential

  6. The improved business valuation model for RFID company based on the community mining method.

    PubMed

    Li, Shugang; Yu, Zhaoxu

    2017-01-01

    Nowadays, the appetite for investment and mergers and acquisitions (M&A) activity in RFID companies is growing rapidly. Although a large number of papers have addressed the topic of business valuation models based on statistical or neural network methods, only a few are dedicated to constructing a general framework for business valuation that improves performance with a network graph (NG) and the corresponding community mining (CM) method. In this study, an NG-based business valuation model is proposed, in which a real options approach (ROA) integrating the CM method is designed to predict the company's net profit as well as estimate the company value. Three improvements are made in the proposed valuation model: Firstly, the model determines the credibility of each node's membership in each community and clusters the network according to an evolutionary Bayesian method. Secondly, an improved bacterial foraging optimization algorithm (IBFOA) is adopted to calculate the optimized Bayesian posterior probability function. Finally, in the IBFOA, a bi-objective method is used to assess the accuracy of prediction, and these two objectives are combined into one objective function using a new Pareto boundary method. The proposed method returns lower forecasting error than 10 well-known forecasting models on 3 different time-interval valuing tasks for the real-life simulation of RFID companies.

  7. The improved business valuation model for RFID company based on the community mining method

    PubMed Central

    Li, Shugang; Yu, Zhaoxu

    2017-01-01

    Nowadays, the appetite for investment and mergers and acquisitions (M&A) activity in RFID companies is growing rapidly. Although a large number of papers have addressed the topic of business valuation models based on statistical or neural network methods, only a few are dedicated to constructing a general framework for business valuation that improves performance with a network graph (NG) and the corresponding community mining (CM) method. In this study, an NG-based business valuation model is proposed, in which a real options approach (ROA) integrating the CM method is designed to predict the company’s net profit as well as estimate the company value. Three improvements are made in the proposed valuation model: Firstly, the model determines the credibility of each node's membership in each community and clusters the network according to an evolutionary Bayesian method. Secondly, an improved bacterial foraging optimization algorithm (IBFOA) is adopted to calculate the optimized Bayesian posterior probability function. Finally, in the IBFOA, a bi-objective method is used to assess the accuracy of prediction, and these two objectives are combined into one objective function using a new Pareto boundary method. The proposed method returns lower forecasting error than 10 well-known forecasting models on 3 different time-interval valuing tasks for the real-life simulation of RFID companies. PMID:28459815
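The bi-objective step in this record is a generic multi-objective recipe: keep the Pareto-nondominated candidates, then pick one by scalarizing the two objectives. The sketch below shows that generic recipe only; it is not the paper's exact "Pareto boundary method" or the IBFOA internals.

```python
# Generic bi-objective selection sketch: both objectives are minimized
# (e.g. two prediction-error measures of candidate models).

def pareto_front(points):
    # points: list of (f1, f2) objective pairs
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

def pick(points, w1=0.5, w2=0.5):
    # scalarize the nondominated set with a weighted sum
    return min(pareto_front(points), key=lambda p: w1 * p[0] + w2 * p[1])
```

With equal weights, the candidate balancing both errors is selected from the front.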

  8. Intelligent Control of Micro Grid: A Big Data-Based Control Center

    NASA Astrophysics Data System (ADS)

    Liu, Lu; Wang, Yanping; Liu, Li; Wang, Zhiseng

    2018-01-01

    In this paper, a structure of a micro grid system with a big data-based control center is introduced. Energy data from distributed generation, storage and load are analyzed through the control center, and from the results new trends are predicted and applied as feedback to optimize the control. Therefore, each step in the micro grid can be adjusted and organized in a form of comprehensive management. A framework of real-time data collection, data processing and data analysis is proposed by employing big data technology. Consequently, integrated distributed generation and an optimized energy storage and transmission process can be implemented in the micro grid system.

  9. Efficient prediction designs for random fields.

    PubMed

    Müller, Werner G; Pronzato, Luc; Rendas, Joao; Waldl, Helmut

    2015-03-01

    For estimation and prediction of random fields, it is increasingly acknowledged that the kriging variance may be a poor representative of the true uncertainty. Experimental designs based on more elaborate criteria that are appropriate for empirical kriging (EK) are then often non-space-filling and very costly to determine. In this paper, we investigate the possibility of using a compound criterion inspired by an equivalence-theorem-type relation to build designs quasi-optimal for the EK variance when space-filling designs become unsuitable. Two algorithms are proposed: one relies on stochastic optimization to explicitly identify the Pareto front, whereas the second uses the surrogate criteria as a local heuristic to choose the points at which the (costly) true EK variance is effectively computed. We illustrate the performance of the presented algorithms on both a simple simulated example and a real oceanographic dataset. © 2014 The Authors. Applied Stochastic Models in Business and Industry published by John Wiley & Sons, Ltd.

  10. The alliance relationship analysis of international terrorist organizations with link prediction

    NASA Astrophysics Data System (ADS)

    Fang, Ling; Fang, Haiyang; Tian, Yanfang; Yang, Tinghong; Zhao, Jing

    2017-09-01

    Terrorism is a major public hazard for the international community. Alliances of terrorist organizations may pose a more serious threat to national security and world peace. Understanding alliances between global terrorist organizations will facilitate more effective anti-terrorism collaboration between governments. Based on publicly available data, this study constructed an alliance network between terrorist organizations and analyzed the alliance relationships with link prediction. We proposed a novel index based on an optimal weighted fusion of six similarity indices, in which the optimal weights are calculated by a genetic algorithm. Our experimental results showed that this algorithm achieves better results on these networks than other algorithms. Using this method, we successfully uncovered 21 real terrorist organization alliances from current data. Our experiments show that this approach to mining terrorist organization alliances is effective, and this study is expected to inform the formation of a more powerful anti-terrorism strategy.
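The weighted-fusion idea can be sketched with two classic link-prediction indices in place of the paper's six, and a coarse grid search in place of the genetic algorithm. The graph and held-out pairs below are invented.

```python
# Hedged sketch of weighted fusion of similarity indices for link
# prediction: combine common-neighbours and Jaccard scores with a
# weight w, and pick the w that best separates a held-out true link
# from a non-link.

def neighbours(edges):
    nb = {}
    for u, v in edges:
        nb.setdefault(u, set()).add(v)
        nb.setdefault(v, set()).add(u)
    return nb

def fused_score(nb, u, v, w):
    cn = len(nb[u] & nb[v])                    # common neighbours
    union = len(nb[u] | nb[v])
    jac = cn / union if union else 0.0         # Jaccard index
    return w * cn + (1.0 - w) * jac

def best_weight(edges, positive, negative):
    # grid search over w in {0.0, 0.1, ..., 1.0} maximizing the margin
    # by which the true link outscores the non-link
    nb = neighbours(edges)
    def margin(w):
        return fused_score(nb, *positive, w) - fused_score(nb, *negative, w)
    return max((w / 10.0 for w in range(11)), key=margin)
```

A genetic algorithm would replace the grid search when fusing many indices over many candidate pairs.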

  11. Application of genetic algorithm to land use optimization for non-point source pollution control based on CLUE-S and SWAT

    NASA Astrophysics Data System (ADS)

    Wang, Qingrui; Liu, Ruimin; Men, Cong; Guo, Lijia

    2018-05-01

    The genetic algorithm (GA) was combined with the Conversion of Land Use and its Effects at Small regional extent (CLUE-S) model to obtain an optimized land use pattern for controlling non-point source (NPS) pollution, and the performance of the combination was evaluated. The effect of the optimized land use pattern on NPS pollution control was estimated by the Soil and Water Assessment Tool (SWAT) model, and an assistant map was drawn to support future land use planning. The Xiangxi River watershed was selected as the study area. Two scenarios were used to simulate land use change. Under the historical trend scenario (Markov chain prediction), the forest area decreased by 2035.06 ha and was mainly converted into paddy and dryland area. In contrast, under the optimized scenario (GA prediction), up to 3370 ha of dryland area was converted into forest area. Spatially, the conversion of paddy and dryland into forest occurred mainly in the northwest and southeast of the watershed, where slope land occupied a large proportion. The organic and inorganic phosphorus loads decreased by 3.6% and 3.7%, respectively, in the optimized scenario compared to those in the historical trend scenario. GA showed better performance in optimized land use prediction. A comparison of the land use patterns in 2010 under the real situation and in 2020 under the optimized situation showed that Shennongjia and Shuiyuesi should convert 1201.76 ha and 1115.33 ha of dryland into forest areas, respectively, which represented the greatest changes of all regions in the watershed. The results of this study indicated that GA and the CLUE-S model can be used to optimize future land use patterns and that SWAT can be used to evaluate the effect of land use optimization on non-point source pollution control. These methods may provide support for the land use planning of an area.
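The historical-trend (Markov chain) projection used as the baseline scenario amounts to repeatedly multiplying current land-use shares by a transition matrix. The classes and probabilities below are invented for illustration, not taken from the Xiangxi watershed data.

```python
# Minimal Markov-chain land-use projection sketch.

def project(shares, transition, steps):
    # shares: {class: fraction}; transition: {src: {dst: P(src -> dst)}}
    classes = list(shares)
    for _ in range(steps):
        nxt = {c: 0.0 for c in classes}
        for src in classes:
            for dst in classes:
                nxt[dst] += shares[src] * transition[src][dst]
        shares = nxt
    return shares

transition = {   # each row sums to 1 (illustrative annual probabilities)
    "forest":  {"forest": 0.96, "dryland": 0.03, "paddy": 0.01},
    "dryland": {"forest": 0.02, "dryland": 0.95, "paddy": 0.03},
    "paddy":   {"forest": 0.01, "dryland": 0.04, "paddy": 0.95},
}
```

With these rates, forest share declines over time, mirroring the historical-trend scenario in the record; the GA scenario would instead search over the allocation directly.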

  12. [On-line monitoring of biomass in 1,3-propanediol fermentation by Fourier-transformed near-infrared spectra analysis].

    PubMed

    Wang, Lu; Liu, Tao; Chen, Yang; Sun, Yaqin; Xiu, Zhilong

    2017-01-25

    Biomass is an important parameter reflecting fermentation dynamics. Real-time monitoring of biomass can be used to control and optimize a fermentation process. To overcome the deficiencies of measurement delay and manual errors in offline measurement, we designed an experimental platform for online monitoring of biomass during a 1,3-propanediol fermentation process, based on Fourier-transformed near-infrared (FT-NIR) spectral analysis. By pre-processing the real-time sampled spectra and analyzing the sensitive spectral bands, a partial least-squares algorithm was proposed to establish a dynamic prediction model for the biomass change during a 1,3-propanediol fermentation process. Fermentation processes with substrate glycerol concentrations of 60 g/L and 40 g/L were used as the external validation experiments. The root mean square error of prediction (RMSEP) obtained by analyzing the experimental data was 0.3416 and 0.2743, respectively. These results showed that the established model gave good predictions and could be effectively used for on-line monitoring of biomass during a 1,3-propanediol fermentation process.
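The calibration idea can be sketched with a one-latent-variable partial least-squares (PLS1/NIPALS) fit in plain Python. Real NIR models use many components and spectral pre-processing, all omitted here; the RMSEP function matches the error measure quoted in the record.

```python
# One-component PLS regression sketch (NIPALS step), plain Python.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def pls1_fit(X, y):
    n, p = len(X), len(X[0])
    mx = [sum(row[j] for row in X) / n for j in range(p)]   # column means
    my = sum(y) / n
    Xc = [[row[j] - mx[j] for j in range(p)] for row in X]  # centered X
    yc = [v - my for v in y]
    w = [dot([row[j] for row in Xc], yc) for j in range(p)] # X^T y direction
    norm = dot(w, w) ** 0.5
    w = [v / norm for v in w]
    t = [dot(row, w) for row in Xc]                         # scores
    b = dot(yc, t) / dot(t, t)                              # inner regression
    return mx, my, w, b

def pls1_predict(model, x):
    mx, my, w, b = model
    t = dot([x[j] - mx[j] for j in range(len(x))], w)
    return my + b * t

def rmsep(model, X, y):
    errs = [(pls1_predict(model, x) - v) ** 2 for x, v in zip(X, y)]
    return (sum(errs) / len(errs)) ** 0.5
```

On rank-one synthetic "spectra" the single component recovers the concentration exactly, which is the degenerate case; real spectra need several components.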

  13. Predicting the continuum between corridors and barriers to animal movements using Step Selection Functions and Randomized Shortest Paths.

    PubMed

    Panzacchi, Manuela; Van Moorter, Bram; Strand, Olav; Saerens, Marco; Kivimäki, Ilkka; St Clair, Colleen C; Herfindal, Ivar; Boitani, Luigi

    2016-01-01

    The loss, fragmentation and degradation of habitat everywhere on Earth prompts increasing attention to identifying landscape features that support animal movement (corridors) or impede it (barriers). Most algorithms used to predict corridors assume that animals move through preferred habitat either optimally (e.g. least cost path) or as random walkers (e.g. current models), but neither extreme is realistic. We propose that corridors and barriers are two sides of the same coin and that animals experience landscapes as spatiotemporally dynamic corridor-barrier continua connecting (separating) functional areas where individuals fulfil specific ecological processes. Based on this conceptual framework, we propose a novel methodological approach that uses high-resolution individual-based movement data to predict corridor-barrier continua with increased realism. Our approach consists of two innovations. First, we use step selection functions (SSF) to predict friction maps quantifying corridor-barrier continua for tactical steps between consecutive locations. Secondly, we introduce to movement ecology the randomized shortest path algorithm (RSP), which operates on friction maps to predict the corridor-barrier continuum for strategic movements between functional areas. By modulating the parameter θ, which controls the trade-off between exploration and optimal exploitation of the environment, RSP bridges the gap between algorithms assuming optimal movement (as θ approaches infinity, RSP converges to the least cost path) and those assuming random walks (as θ approaches 0, RSP converges to current models). Using this approach, we identify migration corridors for GPS-monitored wild reindeer (Rangifer t. tarandus) in Norway. We demonstrate that reindeer movement is best predicted by an intermediate value of θ, indicative of a movement trade-off between optimization and exploration. Model calibration allows identification of a corridor-barrier continuum that closely fits empirical data and demonstrates that RSP outperforms models that assume either optimality or random walk. The proposed approach models the multiscale cognitive maps by which animals likely navigate real landscapes and generalizes the most common algorithms for identifying corridors. Because suboptimal, but non-random, movement strategies are likely widespread, our approach has the potential to predict more realistic corridor-barrier continua for a wide range of species. © 2015 The Authors. Journal of Animal Ecology © 2015 British Ecological Society.
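The role of θ can be illustrated with a Boltzmann distribution over candidate paths, P(path) ∝ exp(-θ · cost): θ → 0 weights all paths uniformly (random-walk-like exploration), while large θ concentrates probability on the least-cost path. The real RSP algorithm works on edge flows over friction maps rather than path enumeration, so this is only a conceptual sketch.

```python
import math

# Expected movement cost under a Boltzmann distribution over paths,
# interpolating between random exploration (theta -> 0) and the
# least-cost path (theta -> infinity).

def expected_cost(path_costs, theta):
    weights = [math.exp(-theta * c) for c in path_costs]
    z = sum(weights)
    return sum(w * c for w, c in zip(weights, path_costs)) / z
```

As θ increases, the expected cost drops monotonically from the mean path cost toward the minimum, mirroring the exploration/exploitation trade-off described in the record.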

  14. Uncertainty analysis of neural network based flood forecasting models: An ensemble based approach for constructing prediction interval

    NASA Astrophysics Data System (ADS)

    Kasiviswanathan, K.; Sudheer, K.

    2013-05-01

    Artificial neural network (ANN) based hydrologic models have gained a lot of attention among water resources engineers and scientists, owing to their potential for more accurate prediction of flood flows compared to conceptual or physics-based hydrologic models. The ANN approximates the non-linear functional relationship between the complex hydrologic variables in arriving at the river flow forecast values. Despite a large number of applications, there is still criticism that ANN point predictions lack reliability, since the uncertainty of the predictions is not quantified, and this limits their use in practical applications. A major concern in applying traditional uncertainty analysis techniques to the neural network framework is its parallel computing architecture with large degrees of freedom, which makes the uncertainty assessment a challenging task. Very limited studies have considered assessment of the predictive uncertainty of ANN-based hydrologic models. In this study, a novel method is proposed that helps construct the prediction interval of an ANN flood forecasting model during calibration itself. The method is designed to have two stages of optimization during calibration: at stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain the optimal set of weights and biases, and during stage 2, the optimal variability of the ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. During the second stage, the optimization is performed with multiple objectives: (i) minimum residual variance for the ensemble mean, (ii) maximum number of measured data points falling within the estimated prediction interval and (iii) minimum width of the prediction interval. The method is illustrated using a real-world case study of an Indian basin. The method was able to produce an ensemble that has an average prediction interval width of 23.03 m3/s, with 97.17% of the total validation data points (measured) lying within the interval. 
    The derived prediction interval for a selected hydrograph in the validation data set is presented in Fig. 1. It is noted that most of the observed flows lie within the constructed prediction interval, which therefore provides information about the uncertainty of the prediction. One specific advantage of the method is that when the ensemble mean value is considered as a forecast, the peak flows are predicted with improved accuracy by this method compared to traditional single-point-forecast ANNs. Fig. 1 Prediction interval for a selected hydrograph
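The two competing interval objectives reported in this record (coverage and width) can be scored directly. The sketch below computes the prediction interval coverage probability (PICP, the 97.17% figure) and the mean interval width (the 23.03 m3/s figure); it is the evaluation only, not the two-stage GA training.

```python
# Prediction-interval quality metrics: coverage (PICP) and mean width.

def interval_metrics(lower, upper, observed):
    inside = sum(1 for l, u, o in zip(lower, upper, observed) if l <= o <= u)
    picp = inside / len(observed)
    mean_width = sum(u - l for l, u in zip(lower, upper)) / len(lower)
    return picp, mean_width
```

A multi-objective calibration would then trade high PICP against low mean width across candidate ensembles.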

  15. A Novel Approach of Battery Energy Storage for Improving Value of Wind Power in Deregulated Markets

    NASA Astrophysics Data System (ADS)

    Nguyen, Y. Minh; Yoon, Yong Tae

    2013-06-01

    Wind power producers face many regulation costs in a deregulated environment, which remarkably lowers the value of wind power in comparison with conventional sources. One of these costs is associated with the real-time variation of power output and is paid in the frequency control market according to the variation band. In this regard, this paper presents a new approach to the scheduling and operation of battery energy storage installed in a wind generation system. This approach relies on statistical data on wind generation and the prediction of frequency control market prices to determine the optimal charging and discharging of batteries in real-time, which ultimately gives the minimum cost of frequency regulation for wind power producers. The optimization problem is formulated as the trade-off between the decrease in regulation payment and the increase in the cost of using battery energy storage. The approach is illustrated in a case study, and the simulation results show its effectiveness.
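The stated trade-off can be sketched as a tiny brute-force schedule: choose hourly battery actions that offset wind deviations, balancing the regulation charge on the residual deviation against a per-unit battery wear cost. The prices, wear cost and discrete action set are all invented; the paper's statistical forecasting of market prices is not modelled.

```python
from itertools import product

# Brute-force battery scheduling sketch: action a is energy discharged
# (+) or charged (-) each hour; residual deviation is charged at the
# regulation price, battery use at a wear cost, subject to state of
# charge staying within capacity.

def best_schedule(deviations, reg_price=10.0, wear_cost=2.0, capacity=2.0):
    actions = [-1.0, 0.0, 1.0]
    best, best_cost = None, float("inf")
    for plan in product(actions, repeat=len(deviations)):
        soc, cost, feasible = capacity / 2.0, 0.0, True
        for dev, a in zip(deviations, plan):
            soc -= a
            if not 0.0 <= soc <= capacity:
                feasible = False
                break
            residual = abs(dev - a)          # deviation left after battery
            cost += reg_price * residual + wear_cost * abs(a)
        if feasible and cost < best_cost:
            best, best_cost = plan, cost
    return best, best_cost
```

Because the regulation price exceeds the wear cost here, the optimal plan tracks the deviations exactly; a real formulation would use continuous actions and forecast prices.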

  16. Application of indoor noise prediction in the real world

    NASA Astrophysics Data System (ADS)

    Lewis, David N.

    2002-11-01

    Predicting indoor noise in industrial workrooms is an important part of the process of designing industrial plants. Predicted levels are used in the design process to determine compliance with occupational-noise regulations, and to estimate levels inside the walls in order to predict community noise radiated from the building. Once predicted levels are known, noise-control strategies can be developed. In this paper, an overview is given of over 20 years of experience with the use of various prediction approaches to manage noise in Unilever plants. This work has applied empirical and ray-tracing approaches, separately and in combination, to design various packaging and production plants and other facilities. The advantages of prediction methods in general, and of the various approaches in particular, will be discussed. A case-study application of prediction methods to the optimization of noise-control measures in a food-packaging plant will be presented. Plans to acquire a simplified prediction model for use as a company noise-screening tool will be discussed.

  17. Oil shocks in New Keynesian models: Positive and normative implications

    NASA Astrophysics Data System (ADS)

    Chang, Jian

    Chapter 1 investigates the optimal monetary policy response to oil shocks in a New Keynesian model. We find that optimal policy, in general, becomes contractionary in response to an adverse oil shock. However, the optimal policy rule and the inflation-output trade-off depend on the specific structure of the model. The benchmark economy consists of a flexible-price energy sector and a sticky-price manufacturing sector where energy is used as an intermediate input. We show that the optimal policy is to stabilize the sticky (core) price level. We then show that after incorporating a less oil-dependent sticky-price service sector, the model exhibits a trade-off in stabilizing prices and output gaps in the different sticky-price sectors. It predicts that the central bank should not try to stabilize the core price level, and the economy will experience higher inflation and rising output gaps, even if central banks respond optimally. Chapter 2 addresses the observed volatility and persistence of real exchange rates and the terms of trade. It contributes to the literature with a quantitative study on the U.S. and Canada. A two-country New Keynesian model consisting of traded, non-traded, and oil production sectors is proposed to examine the time series properties of the real exchange rate, the terms of trade and the real oil price. We find that after incorporating several realistic features (namely oil price shocks, sector-specific labor, non-traded goods, asymmetric pricing decisions of exporters and asymmetric consumer preferences over tradables), the benchmark model broadly matches the volatilities of the relative prices and some business cycle correlations. The model matches the data more closely after adding real demand shocks, suggesting their importance in explaining the relative price movements between the US and Canada. Chapter 3 explores several sources and transmission channels of international relative price movements. 
In particular, we elaborate on the role of imperfect labor mobility, pricing decisions of exporting firms, oil price shocks and asymmetric consumer preferences over tradables. Our results suggest that: Incorporating both producer currency pricing and local currency pricing assumptions produces more reasonable relative price movements. A model with imperfect labor mobility generates larger relative price volatility. Oil price shocks only contribute to terms of trade variability when oil is modeled as part of the traded basket. And asymmetric consumer preferences contribute to the volatility of the real exchange rate.

  18. Implementation of model predictive control for resistive wall mode stabilization on EXTRAP T2R

    NASA Astrophysics Data System (ADS)

    Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.

    2015-10-01

    A model predictive control (MPC) method for stabilization of the resistive wall mode (RWM) in the EXTRAP T2R reversed-field pinch is presented. The system identification technique is used to obtain a linearized empirical model of EXTRAP T2R. MPC employs the model for prediction and computes optimal control inputs that satisfy performance criterion. The use of a linearized form of the model allows for compact formulation of MPC, implemented on a millisecond timescale, that can be used for real-time control. The design allows the user to arbitrarily suppress any selected Fourier mode. The experimental results from EXTRAP T2R show that the designed and implemented MPC successfully stabilizes the RWM.
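The MPC principle in this record can be reduced to its simplest case: for a scalar unstable mode x_{k+1} = a·x_k + b·u_k, minimise the one-step predicted cost in closed form and apply the resulting input in a receding-horizon loop. EXTRAP T2R's controller uses a multi-step MIMO model identified from data; the dynamics and numbers below are a toy stand-in.

```python
# One-step model predictive control sketch for x_{k+1} = a*x_k + b*u_k.

def mpc_step(x, a=1.5, b=1.0, r=0.1):
    # argmin_u (a*x + b*u)^2 + r*u^2  =>  u = -a*b*x / (b^2 + r)
    return -a * b * x / (b * b + r)

def closed_loop(x0=1.0, steps=20, a=1.5, b=1.0):
    # receding horizon: re-solve and apply the optimal input each step
    x = x0
    for _ in range(steps):
        u = mpc_step(x, a, b)
        x = a * x + b * u
    return x
```

The open-loop mode grows as 1.5^k, while the controlled loop contracts it toward zero, which is the stabilization role MPC plays for the resistive wall mode.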

  19. Development of glucose measurement system based on pulsed laser-induced ultrasonic method

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Wan, Bin; Liu, Guodong; Xiong, Zhihua

    2016-09-01

    In this study, a glucose measurement system based on a pulsed laser-induced ultrasonic technique was established. In this system, the lateral detection mode was used, an Nd:YAG-pumped optical parametric oscillator (OPO) pulsed laser was used as the excitation source, and a high-sensitivity ultrasonic transducer was used as the signal detector to capture the photoacoustic signals of the glucose. In the experiments, the real-time photoacoustic signals of glucose aqueous solutions with different concentrations were captured by the ultrasonic transducer and a digital oscilloscope. Moreover, the photoacoustic peak-to-peak values were obtained in the wavelength range from 1300 nm to 2300 nm. The characteristic absorption wavelengths of glucose were determined via the difference spectral method and the second derivative method. In addition, prediction models for glucose concentration were established via a multivariable linear regression algorithm, and the optimal prediction model was built at the corresponding optimal wavelengths. Results showed that the glucose measurement system based on the pulsed laser-induced ultrasonic detection method was feasible. Therefore, the measurement scheme and prediction model have some potential value in non-invasive monitoring of glucose concentration, especially in the food safety and biomedical fields.

  20. A New Artificial Neural Network Enhanced by the Shuffled Complex Evolution Optimization with Principal Component Analysis (SP-UCI) for Water Resources Management

    NASA Astrophysics Data System (ADS)

    Hayatbini, N.; Faridzad, M.; Yang, T.; Akbari Asanjan, A.; Gao, X.; Sorooshian, S.

    2016-12-01

    Artificial Neural Networks (ANNs) are useful in many fields, including water resources engineering and management. However, due to the non-linear and chaotic characteristics associated with natural processes and human decision making, the use of ANNs in real-world applications is still limited, and their performance needs to be further improved for broader practical use. The commonly used Back-Propagation (BP) scheme and gradient-based optimization for training ANNs have already been found to be problematic in some cases. The BP scheme and gradient-based optimization methods carry the risk of premature convergence and of becoming stuck in local optima, and the search is highly dependent on initial conditions. Therefore, as an alternative to BP and gradient-based searching schemes, we propose an effective and efficient global searching method, termed the Shuffled Complex Evolutionary global optimization algorithm with Principal Component Analysis (SP-UCI), to train the ANN connectivity weights. A large number of real-world datasets are tested with the SP-UCI-based ANN, as well as with various popular Evolutionary Algorithm (EA)-enhanced ANNs, i.e., Particle Swarm Optimization (PSO)-, Genetic Algorithm (GA)-, Simulated Annealing (SA)-, and Differential Evolution (DE)-enhanced ANNs. Results show that the SP-UCI-enhanced ANN is generally superior to the other EA-enhanced ANNs with regard to convergence and computational performance. In addition, we carried out a case study of hydropower scheduling for Trinity Lake in the western U.S. In this case study, multiple climate indices are used as predictors for the SP-UCI-enhanced ANN. The reservoir inflows and hydropower releases are predicted up to sub-seasonal to seasonal scales. 
    Results show that the SP-UCI-enhanced ANN is able to achieve better statistics than the other EA-based ANNs, which implies the usefulness and power of the proposed SP-UCI-enhanced ANN for reservoir operation, water resources engineering and management. The SP-UCI-enhanced ANN is universally applicable to many other regression and prediction problems, and it has good potential to be an alternative to the classical BP scheme and gradient-based optimization methods.

  1. The Coastal Ocean Prediction Systems program: Understanding and managing our coastal ocean

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eden, H.F.; Mooers, C.N.K.

    1990-06-01

    The goal of COPS is to couple a program of regular observations to numerical models, through techniques of data assimilation, in order to provide a predictive capability for the US coastal ocean including the Great Lakes, estuaries, and the entire Exclusive Economic Zone (EEZ). The objectives of the program include: determining the predictability of the coastal ocean and the processes that govern the predictability; developing efficient prediction systems for the coastal ocean based on the assimilation of real-time observations into numerical models; and coupling the predictive systems for the physical behavior of the coastal ocean to predictive systems for biological, chemical, and geological processes to achieve an interdisciplinary capability. COPS will provide the basis for effective monitoring and prediction of coastal ocean conditions by optimizing the use of increased scientific understanding, improved observations, advanced computer models, and computer graphics to make the best possible estimates of sea level, currents, temperatures, salinities, and other properties of entire coastal regions.
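The simplest form of the data assimilation this record describes is Newtonian relaxation ("nudging"): at each step the model forecast is pulled toward the incoming observation. Operational systems use far more sophisticated variational or ensemble schemes; the gain, drift and values below are purely illustrative.

```python
# Nudging sketch: a biased model step followed by relaxation toward
# each real-time observation.

def nudge_forecast(x0, observations, gain=0.3, drift=0.05):
    x = x0
    for obs in observations:
        x = x + drift              # imperfect (biased) model step
        x = x + gain * (obs - x)   # relax toward the observation
    return x
```

With the gain set to zero the model free-runs and drifts away from the truth; with a positive gain the assimilated state stays close to the observed value.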

  2. Optimal Modality Selection for Cooperative Human-Robot Task Completion.

    PubMed

    Jacob, Mithun George; Wachs, Juan P

    2016-12-01

    Human-robot cooperation in complex environments must be fast, accurate, and resilient. This requires efficient communication channels where robots need to assimilate information using a plethora of verbal and nonverbal modalities such as hand gestures, speech, and gaze. However, even though hybrid human-robot communication frameworks and multimodal communication have been studied, a systematic methodology for designing multimodal interfaces does not exist. This paper addresses the gap by proposing a novel methodology to generate multimodal lexicons which maximizes multiple performance metrics over a wide range of communication modalities (i.e., lexicons). The metrics are obtained through a mixture of simulation and real-world experiments. The methodology is tested in a surgical setting where a robot cooperates with a surgeon to complete a mock abdominal incision and closure task by delivering surgical instruments. Experimental results show that predicted optimal lexicons significantly outperform predicted suboptimal lexicons (p < 0.05) in all metrics validating the predictability of the methodology. The methodology is validated in two scenarios (with and without modeling the risk of a human-robot collision) and the differences in the lexicons are analyzed.

  3. Optimizing Travel Time to Outpatient Interventional Radiology Procedures in a Multi-Site Hospital System Using a Google Maps Application.

    PubMed

    Mandel, Jacob E; Morel-Ovalle, Louis; Boas, Franz E; Ziv, Etay; Yarmohammadi, Hooman; Deipolyi, Amy; Mohabir, Heeralall R; Erinjeri, Joseph P

    2018-02-20

    The purpose of this study is to determine whether a custom Google Maps application can optimize site selection when scheduling outpatient interventional radiology (IR) procedures within a multi-site hospital system. The Google Maps for Business Application Programming Interface (API) was used to develop an internal web application that uses real-time traffic data to determine the estimated travel time (ETT; minutes) and estimated travel distance (ETD; miles) from a patient's home to each nearby IR facility in our hospital system. Hypothetical patient home addresses based on the 33 cities comprising our institution's catchment area were used to determine the optimal IR site for hypothetical patients traveling from each city based on real-time traffic conditions. For 10/33 (30%) cities, there was discordance between the optimal IR site based on ETT and the optimal IR site based on ETD at non-rush hour or rush hour times. By choosing to travel to an IR site based on ETT rather than ETD, patients from discordant cities were predicted to save an average of 7.29 min during non-rush hour (p = 0.03), and 28.80 min during rush hour (p < 0.001). Using a custom Google Maps application to schedule outpatients for IR procedures can effectively reduce patient travel time when more than one location providing IR procedures is available within the same hospital system.
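The selection logic itself is small: pick the site that minimizes estimated travel time rather than distance, and flag the cases where the two criteria disagree (the record's "discordant" cities). Real values would come from the Google Maps API; the numbers below are made up.

```python
# Site selection by travel time (ETT) vs distance (ETD), with a
# discordance check.

def choose_site(estimates, key):
    # estimates: {site: {"ett": minutes, "etd": miles}}
    return min(estimates, key=lambda s: estimates[s][key])

def discordant(estimates):
    return choose_site(estimates, "ett") != choose_site(estimates, "etd")
```

In the example tested below, the nearer site is slower under traffic, so the ETT and ETD choices diverge, which is exactly the situation where routing by ETT saves time.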

  4. Correlations in state space can cause sub-optimal adaptation of optimal feedback control models.

    PubMed

    Aprasoff, Jonathan; Donchin, Opher

    2012-04-01

    Control of our movements is apparently facilitated by an adaptive internal model in the cerebellum. It was long thought that this internal model implemented an adaptive inverse model and generated motor commands, but recently many reject that idea in favor of a forward model hypothesis. In theory, the forward model predicts the upcoming state during reaching movements so the motor cortex can generate appropriate motor commands. Recent computational models of this process rely on the optimal feedback control (OFC) framework of control theory. Although OFC is a powerful tool for describing motor control, it does not describe adaptation. Some assume that adaptation of the forward model alone could explain motor adaptation, but this is widely understood to be overly simplistic. However, an adaptive optimal controller is difficult to implement. A reasonable alternative is to allow forward model adaptation to 're-tune' the controller. Our simulations show that, as expected, forward model adaptation alone does not produce optimal trajectories during reaching movements perturbed by force fields. However, they also show that re-optimizing the controller from the forward model can be sub-optimal. This is because, in a system with state correlations or redundancies, accurate prediction requires different information than optimal control. We find that adding noise to the movements that matches noise found in human data is enough to overcome this problem. However, since the state space for control of real movements is far more complex than in our simple simulations, the effects of correlations on re-adaptation of the controller from the forward model cannot be overlooked.

  5. Brief communication: Post-seismic landslides, the tough lesson of a catastrophe

    NASA Astrophysics Data System (ADS)

    Fan, Xuanmei; Xu, Qiang; Scaringi, Gianvito

    2018-01-01

    The rock avalanche that destroyed the village of Xinmo in Sichuan, China, on 24 June 2017, brought the issue of landslide risk and disaster chain management in highly seismic regions back into the spotlight. The long-term post-seismic behaviour of mountain slopes is complex and hardly predictable. Nevertheless, the integrated use of field monitoring, remote sensing and real-time predictive modelling can help to set up effective early warning systems, provide timely alarms, optimize rescue operations, and perform secondary hazard assessments. We believe that a comprehensive discussion on post-seismic slope stability and on its implications for policy makers can no longer be postponed.

  6. Adaptive MPC based on MIMO ARX-Laguerre model.

    PubMed

    Ben Abdelwahed, Imen; Mbarek, Abdelkader; Bouzrara, Kais

    2017-03-01

    This paper proposes a method for synthesizing an adaptive predictive controller using a reduced-complexity model. The latter is obtained by projecting the ARX model onto Laguerre bases. The resulting model, termed MIMO ARX-Laguerre, is characterized by an easy recursive representation. The adaptive predictive control law is computed based on multi-step-ahead finite-element predictors, identified directly from experimental input/output data. The model is tuned in each iteration by an online identification algorithm for both the model parameters and the Laguerre poles. The proposed approach avoids the time-consuming numerical optimization algorithms associated with most common linear predictive control strategies, which makes it suitable for real-time implementation. The method is used to synthesize and test, in numerical simulations, adaptive predictive controllers for the CSTR process benchmark. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
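The online-identification step above can be illustrated with plain recursive least squares (RLS) on a first-order ARX model. This is a generic sketch of iteration-by-iteration parameter updating, not the paper's MIMO ARX-Laguerre formulation (which also adapts the Laguerre poles); the system coefficients 0.8 and 0.5 are invented for the demo.

```python
# Minimal sketch of online (recursive least squares) identification of a
# first-order ARX model y(k) = a*y(k-1) + b*u(k-1).
import random

def rls_identify(u, y, lam=1.0):
    theta = [0.0, 0.0]                      # [a, b] estimates
    P = [[1e6, 0.0], [0.0, 1e6]]            # large initial covariance
    for k in range(1, len(y)):
        phi = [y[k - 1], u[k - 1]]          # regressor vector
        # Gain K = P*phi / (lam + phi'*P*phi); P stays symmetric throughout.
        Pphi = [P[0][0]*phi[0] + P[0][1]*phi[1],
                P[1][0]*phi[0] + P[1][1]*phi[1]]
        denom = lam + phi[0]*Pphi[0] + phi[1]*Pphi[1]
        K = [Pphi[0]/denom, Pphi[1]/denom]
        err = y[k] - (phi[0]*theta[0] + phi[1]*theta[1])
        theta = [theta[0] + K[0]*err, theta[1] + K[1]*err]
        # Covariance update P = (P - K*phi'*P) / lam
        P = [[(P[i][j] - K[i]*Pphi[j]) / lam for j in range(2)]
             for i in range(2)]
    return theta

random.seed(1)
u = [random.uniform(-1, 1) for _ in range(200)]
y = [0.0]
for k in range(1, 200):
    y.append(0.8*y[k-1] + 0.5*u[k-1])       # true system: a=0.8, b=0.5

a_hat, b_hat = rls_identify(u, y)
print(round(a_hat, 3), round(b_hat, 3))     # close to 0.8 and 0.5
```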

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heredia-Langner, Alejandro; Amidan, Brett G.; Matzner, Shari

    We present results from the optimization of a re-identification process using two sets of biometric data obtained from the Civilian American and European Surface Anthropometry Resource Project (CAESAR) database. The datasets contain real measurements of features for 2378 individuals in a standing (43 features) and seated (16 features) position. A genetic algorithm (GA) was used to search a large combinatorial space where different features are available between the probe (seated) and gallery (standing) datasets. Results show that optimized model predictions obtained using less than half of the 43 gallery features and data from roughly 16% of the individuals available produce better re-identification rates than two other approaches that use all the information available.
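A GA searching over feature subsets, as above, can be sketched in a few lines. The fitness function here is a made-up stand-in that rewards a hypothetical "informative" subset and penalizes subset size; a real application would score held-out re-identification rate instead.

```python
# Toy genetic algorithm over binary feature-subset masks, in the spirit of
# the CAESAR re-identification study.  INFORMATIVE is a hypothetical set of
# truly useful feature indices, used only to define a demo fitness.
import random

N_FEATURES = 10
INFORMATIVE = {1, 3, 4, 8}

def fitness(mask):
    chosen = {i for i, bit in enumerate(mask) if bit}
    return len(chosen & INFORMATIVE) - 0.1 * len(chosen)

def evolve(pop_size=30, generations=60):
    random.seed(0)
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_FEATURES)   # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(N_FEATURES)] ^= 1  # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print([i for i, bit in enumerate(best) if bit])
```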

  8. Impacts of Earth rotation parameters on GNSS ultra-rapid orbit prediction: Derivation and real-time correction

    NASA Astrophysics Data System (ADS)

    Wang, Qianxin; Hu, Chao; Xu, Tianhe; Chang, Guobin; Hernández Moraleda, Alberto

    2017-12-01

    Analysis centers (ACs) for global navigation satellite systems (GNSSs) cannot accurately obtain real-time Earth rotation parameters (ERPs). Thus, the prediction of ultra-rapid orbits in the international terrestrial reference system (ITRS) has to utilize the predicted ERPs issued by the International Earth Rotation and Reference Systems Service (IERS) or the International GNSS Service (IGS). In this study, the accuracy of ERPs predicted by IERS and IGS is analyzed. The error of the ERPs predicted for one day can reach 0.15 mas and 0.053 ms in the polar motion and UT1-UTC directions, respectively. Then, the impact of ERP errors on GNSS ultra-rapid orbit prediction is studied. The ways in which ERP errors enter orbit integration and frame transformation dominate the accuracy of the predicted orbit. Experimental results show that the transformation from the geocentric celestial reference system (GCRS) to ITRS exerts the strongest effect on the accuracy of the predicted ultra-rapid orbit. To obtain the most accurate predicted ultra-rapid orbit, a corresponding real-time orbit correction method is developed. First, orbits without ERP-related errors are predicted on the basis of the observed part of the ultra-rapid orbit in ITRS for use as a reference. Then, the corresponding predicted orbit is transformed from GCRS to ITRS to adjust for the predicted ERPs. Finally, the corrected ERPs with error slopes are re-introduced to correct the predicted orbit in ITRS. To validate the proposed method, three experimental schemes are designed: function extrapolation, simulation experiments, and experiments with predicted ultra-rapid orbits and international GNSS Monitoring and Assessment System (iGMAS) products. Experimental results show that using the proposed correction method with IERS products considerably improved the accuracy of ultra-rapid orbit prediction (except for the geosynchronous BeiDou orbits).
The accuracy of orbit prediction is enhanced by at least 50% (in the ERP-related error) when a highly accurate observed orbit is used with the correction method. For iGMAS-predicted orbits, the accuracy improvement ranges from 8.5% for the inclined BeiDou orbits to 17.99% for the GPS orbits. This demonstrates that the correction method proposed in this study can optimize ultra-rapid orbit prediction.

  9. DyHAP: Dynamic Hybrid ANFIS-PSO Approach for Predicting Mobile Malware.

    PubMed

    Afifi, Firdaus; Anuar, Nor Badrul; Shamshirband, Shahaboddin; Choo, Kim-Kwang Raymond

    2016-01-01

    To deal with the large number of malicious mobile applications (e.g., mobile malware), a number of malware detection systems have been proposed in the literature. In this paper, we propose a hybrid method to find the optimum parameters that can be used to facilitate mobile malware identification. We also present a multi-agent system architecture comprising three system agents (i.e., sniffer, extraction and selection agents) to capture and manage the pcap file for the data preparation phase. In our hybrid approach, we combine an adaptive neuro-fuzzy inference system (ANFIS) and particle swarm optimization (PSO). Evaluations using data captured on a real-world Android device and the MalGenome dataset demonstrate the effectiveness of our approach in comparison to two other hybrid optimization methods, differential evolution (ANFIS-DE) and ant colony optimization (ANFIS-ACO).
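The parameter-search half of such a hybrid can be shown with a generic particle swarm optimizer. This sketch minimizes a simple test function in place of the ANFIS training error; the ANFIS side (fuzzy rule tuning) is not reproduced, and all swarm hyperparameters are conventional defaults, not the paper's.

```python
# Generic particle swarm optimization (PSO) on the sphere function.
import random

def sphere(x):                       # objective: minimum 0 at the origin
    return sum(v * v for v in x)

def pso(dim=3, swarm=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    random.seed(0)
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                  # personal bests
    gbest = min(pbest, key=sphere)[:]            # global best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if sphere(pos[i]) < sphere(pbest[i]):
                pbest[i] = pos[i][:]
                if sphere(pbest[i]) < sphere(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso()
print(sphere(best))                  # very close to 0
```

In the paper's setting, `sphere` would be replaced by the ANFIS prediction error as a function of its tunable parameters.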

  10. DyHAP: Dynamic Hybrid ANFIS-PSO Approach for Predicting Mobile Malware

    PubMed Central

    Afifi, Firdaus; Anuar, Nor Badrul; Shamshirband, Shahaboddin

    2016-01-01

    To deal with the large number of malicious mobile applications (e.g., mobile malware), a number of malware detection systems have been proposed in the literature. In this paper, we propose a hybrid method to find the optimum parameters that can be used to facilitate mobile malware identification. We also present a multi-agent system architecture comprising three system agents (i.e., sniffer, extraction and selection agents) to capture and manage the pcap file for the data preparation phase. In our hybrid approach, we combine an adaptive neuro-fuzzy inference system (ANFIS) and particle swarm optimization (PSO). Evaluations using data captured on a real-world Android device and the MalGenome dataset demonstrate the effectiveness of our approach in comparison to two other hybrid optimization methods, differential evolution (ANFIS-DE) and ant colony optimization (ANFIS-ACO). PMID:27611312

  11. Prediction of Depression in Cancer Patients With Different Classification Criteria, Linear Discriminant Analysis versus Logistic Regression.

    PubMed

    Shayan, Zahra; Mohammad Gholi Mezerji, Naser; Shayan, Leila; Naseri, Parisa

    2015-11-03

    Logistic regression (LR) and linear discriminant analysis (LDA) are two popular statistical models for prediction of group membership. Although they are very similar, LDA makes more assumptions about the data. When categorical and continuous variables are used simultaneously, the optimal choice between the two models is questionable. In most studies, classification error (CE) is used to discriminate between subjects in several groups, but this index is not suitable to predict the accuracy of the outcome. The present study compared LR and LDA models using classification indices. This cross-sectional study selected 243 cancer patients. Sample sets of different sizes (n = 50, 100, 150, 200, 220) were randomly selected and the CE, B, and Q classification indices were calculated by the LR and LDA models. CE revealed a lack of superiority for one model over the other, but the results showed that LR performed better than LDA for the B and Q indices in all situations. No significant effect of sample size on CE was noted for selection of an optimal model. Assessment of the accuracy of prediction on real data indicated that the B and Q indices are appropriate for selection of an optimal model. The results of this study showed that LR performs better in some cases and LDA in others when based on CE. The CE index is not appropriate for classification, whereas the B and Q indices performed better and offered more efficient criteria for comparison and discrimination between groups.

  12. PSO-MISMO modeling strategy for multistep-ahead time series prediction.

    PubMed

    Bao, Yukun; Xiong, Tao; Hu, Zhongyi

    2014-05-01

    Multistep-ahead time series prediction is one of the most challenging research topics in the field of time series modeling and prediction, and remains under active research. Recently, the multiple-input several multiple-outputs (MISMO) modeling strategy has been proposed as a promising alternative for multistep-ahead time series prediction, exhibiting advantages compared with the two currently dominating strategies, the iterated and the direct strategies. Built on the established MISMO strategy, this paper proposes a particle swarm optimization (PSO)-based MISMO modeling strategy, which is capable of determining the number of sub-models in a self-adaptive mode, with varying prediction horizons. Rather than deriving crisp divides with equal-sized prediction horizons as in the established MISMO, the proposed PSO-MISMO strategy, implemented with neural networks, employs a heuristic to create flexible divides with varying sizes of prediction horizons and to generate corresponding sub-models, providing considerable flexibility in model construction. The approach has been validated with simulated and real datasets.
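The two baseline strategies that MISMO-style methods interpolate between are easy to show concretely: "iterated" fits one one-step model and applies it recursively, while "direct" fits a separate model per horizon. The sketch below uses ordinary least squares on one lagged value and a noiseless toy series; the PSO-driven choice of sub-model divides is not reproduced.

```python
# Iterated vs. direct multistep-ahead forecasting on a toy series.

def fit_ar1(xs, ys):
    """Least-squares slope/intercept for y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

series = [0.9 ** k for k in range(30)]       # toy series with decay 0.9
H = 3                                         # forecast horizon

# Iterated strategy: fit a one-step model, apply it H times.
a1, b1 = fit_ar1(series[:-1], series[1:])
x = series[-1]
for _ in range(H):
    x = a1 * x + b1
iterated_forecast = x

# Direct strategy: fit a separate model mapping x(t) -> x(t+H).
aH, bH = fit_ar1(series[:-H], series[H:])
direct_forecast = aH * series[-1] + bH

print(iterated_forecast, direct_forecast)    # both near 0.9**32
```

On noisy data the two strategies diverge (iterated forecasts accumulate error; direct models need more data per horizon), which is the trade-off MISMO's sub-model partitioning addresses.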

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Weizhao; Ren, Huaqing; Lu, Jie

    This paper reports several methods for characterizing the properties of uncured woven prepreg during the preforming process. Uniaxial tension, bias-extension, and bending tests are conducted to measure the in-plane properties of the material. Friction tests are utilized to reveal the prepreg-prepreg and prepreg-forming tool interactions. All these tests are performed within the temperature range of the real manufacturing process. The results serve as inputs to the numerical simulation for product prediction and preforming process parameter optimization.

  14. Real-time eutrophication status evaluation of coastal waters using support vector machine with grid search algorithm.

    PubMed

    Kong, Xianyu; Sun, Yuyan; Su, Rongguo; Shi, Xiaoyong

    2017-06-15

    The development of techniques for real-time monitoring of the eutrophication status of coastal waters is of great importance for realizing potential cost savings in coastal monitoring programs and providing timely advice for marine health management. In this study, a grid search (GS)-optimized support vector machine (SVM) was proposed to model relationships between 6 easily measured parameters (DO, Chl-a, C1, C2, C3 and C4) and the TRIX index for rapidly assessing marine eutrophication states of coastal waters. The good predictive performance of the developed method was indicated by the R² between the measured and predicted values (0.92 for the training dataset and 0.91 for the validation dataset) at a 95% confidence level. The classification accuracy of the eutrophication status was 86.5% for the training dataset and 85.6% for the validation dataset. The results indicated that it is feasible to develop an SVM technique for timely evaluation of the eutrophication status from easily measured parameters. Copyright © 2017. Published by Elsevier Ltd.
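Grid search itself is just an exhaustive sweep over hyperparameter combinations scored on a validation set. A minimal sketch, using one-dimensional ridge regression as a stand-in model (the study tunes SVM parameters such as C and gamma instead); the data and the regularization grid are invented.

```python
# Minimal grid-search sketch: try every hyperparameter combination, keep
# the one with the best validation score.
import itertools

train = [(x, 2.0 * x) for x in range(1, 9)]          # toy data: y = 2x
valid = [(x, 2.0 * x) for x in range(9, 13)]

def fit_ridge(data, lam):
    """Closed-form slope for y ~ w*x with L2 penalty lam."""
    sxy = sum(x * y for x, y in data)
    sxx = sum(x * x for x, _ in data)
    return sxy / (sxx + lam)

def valid_mse(w):
    return sum((y - w * x) ** 2 for x, y in valid) / len(valid)

grid = {"lam": [0.0, 0.1, 1.0, 10.0]}
best = min(
    (dict(zip(grid, combo)) for combo in itertools.product(*grid.values())),
    key=lambda params: valid_mse(fit_ridge(train, params["lam"])),
)
print(best)
```

With multiple parameters, `itertools.product` enumerates the full Cartesian grid, which is exactly how a GS-SVM sweeps (C, gamma) pairs.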

  15. Real-Time System for Water Modeling and Management

    NASA Astrophysics Data System (ADS)

    Lee, J.; Zhao, T.; David, C. H.; Minsker, B.

    2012-12-01

    Working closely with the Texas Commission on Environmental Quality (TCEQ) and the University of Texas at Austin (UT-Austin), we are developing a real-time system for water modeling and management using advanced cyberinfrastructure, data integration and geospatial visualization, and numerical modeling. The state of Texas suffered a severe drought in 2011 that cost the state $7.62 billion in agricultural losses (crops and livestock). Devastating situations such as this could potentially be avoided with better water modeling and management strategies that incorporate state of the art simulation and digital data integration. The goal of the project is to prototype a near-real-time decision support system for river modeling and management in Texas that can serve as a national and international model to promote more sustainable and resilient water systems. The system uses National Weather Service current and predicted precipitation data as input to the Noah-MP Land Surface model, which forecasts runoff, soil moisture, evapotranspiration, and water table levels given land surface features. These results are then used by a river model called RAPID, along with an error model currently under development at UT-Austin, to forecast stream flows in the rivers. Model forecasts are visualized as a Web application for TCEQ decision makers, who issue water diversion (withdrawal) permits and any needed drought restrictions; permit holders; and reservoir operation managers. Users will be able to adjust model parameters to predict the impacts of alternative curtailment scenarios or weather forecasts. A real-time optimization system under development will help TCEQ to identify optimal curtailment strategies to minimize impacts on permit holders and protect health and safety. To develop the system we have implemented RAPID as a remotely-executed modeling service using the Cyberintegrator workflow system with input data downloaded from the North American Land Data Assimilation System. 
The Cyberintegrator workflow system provides RESTful web services for users to provide inputs, execute workflows, and retrieve outputs. Along with REST endpoints, PAW (Publishable Active Workflows) provides the web user interface toolkit for us to develop web applications with scientific workflows. The prototype web application is built on top of workflows with PAW, so that users will have a user-friendly web environment to provide input parameters, execute the model, and visualize/retrieve the results using geospatial mapping tools. In future work the optimization model will be developed and integrated into the workflow.

  16. Error-based analysis of optimal tuning functions explains phenomena observed in sensory neurons.

    PubMed

    Yaeli, Steve; Meir, Ron

    2010-01-01

    Biological systems display impressive capabilities in effectively responding to environmental signals in real time. There is increasing evidence that organisms may indeed be employing near optimal Bayesian calculations in their decision-making. An intriguing question relates to the properties of optimal encoding methods, namely determining the properties of neural populations in sensory layers that optimize performance, subject to physiological constraints. Within an ecological theory of neural encoding/decoding, we show that optimal Bayesian performance requires neural adaptation which reflects environmental changes. Specifically, we predict that neuronal tuning functions possess an optimal width, which increases with prior uncertainty and environmental noise, and decreases with the decoding time window. Furthermore, even for static stimuli, we demonstrate that dynamic sensory tuning functions, acting at relatively short time scales, lead to improved performance. Interestingly, the narrowing of tuning functions as a function of time was recently observed in several biological systems. Such results set the stage for a functional theory which may explain the high reliability of sensory systems, and the utility of neuronal adaptation occurring at multiple time scales.

  17. Optimization of Straight Cylindrical Turning Using Artificial Bee Colony (ABC) Algorithm

    NASA Astrophysics Data System (ADS)

    Prasanth, Rajanampalli Seshasai Srinivasa; Hans Raj, Kandikonda

    2017-04-01

    Artificial bee colony (ABC) algorithm, which mimics the intelligent foraging behavior of honey bees, is increasingly gaining acceptance in the field of process optimization, as it is capable of handling nonlinearity, complexity and uncertainty. Straight cylindrical turning is a complex and nonlinear machining process which involves the selection of appropriate cutting parameters that affect the quality of the workpiece. This paper presents the estimation of optimal cutting parameters of the straight cylindrical turning process using the ABC algorithm. The ABC algorithm is first tested on four benchmark problems of numerical optimization and its performance is compared with the genetic algorithm (GA) and the ant colony optimization (ACO) algorithm. Results indicate that the rate of convergence of the ABC algorithm is better than that of GA and ACO. Then, the ABC algorithm is used to predict optimal cutting parameters such as cutting speed, feed rate, depth of cut and tool nose radius to achieve good surface finish. Results indicate that the ABC algorithm achieved a surface finish comparable to those of the real-coded genetic algorithm and the differential evolution algorithm.
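A compact ABC sketch on a standard benchmark shows the three phases (employed bees, onlookers, scouts) that the record refers to. This follows the common textbook formulation under assumed default parameters; in the paper's application, `sphere` would be replaced by a surface-roughness model of the turning process.

```python
# Compact artificial bee colony (ABC) on the sphere benchmark.
import random

def sphere(x):
    return sum(v * v for v in x)

def abc(dim=3, n_sources=15, iters=150, limit=20, lo=-5.0, hi=5.0):
    random.seed(0)
    foods = [[random.uniform(lo, hi) for _ in range(dim)]
             for _ in range(n_sources)]
    trials = [0] * n_sources

    def neighbor_search(i):
        # Perturb one dimension toward/away from a random other source.
        k = random.choice([j for j in range(n_sources) if j != i])
        d = random.randrange(dim)
        cand = foods[i][:]
        cand[d] += random.uniform(-1, 1) * (foods[i][d] - foods[k][d])
        if sphere(cand) < sphere(foods[i]):      # greedy selection
            foods[i], trials[i] = cand, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):               # employed-bee phase
            neighbor_search(i)
        fits = [1.0 / (1.0 + sphere(f)) for f in foods]
        total = sum(fits)
        for _ in range(n_sources):               # onlooker phase (roulette)
            r, acc = random.uniform(0, total), 0.0
            for i, fit in enumerate(fits):
                acc += fit
                if acc >= r:
                    neighbor_search(i)
                    break
        for i in range(n_sources):               # scout phase: abandon stale sources
            if trials[i] > limit:
                foods[i] = [random.uniform(lo, hi) for _ in range(dim)]
                trials[i] = 0
    return min(foods, key=sphere)

best = abc()
print(sphere(best))                              # near 0
```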

  18. Optimal Design of Low-Density SNP Arrays for Genomic Prediction: Algorithm and Applications.

    PubMed

    Wu, Xiao-Lin; Xu, Jiaqi; Feng, Guofei; Wiggans, George R; Taylor, Jeremy F; He, Jun; Qian, Changsong; Qiu, Jiansheng; Simpson, Barry; Walker, Jeremy; Bauck, Stewart

    2016-01-01

    Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for the optimal design of LD SNP chips. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optimal LD SNP chips that can be imputed accurately to medium-density (MD) or high-density (HD) SNP genotypes for genomic prediction. The objective function facilitates maximization of non-gap map length and system information for the SNP chip, and the latter is computed either as locus-averaged (LASE) or haplotype-averaged Shannon entropy (HASE) and adjusted for uniformity of the SNP distribution. HASE performed better than LASE with ≤1,000 SNPs, but required considerably more computing time. Nevertheless, the differences diminished when >5,000 SNPs were selected. Optimization was accomplished conditionally on the presence of SNPs that were obligated to each chromosome. The frame location of SNPs on a chip can be either uniform (evenly spaced) or non-uniform. For the latter design, a tunable empirical Beta distribution was used to guide location distribution of frame SNPs such that both ends of each chromosome were enriched with SNPs. The SNP distribution on each chromosome was finalized through the objective function that was locally and empirically maximized. This MOLO algorithm was capable of selecting a set of approximately evenly-spaced and highly-informative SNPs, which in turn led to increased imputation accuracy compared with selection solely of evenly-spaced SNPs. Imputation accuracy increased with LD chip size, and imputation error rate was extremely low for chips with ≥3,000 SNPs. Assuming that genotyping or imputation error occurs at random, imputation error rate can be viewed as the upper limit for genomic prediction error. Our results show that about 25% of imputation error rate was propagated to genomic prediction in an Angus population. 
The utility of this MOLO algorithm was also demonstrated in a real application, in which a 6K SNP panel was optimized conditional on 5,260 obligatory SNPs selected based on SNP-trait associations in U.S. Holstein animals. With this MOLO algorithm, both the imputation error rate and the genomic prediction error rate were minimal.
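The locus-averaged Shannon entropy (LASE) term in the objective above is straightforward to compute: each SNP contributes the entropy of its genotype frequencies, and panels with higher average entropy carry more information. A minimal sketch with invented genotype counts:

```python
# Locus-averaged Shannon entropy (LASE) for a candidate SNP panel.
import math

def locus_entropy(genotypes):
    """Shannon entropy (bits) of one SNP's genotype counts, e.g. AA/AB/BB."""
    n = sum(genotypes.values())
    return -sum((c / n) * math.log2(c / n)
                for c in genotypes.values() if c > 0)

def lase(panel):
    return sum(locus_entropy(g) for g in panel) / len(panel)

# A maximally informative biallelic SNP (50/50 alleles under Hardy-Weinberg)
# vs. a nearly fixed, uninformative one.
panel = [
    {"AA": 25, "AB": 50, "BB": 25},   # entropy 1.5 bits
    {"AA": 96, "AB": 4,  "BB": 0},    # low entropy
]
print(round(lase(panel), 3))
```

The MOLO objective combines this information measure with non-gap map length and spacing-uniformity adjustments, which are not shown here.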

  19. Optimal Design of Low-Density SNP Arrays for Genomic Prediction: Algorithm and Applications

    PubMed Central

    Wu, Xiao-Lin; Xu, Jiaqi; Feng, Guofei; Wiggans, George R.; Taylor, Jeremy F.; He, Jun; Qian, Changsong; Qiu, Jiansheng; Simpson, Barry; Walker, Jeremy; Bauck, Stewart

    2016-01-01

    Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for the optimal design of LD SNP chips. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optimal LD SNP chips that can be imputed accurately to medium-density (MD) or high-density (HD) SNP genotypes for genomic prediction. The objective function facilitates maximization of non-gap map length and system information for the SNP chip, and the latter is computed either as locus-averaged (LASE) or haplotype-averaged Shannon entropy (HASE) and adjusted for uniformity of the SNP distribution. HASE performed better than LASE with ≤1,000 SNPs, but required considerably more computing time. Nevertheless, the differences diminished when >5,000 SNPs were selected. Optimization was accomplished conditionally on the presence of SNPs that were obligated to each chromosome. The frame location of SNPs on a chip can be either uniform (evenly spaced) or non-uniform. For the latter design, a tunable empirical Beta distribution was used to guide location distribution of frame SNPs such that both ends of each chromosome were enriched with SNPs. The SNP distribution on each chromosome was finalized through the objective function that was locally and empirically maximized. This MOLO algorithm was capable of selecting a set of approximately evenly-spaced and highly-informative SNPs, which in turn led to increased imputation accuracy compared with selection solely of evenly-spaced SNPs. Imputation accuracy increased with LD chip size, and imputation error rate was extremely low for chips with ≥3,000 SNPs. Assuming that genotyping or imputation error occurs at random, imputation error rate can be viewed as the upper limit for genomic prediction error. Our results show that about 25% of imputation error rate was propagated to genomic prediction in an Angus population. 
The utility of this MOLO algorithm was also demonstrated in a real application, in which a 6K SNP panel was optimized conditional on 5,260 obligatory SNPs selected based on SNP-trait associations in U.S. Holstein animals. With this MOLO algorithm, both the imputation error rate and the genomic prediction error rate were minimal. PMID:27583971

  20. Probabilistic Cloning of Three Real States with Optimal Success Probabilities

    NASA Astrophysics Data System (ADS)

    Rui, Pin-shu

    2017-06-01

    We investigate the probabilistic quantum cloning (PQC) of three real states with average probability distribution. To get the analytic forms of the optimal success probabilities, we assume that the three states have only two pairwise inner products. Based on the optimal success probabilities, we derive the explicit form of the 1→2 PQC for cloning three real states. The unitary operation needed in the PQC process is worked out too. The optimal success probabilities are also generalized to the M→N PQC case.

  1. On-line self-learning time forward voltage prognosis for lithium-ion batteries using adaptive neuro-fuzzy inference system

    NASA Astrophysics Data System (ADS)

    Fleischer, Christian; Waag, Wladislaw; Bai, Ziou; Sauer, Dirk Uwe

    2013-12-01

    The battery management system (BMS) of a battery-electric road vehicle must ensure an optimal operation of the electrochemical storage system to guarantee durability and reliability. In particular, the BMS must provide precise information about the battery's state-of-functionality, i.e., how much discharging/charging power the battery can accept in its current state and condition, while at the same time preventing it from operating outside its safe operating area. These critical limits have to be calculated in a predictive manner, and they serve as a significant input factor for the supervising vehicle energy management (VEM). The VEM must provide enough power to the vehicle's drivetrain for certain tasks, especially in critical driving situations. Therefore, this paper describes a new approach which can be used for state-of-available-power estimation with respect to lowest/highest cell voltage prediction using an adaptive neuro-fuzzy inference system (ANFIS). The estimated voltage for a given time frame in the future is directly compared with the actual voltage, verifying the effectiveness of the approach with a relative voltage prediction error of less than 1%. Moreover, the real-time operating capability of the proposed algorithm was verified on a battery test bench while running on a real-time system performing voltage prediction.

  2. Multi-instance multi-label distance metric learning for genome-wide protein function prediction.

    PubMed

    Xu, Yonghui; Min, Huaqing; Song, Hengjie; Wu, Qingyao

    2016-08-01

    Multi-instance multi-label (MIML) learning has been proven to be effective for the genome-wide protein function prediction problems where each training example is associated with not only multiple instances but also multiple class labels. To find an appropriate MIML learning method for genome-wide protein function prediction, many studies in the literature attempted to optimize objective functions in which dissimilarity between instances is measured using the Euclidean distance. But in many real applications, Euclidean distance may be unable to capture the intrinsic similarity/dissimilarity in feature space and label space. Unlike other previous approaches, in this paper, we propose to learn a multi-instance multi-label distance metric learning framework (MIMLDML) for genome-wide protein function prediction. Specifically, we learn a Mahalanobis distance to preserve and utilize the intrinsic geometric information of both feature space and label space for MIML learning. In addition, we try to deal with the sparsely labeled data by giving weight to the labeled data. Extensive experiments on seven real-world organisms covering the biological three-domain system (i.e., archaea, bacteria, and eukaryote; Woese et al., 1990) show that the MIMLDML algorithm is superior to most state-of-the-art MIML learning algorithms. Copyright © 2016 Elsevier Ltd. All rights reserved.
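The core quantity behind the MIMLDML idea is a learned Mahalanobis distance d_M(x, y) = sqrt((x - y)ᵀ M (x - y)), which reduces to the Euclidean distance when M is the identity. A minimal sketch; the matrix M below is a hand-picked positive-definite example, not a metric learned by the paper's framework.

```python
# Mahalanobis distance under an (illustrative) learned metric M.
import math

def mahalanobis(x, y, M):
    d = [a - b for a, b in zip(x, y)]
    Md = [sum(M[i][j] * d[j] for j in range(len(d))) for i in range(len(d))]
    return math.sqrt(sum(di * mdi for di, mdi in zip(d, Md)))

x, y = [1.0, 2.0], [3.0, 1.0]
identity = [[1.0, 0.0], [0.0, 1.0]]
M = [[2.0, 0.5], [0.5, 1.0]]          # assumed positive-definite metric

print(mahalanobis(x, y, identity))    # Euclidean distance: sqrt(5)
print(mahalanobis(x, y, M))           # metric-weighted distance: sqrt(7)
```

Metric learning chooses M so that same-label bags end up close and different-label bags far apart, replacing the fixed Euclidean geometry criticized in the abstract.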

  3. Modelling Size Structured Food Webs Using a Modified Niche Model with Two Predator Traits

    PubMed Central

    Klecka, Jan

    2014-01-01

    The structure of food webs is frequently described using phenomenological stochastic models. A prominent example, the niche model, was found to produce artificial food webs resembling real food webs according to a range of summary statistics. However, the size structure of food webs generated by the niche model and of real food webs has not yet been rigorously compared. To fill this void, I use a body-mass-based version of the niche model and compare prey-predator body mass allometry and predator-prey body mass ratios predicted by the model to empirical data. The results show that the model predicts weaker size structure than observed in many real food webs. I introduce a modified version of the niche model which allows the strength of the size-dependence of predator-prey links to be controlled. In this model, optimal prey body mass depends allometrically on predator body mass and on a second trait, such as foraging mode. These empirically motivated extensions allow the size structure of real food webs to be represented realistically and can be used to generate artificial food webs varying in several aspects of size structure in a controlled way. Hence, by explicitly including the role of species traits, this model provides new opportunities for simulating the consequences of size structure for food web dynamics and stability. PMID:25119999
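For readers unfamiliar with the baseline, the classic (body-mass-free) niche model of Williams and Martinez can be sketched in a few lines: each species gets a niche value, a feeding range, and a range centre, and eats everything inside the range. The paper's modification, which replaces the niche axis with body mass and adds a second predator trait, is not reproduced here.

```python
# Simplified niche model: species i eats all species whose niche value
# falls in [c_i - r_i/2, c_i + r_i/2].  The Beta parameter is chosen so the
# expected connectance matches the target C.
import random

def niche_model(n_species=20, connectance=0.15, seed=0):
    rng = random.Random(seed)
    n = sorted(rng.random() for _ in range(n_species))   # niche values
    beta = 1.0 / (2.0 * connectance) - 1.0               # E[x] = 2C for Beta(1, beta)
    links = set()
    for i in range(n_species):
        r = n[i] * rng.betavariate(1.0, beta)            # feeding range
        c = rng.uniform(r / 2.0, n[i])                   # range centre
        for j in range(n_species):
            if c - r / 2.0 <= n[j] <= c + r / 2.0:
                links.add((i, j))                        # i eats j
    return n, links

n, links = niche_model()
print(f"{len(links)} feeding links among {len(n)} species")
```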

  4. Integrating Predictive Modeling with Control System Design for Managed Aquifer Recharge and Recovery Applications

    NASA Astrophysics Data System (ADS)

    Drumheller, Z. W.; Regnery, J.; Lee, J. H.; Illangasekare, T. H.; Kitanidis, P. K.; Smits, K. M.

    2014-12-01

    Aquifers around the world show troubling signs of irreversible depletion and seawater intrusion as climate change, population growth, and urbanization lead to reduced natural recharge rates and overuse. Scientists and engineers have begun to re-investigate the technology of managed aquifer recharge and recovery (MAR) as a means to increase the reliability of the diminishing and increasingly variable groundwater supply. MAR systems offer the possibility of naturally increasing groundwater storage while improving the quality of impaired water used for recharge. Unfortunately, MAR systems remain fraught with operational challenges related to the quality and quantity of recharged and recovered water, stemming from a lack of data-driven, real-time control. Our project seeks to ease the operational challenges of MAR facilities through the implementation of active sensor networks, adaptively calibrated flow and transport models, and simulation-based meta-heuristic control optimization methods. The developed system works by continually collecting hydraulic and water quality data from a sensor network embedded within the aquifer. The data are fed into an inversion algorithm, which calibrates the parameters and initial conditions of a predictive flow and transport model. The calibrated model is passed to a meta-heuristic control optimization algorithm (e.g. genetic algorithm) to execute the simulations and determine the best course of action, i.e., the optimal pumping policy for current aquifer conditions. The optimal pumping policy is manually or autonomously applied. During operation, sensor data are used to assess the accuracy of the optimal prediction and augment the pumping strategy as needed. At laboratory scale, a small (18"H x 46"L) and an intermediate (6'H x 16'L) two-dimensional synthetic aquifer were constructed and outfitted with sensor networks.
Data collection and model inversion components were developed and sensor data were validated by analytical measurements.

  5. Aggregation Pheromone System: A Real-parameter Optimization Algorithm using Aggregation Pheromones as the Base Metaphor

    NASA Astrophysics Data System (ADS)

    Tsutsui, Shigeyosi

    This paper proposes an aggregation pheromone system (APS) for solving real-parameter optimization problems using the collective behavior of individuals which communicate using aggregation pheromones. APS was tested on several test functions used in evolutionary computation. The results showed APS could solve real-parameter optimization problems fairly well. The sensitivity analysis of control parameters of APS is also studied.

  6. Addressing the minimum fleet problem in on-demand urban mobility.

    PubMed

    Vazifeh, M M; Santi, P; Resta, G; Strogatz, S H; Ratti, C

    2018-05-01

    Information and communication technologies have opened the way to new solutions for urban mobility that provide better ways to match individuals with on-demand vehicles. However, a fundamental unsolved problem is how best to size and operate a fleet of vehicles, given a certain demand for personal mobility. Previous studies [1-5] either do not provide a scalable solution or require changes in human attitudes towards mobility. Here we provide a network-based solution to the 'minimum fleet problem': given a collection of trips (specified by origin, destination and start time), determine the minimum number of vehicles needed to serve all the trips without incurring any delay to the passengers. By introducing the notion of a 'vehicle-sharing network', we present an optimal computationally efficient solution to the problem, as well as a nearly optimal solution amenable to real-time implementation. We test both solutions on a dataset of 150 million taxi trips taken in the city of New York over one year [6]. The real-time implementation of the method with near-optimal service levels allows a 30 per cent reduction in fleet size compared to current taxi operation. Although constraints on driver availability and the existence of abnormal trip demands may lead to a relatively larger optimal value for the fleet size than that predicted here, the fleet size remains robust for a wide range of variations in historical trip demand. These predicted reductions in fleet size follow directly from a reorganization of taxi dispatching that could be implemented with a simple urban app; they do not assume ride sharing [7-9], nor require changes to regulations, business models, or human attitudes towards mobility to become effective. Our results could become even more relevant in the years ahead as fleets of networked, self-driving cars become commonplace [10-14].
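    The 'vehicle-sharing network' formulation reduces the minimum fleet problem to a minimum path cover of a directed acyclic graph, computable as the number of trips minus a maximum bipartite matching. A minimal sketch, assuming repositioning times between trips are supplied as a matrix (`connect_time` is a hypothetical input, not the paper's data structure):

```python
def min_fleet(trips, connect_time):
    """Minimum number of vehicles serving all trips with zero passenger delay.
    trips: list of (start_time, end_time); connect_time[i][j]: repositioning
    time from the end of trip i to the start of trip j."""
    n = len(trips)
    # Vehicle-sharing DAG: arc i -> j if one vehicle can serve j right after i.
    adj = [[j for j in range(n)
            if trips[i][1] + connect_time[i][j] <= trips[j][0]]
           for i in range(n)]
    match = [-1] * n  # match[j] = trip that precedes j on some vehicle's path

    def augment(i, seen):
        for j in adj[i]:
            if not seen[j]:
                seen[j] = True
                if match[j] == -1 or augment(match[j], seen):
                    match[j] = i
                    return True
        return False

    matched = sum(augment(i, [False] * n) for i in range(n))
    return n - matched  # minimum path cover of a DAG = n - maximum matching
```

    For example, three non-overlapping trips that chain together need a single vehicle, while two overlapping trips need two.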

  7. The Subseasonal Experiment (SubX) to Advance National Weather Service Predictions for Weeks 3-4

    NASA Astrophysics Data System (ADS)

    Mariotti, A.; Barrie, D.; Archambault, H. M.

    2017-12-01

    There is great practical interest in developing skillful predictions of extremes for lead times extending beyond the two-week theoretical predictability skill barrier for weather forecasts to the subseasonal-to-seasonal (S2S) time scale. The processes and phenomena specific to S2S are posited to require a unified approach to science, modeling, and predictions that draws expertise from both the weather and climate/seasonal communities. Based on this premise, in 2016, the NOAA Climate Program Office Modeling, Analysis, Predictions and Projections (MAPP) program, in partnership with the National Weather Service Office of Science and Technology Integration, launched a major research and transition initiative to meet NOAA's emerging research and transition needs for developing skillful S2S predictions. A major component of this initiative is an experiment to test single- and multi-model ensembles for subseasonal prediction, called the Subseasonal Experiment (SubX). SubX, which engages six modeling groups, is producing real time experimental forecasts based on weather, climate, and Earth system models for weeks 3-4. The project investigators are evaluating, testing, and optimizing this system, and the hindcast and real time forecast data are available to the broad community. SubX research is targeted at a number of important decision-making contexts including drought and extremes, as well as the broad variety of phenomena that are meaningful at subseasonal timescales (e.g., MJO, ENSO, stratosphere/troposphere coupling, etc.). This presentation will discuss the design and status of SubX in the broader context of MAPP program S2S prediction research.

  8. Regression modeling and prediction of road sweeping brush load characteristics from finite element analysis and experimental results.

    PubMed

    Wang, Chong; Sun, Qun; Wahab, Magd Abdel; Zhang, Xingyu; Xu, Limin

    2015-09-01

    Rotary cup brushes mounted on each side of a road sweeper undertake heavy debris removal tasks, but their characteristics have not been well understood until recently. A Finite Element (FE) model that can analyze brush deformation and predict brush characteristics has been developed to investigate sweeping efficiency and to assist controller design. However, the FE model requires a large amount of CPU time to simulate each brush design and operating scenario, which may limit its application in a real-time system. This study develops a mathematical regression model to summarize the FE modeled results. The complex brush load characteristic curves were statistically analyzed to quantify the effects of cross-section, length, mounting angle, displacement, rotational speed, etc. The data were then fitted by a multiple variable regression model using the maximum likelihood method. The fitted results showed good agreement with the FE analysis results and experimental results, suggesting that the mathematical regression model may be used directly in a real-time system to predict the characteristics of different brushes under varying operating conditions. The methodology may also be used in the design and optimization of rotary brush tools. Copyright © 2015 Elsevier Ltd. All rights reserved.
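    Under the usual Gaussian-error assumption, fitting a multiple variable regression by maximum likelihood coincides with ordinary least squares. A sketch on synthetic data; the predictor set and coefficients below are illustrative, not the paper's fitted brush-load model:

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative predictors: bristle length, mounting angle, penetration, speed.
X = rng.uniform([0.2, 10, 0.005, 50], [0.4, 30, 0.030, 150], size=(200, 4))
true_beta = np.array([120.0, 1.5, 800.0, 0.2])
y = 5.0 + X @ true_beta + rng.normal(0, 0.5, 200)   # brush load, arbitrary units

# Maximum-likelihood estimate under Gaussian errors == ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])           # prepend intercept column
beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

    The fitted coefficients can then be evaluated in microseconds per query, which is what makes the regression surrogate usable in a real-time controller.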

  9. Coupling of EIT with computational lung modeling for predicting patient-specific ventilatory responses.

    PubMed

    Roth, Christian J; Becher, Tobias; Frerichs, Inéz; Weiler, Norbert; Wall, Wolfgang A

    2017-04-01

    Providing optimal personalized mechanical ventilation for patients with acute or chronic respiratory failure is still a challenge within a clinical setting for each case anew. In this article, we integrate electrical impedance tomography (EIT) monitoring into a powerful patient-specific computational lung model to create an approach for personalizing protective ventilatory treatment. The underlying computational lung model is based on a single computed tomography scan and able to predict global airflow quantities, as well as local tissue aeration and strains for any ventilation maneuver. For validation, a novel "virtual EIT" module is added to our computational lung model, allowing us to simulate EIT images based on the patient's thorax geometry and the results of our numerically predicted tissue aeration. Clinically measured EIT images are not used to calibrate the computational model. Thus they provide an independent method to validate the computational predictions at high temporal resolution. The performance of this coupling approach has been tested in an example patient with acute respiratory distress syndrome. The method shows good agreement between computationally predicted and clinically measured airflow data and EIT images. These results imply that the proposed framework can be used for numerical prediction of patient-specific responses to certain therapeutic measures before applying them to an actual patient. In the long run, definition of patient-specific optimal ventilation protocols might be assisted by computational modeling. NEW & NOTEWORTHY In this work, we present a patient-specific computational lung model that is able to predict global and local ventilatory quantities for a given patient and any selected ventilation protocol. For the first time, such a predictive lung model is equipped with a virtual electrical impedance tomography module allowing real-time validation of the computed results against the patient measurements.
First promising results obtained in an acute respiratory distress syndrome patient show the potential of this approach for personalized computationally guided optimization of mechanical ventilation in future. Copyright © 2017 the American Physiological Society.

  10. Characterizing metabolic pathway diversification in the context of perturbation size.

    PubMed

    Yang, Laurence; Srinivasan, Shyamsundhar; Mahadevan, Radhakrishnan; Cluett, William R

    2015-03-01

    Cell metabolism is an important platform for sustainable biofuel, chemical and pharmaceutical production, but its complexity presents a major challenge for scientists and engineers. Although in silico strains have been designed in the past with predicted performances near the theoretical maximum, real-world performance is often sub-optimal. Here, we simulate how strain performance is impacted when subjected to many randomly varying perturbations, including discrepancies between gene expression and in vivo flux, osmotic stress, and substrate uptake perturbations due to concentration gradients in bioreactors. This computational study asks whether robust performance can be achieved by adopting robustness-enhancing mechanisms from naturally evolved organisms, in particular redundancy. Our study shows that redundancy, typically perceived as a ubiquitous robustness-enhancing strategy in nature, can either improve or undermine robustness depending on the magnitude of the perturbations. We also show that the optimal number of redundant pathways used can be predicted for a given perturbation size. Copyright © 2015. Published by Elsevier Inc.

  11. Prediction of far-field wind turbine noise propagation with parabolic equation.

    PubMed

    Lee, Seongkyu; Lee, Dongjai; Honhoff, Saskia

    2016-08-01

    Sound propagation from wind farms is typically simulated with engineering tools that neglect some atmospheric conditions and terrain effects. Wind and temperature profiles, however, can affect the propagation of sound and thus the perceived sound in the far field. A better understanding and application of those effects would allow farm operation to be better optimized towards meeting noise regulations and maximizing energy yield. This paper presents the parabolic equation (PE) model development for accurate wind turbine noise propagation. The model is validated against analytic solutions for a uniform sound speed profile, benchmark problems for nonuniform sound speed profiles, and field sound test data for real environmental acoustics. It is shown that PE provides good agreement with the measured data, except in upwind propagation cases in which turbulence scattering is important. Finally, the PE model uses computational fluid dynamics results as input to accurately predict sound propagation for complex flows such as wake flows. It is demonstrated that wake flows significantly modify the sound propagation characteristics.

  12. A study of power cycles using supercritical carbon dioxide as the working fluid

    NASA Astrophysics Data System (ADS)

    Schroder, Andrew Urban

    A real fluid heat engine power cycle analysis code has been developed for analyzing the zero dimensional performance of a general recuperated, recompression, precompression supercritical carbon dioxide power cycle with reheat and a unique shaft configuration. With the proposed shaft configuration, several smaller compressor-turbine pairs could be placed inside of a pressure vessel in order to avoid high speed, high pressure rotating seals. The small compressor-turbine pairs would share some resemblance with a turbocharger assembly. Variation in fluid properties within the heat exchangers is taken into account by discretizing zero dimensional heat exchangers. The cycle analysis code allows for multiple reheat stages, as well as an option for the main compressor to be powered by a dedicated turbine or an electrical motor. Variation in performance with respect to design heat exchanger pressure drops and minimum temperature differences, precompressor pressure ratio, main compressor pressure ratio, recompression mass fraction, main compressor inlet pressure, and low temperature recuperator mass fraction has been explored throughout a range of each design parameter. Turbomachinery isentropic efficiencies are implemented and the sensitivity of the cycle performance and the optimal design parameters is explored. Sensitivity of the cycle performance and optimal design parameters is studied with respect to the minimum heat rejection temperature and the maximum heat addition temperature. A hybrid stochastic and gradient based optimization technique has been used to optimize critical design parameters for maximum engine thermal efficiency. A parallel design exploration mode was also developed in order to rapidly conduct the parameter sweeps in this design space exploration. A cycle thermal efficiency of 49.6% is predicted with a 320K [47°C] minimum temperature and 923K [650°C] maximum temperature.
The real fluid heat engine power cycle analysis code was expanded to study a theoretical recuperated Lenoir cycle using supercritical carbon dioxide as the working fluid. The real fluid cycle analysis code was also enhanced to study a combined cycle engine cascade. Two engine cascade configurations were studied. The first consisted of a traditional open loop gas turbine, coupled with a series of recuperated, recompression, precompression supercritical carbon dioxide power cycles, with a predicted combined cycle thermal efficiency of 65.0% using a peak temperature of 1,890K [1,617°C]. The second configuration consisted of a hybrid natural gas powered solid oxide fuel cell and gas turbine, coupled with a series of recuperated, recompression, precompression supercritical carbon dioxide power cycles, with a predicted combined cycle thermal efficiency of 73.1%. Both configurations had a minimum temperature of 306K [33°C]. The hybrid stochastic and gradient based optimization technique was used to optimize all engine design parameters for each engine in the cascade such that the entire engine cascade achieved the maximum thermal efficiency. The parallel design exploration mode was also utilized in order to understand the impact of different design parameters on the overall engine cascade thermal efficiency. Two dimensional conjugate heat transfer (CHT) numerical simulations of a straight, equal height channel heat exchanger using supercritical carbon dioxide were conducted at various Reynolds numbers and channel lengths.

  13. Exchange inlet optimization by genetic algorithm for improved RBCC performance

    NASA Astrophysics Data System (ADS)

    Chorkawy, G.; Etele, J.

    2017-09-01

    A genetic algorithm based on real parameter representation using a variable selection pressure and variable probability of mutation is used to optimize an annular air breathing rocket inlet called the Exchange Inlet. A rapid and accurate design method which provides estimates for air breathing, mixing, and isentropic flow performance is used as the engine of the optimization routine. Comparison to detailed numerical simulations shows that the design method yields desired exit Mach numbers to within approximately 1% over 75% of the annular exit area and predicts entrained air mass flows to between 1% and 9% of numerically simulated values depending on the flight condition. Optimum designs are shown to be obtained within approximately 8000 fitness function evaluations in a search space on the order of 10^6. The method is also shown to be able to identify beneficial values for particular alleles when they exist while showing the ability to handle cases where physical and aphysical designs co-exist at particular values of a subset of alleles within a gene. For an air breathing engine based on a hydrogen fuelled rocket an exchange inlet is designed which yields a predicted air entrainment ratio within 95% of the theoretical maximum.
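    A generic real-parameter GA with a tunable rank-based selection pressure and a per-gene mutation probability can be sketched as follows, minimizing a sphere function. This is a textbook variant for illustration, not the exact operators of the optimization routine described above:

```python
import random

def ga_minimize(f, dim, bounds, pop=40, gens=120, p_mut=0.1, pressure=1.8, seed=1):
    """Real-parameter GA: linear ranking selection (strength set by `pressure`
    in (1, 2]), blend crossover, per-gene Gaussian mutation, one elite."""
    rng = random.Random(seed)
    lo, hi = bounds
    P = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=f)
        # Rank weights: best individual weighted `pressure`, worst 2 - pressure.
        w = [pressure - 2 * (pressure - 1) * r / (pop - 1) for r in range(pop)]

        def pick():
            return P[rng.choices(range(pop), weights=w)[0]]

        children = [P[0][:]]  # elitism: keep the current best
        while len(children) < pop:
            a, b = pick(), pick()
            alpha = rng.random()
            child = [alpha * x + (1 - alpha) * y for x, y in zip(a, b)]
            for k in range(dim):
                if rng.random() < p_mut:  # variable mutation probability knob
                    child[k] = min(hi, max(lo, child[k] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        P = children
    return min(P, key=f)

best = ga_minimize(lambda x: sum(v * v for v in x), dim=5, bounds=(-5, 5))
```

    Raising `pressure` toward 2 speeds convergence at the cost of diversity, which is the trade-off a variable selection pressure lets the optimizer manage during a run.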

  14. Reactive Collision Avoidance Algorithm

    NASA Technical Reports Server (NTRS)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation that passive algorithms cannot handle. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory.
For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on-line. The optimal avoidance trajectory is implemented as a receding-horizon model predictive control law. Therefore, at each time step, the optimal avoidance trajectory is found and the first time step of its acceleration is applied. At the next time step of the control computer, the problem is re-solved and the new first time step is again applied. This continual updating allows the RCA algorithm to adapt to a colliding spacecraft that is making erratic course changes.
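    In a single axis the bang-off-bang class reduces to two parameters per acceleration bound: burn time t1 and coast time t2, giving displacement a_max*t1*(t1 + t2) and fuel proportional to the total delta-v 2*a_max*t1. A simplified 1-D sketch of the offline parameter search, with a miss-distance target as an illustrative stand-in for the full evasion trajectory problem:

```python
def min_fuel_bang_off_bang(a_max, miss, horizon, n=400):
    """Smallest-fuel (t1, t2): accelerate at a_max for t1, coast for t2,
    brake at a_max for t1 (ending at rest).
    Displacement = 0.5*a*t1^2 + (a*t1)*t2 + 0.5*a*t1^2 = a_max*t1*(t1 + t2).
    Coasting costs no fuel, so each candidate t1 uses the longest coast."""
    for i in range(1, n + 1):
        t1 = (horizon / 2) * i / n
        t2 = horizon - 2 * t1                 # longest coast fitting the horizon
        if a_max * t1 * (t1 + t2) >= miss:    # first (smallest) feasible t1 wins
            return 2 * a_max * t1, t1, t2     # fuel ~ total delta-v, plus schedule
    return None

fuel, t1, t2 = min_fuel_bang_off_bang(a_max=1.0, miss=10.0, horizon=10.0)
```

    Tabulating such solutions over the collision-geometry parameters is what lets the flight code replace online optimization with a table lookup.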

  15. Singular perturbation analysis of AOTV-related trajectory optimization problems

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Bae, Gyoung H.

    1990-01-01

    The problem of real time guidance and optimal control of Aeroassisted Orbit Transfer Vehicles (AOTV's) was addressed using singular perturbation theory as an underlying method of analysis. Trajectories were optimized with the objective of minimum energy expenditure in the atmospheric phase of the maneuver. Two major problem areas were addressed: optimal reentry, and synergetic plane change with aeroglide. For the reentry problem, several reduced order models were analyzed with the objective of optimal changes in heading with minimum energy loss. It was demonstrated that a further model order reduction to a single state model is possible through the application of singular perturbation theory. The optimal solution for the reduced problem defines an optimal altitude profile dependent on the current energy level of the vehicle. A separate boundary layer analysis is used to account for altitude and flight path angle dynamics, and to obtain lift and bank angle control solutions. By considering alternative approximations to solve the boundary layer problem, three guidance laws were derived, each having an analytic feedback form. The guidance laws were evaluated using a Maneuvering Reentry Research Vehicle model and all three laws were found to be near optimal. For the problem of synergetic plane change with aeroglide, a difficult terminal boundary layer control problem arises which to date is found to be analytically intractable. Thus a predictive/corrective solution was developed to satisfy the terminal constraints on altitude and flight path angle. A composite guidance solution was obtained by combining the optimal reentry solution with the predictive/corrective guidance method. Numerical comparisons with the corresponding optimal trajectory solutions show that the resulting performance is very close to optimal. An attempt was made to obtain numerically optimized trajectories for the case where heating rate is constrained. 
A first order state variable inequality constraint was imposed on the full order AOTV point mass equations of motion, using a simple aerodynamic heating rate model.

  16. Predicting Sets and Lists: Theory and Practice

    DTIC Science & Technology

    2015-01-01

    Indexed text excerpts: "No work stands in isolation and this work would not have been possible without my co-authors: 'Contextual Optimization of Lists': Tommy Liu...". Hardware noted: Microstrain 3DM-GX3-25 IMU; PlayStation Eye cameras (640x480 @ 30Hz); onboard ARM-based Linux computer. "In addition to the IMU integrated in the Ardupilot unit, we added a Microstrain 3DM-GX3-25 IMU which is used to aid real-time pose estimation."

  17. Genome-Wide Chromosomal Targets of Oncogenic Transcription Factors

    DTIC Science & Technology

    2008-04-01

    Figure-caption excerpts: (a) comparison between STAGE and ChIP-chip when the same sample was analyzed by both methods; the gray line indicates all predicted STAGE targets. Numbers of single-hit tags (y-axis) were plotted against the frequencies of those tags in the random (gray bars) and experimental (black bars) tag pools. A window size of 500 bp gave an optimal separation between random and real data; data shown are for a window size of 500 bp. The gray bars indicate log10 of the...

  18. Optimizing Wastewater Reuse in Agricultural Fields via Merging of Embedded Network Sensor Data and Flow and Transport Models Using Data Assimilation

    NASA Astrophysics Data System (ADS)

    Wu, C.; Margulis, S. A.

    2007-12-01

    Wastewater re-use via crop irrigation has the potential to be an effective means of wastewater disposal. However, nitrate in wastewater may contaminate groundwater if it does not decay before reaching the groundwater table. In order to dispose of wastewater while preventing long-term groundwater pollution, irrigation rates need to be optimized based on the current and predicted states of the soil, such as soil moisture content and/or nitrate concentration. A real-time soil state estimation system using the Ensemble Kalman Filter (EnKF) has been developed for application to a test bed for wastewater re-use in Palmdale, CA. This test bed, covered with alfalfa, is a 30-acre irrigation plot with a 200-meter long rotating pivot arm that irrigates the area with reclaimed wastewater. A sensor network is deployed in the soil near the surface. The data assimilation system has shown the ability to characterize soil states and fluxes from sparse measurements. The real-time estimation system will then be used to explore the potential feedback for optimizing the sprinkler operation (i.e. maximizing the magnitude of wastewater release while minimizing the ultimate groundwater pollution). In optimization models, soil states and fluxes can be regarded as functions of irrigation rate. Through optimization, the irrigation rate in a finite horizon can be maximized while still satisfying all criteria in soil states and fluxes to ensure the safety of groundwater. Since the data assimilation system provides reliable estimation of soil states and fluxes, it is expected to define the optimal irrigation rate with higher confidence compared to using models or sensors only.
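    The EnKF analysis step that fuses sensor data into the model states can be sketched as follows; the three-layer soil column, ensemble size, and noise levels are illustrative numbers, not the Palmdale configuration:

```python
import numpy as np

rng = np.random.default_rng(2)
# Forecast ensemble of soil moisture profiles (3 layers, 50 members).
ens = rng.normal([0.30, 0.25, 0.20], 0.05, size=(50, 3))
H = np.array([[1.0, 0.0, 0.0]])    # the sensor observes only the top layer
obs, r = 0.36, 0.01 ** 2           # observed top-layer moisture, error variance

X = ens - ens.mean(axis=0)                       # ensemble anomalies
P = X.T @ X / (len(ens) - 1)                     # sample covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + r)     # Kalman gain
y = obs + rng.normal(0, 0.01, size=(50, 1))      # perturbed observations
analysis = ens + (y - ens @ H.T) @ K.T           # analysis ensemble
```

    The gain spreads the surface observation to the unobserved layers through the ensemble covariance, which is how sparse sensors can constrain the full soil state.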

  19. Combination of acoustical radiosity and the image source method.

    PubMed

    Koutsouris, Georgios I; Brunskog, Jonas; Jeong, Cheol-Ho; Jacobsen, Finn

    2013-06-01

    A combined model for room acoustic predictions is developed, aiming to treat both diffuse and specular reflections in a unified way. Two established methods are incorporated: acoustical radiosity, accounting for the diffuse part, and the image source method, accounting for the specular part. The model is based on conservation of acoustical energy. Losses are taken into account by the energy absorption coefficient, and the diffuse reflections are controlled via the scattering coefficient, which defines the portion of energy that has been diffusely reflected. The way the model is formulated allows for a dynamic control of the image source production, so that no fixed maximum reflection order is required. The model is optimized for energy impulse response predictions in arbitrary polyhedral rooms. The predictions are validated by comparison with published measured data for a real music studio hall. The proposed model turns out to be promising for acoustic predictions providing a high level of detail and accuracy.
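    The image source half of such a combined model mirrors the source across each wall to generate specular reflection paths. A sketch computing the six first-order image sources and their arrival delays in a rectangular room (the geometry and speed of sound are illustrative):

```python
import math

def first_order_images(src, room):
    """src = (x, y, z); room = (Lx, Ly, Lz) with walls at 0 and L on each axis.
    Returns the six first-order image source positions (one per wall)."""
    images = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = list(src)
            img[axis] = 2 * wall - src[axis]   # mirror across the wall plane
            images.append(tuple(img))
    return images

def arrival_delays(src, rcv, room, c=343.0):
    """Propagation delays (s) of the direct sound and first-order reflections."""
    paths = [src] + first_order_images(src, room)
    return [math.dist(p, rcv) / c for p in paths]
```

    In the combined model each image-source contribution would additionally be scaled by (1 - absorption) and (1 - scattering) per reflection, with the scattered portion handed to the radiosity part.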

  20. Improving Mid-Course Flight Through an Application of Real-Time Optimal Control

    DTIC Science & Technology

    2017-12-01

    COURSE FLIGHT THROUGH AN APPLICATION OF REAL- TIME OPTIMAL CONTROL by Mark R. Roncoroni December 2017 Thesis Advisor: Ronald Proulx Co...collection of information is estimated to average 1 hour per response, including the time for reviewing instruction, searching existing data sources...AND DATES COVERED Master’s thesis 4. TITLE AND SUBTITLE IMPROVING MID-COURSE FLIGHT THROUGH AN APPLICATION OF REAL- TIME OPTIMAL CONTROL 5. FUNDING

  1. Implicit methods for efficient musculoskeletal simulation and optimal control

    PubMed Central

    van den Bogert, Antonie J.; Blana, Dimitra; Heinrich, Dieter

    2011-01-01

    The ordinary differential equations for musculoskeletal dynamics are often numerically stiff and highly nonlinear. Consequently, simulations require small time steps, and optimal control problems are slow to solve and have poor convergence. In this paper, we present an implicit formulation of musculoskeletal dynamics, which leads to new numerical methods for simulation and optimal control, with the expectation that we can mitigate some of these problems. A first order Rosenbrock method was developed for solving forward dynamic problems using the implicit formulation. It was used to perform real-time dynamic simulation of a complex shoulder-arm system with extreme dynamic stiffness. Simulations had an RMS error of only 0.11 degrees in joint angles when running at real-time speed. For optimal control of musculoskeletal systems, a direct collocation method was developed for implicitly formulated models. The method was applied to predict gait with a prosthetic foot and ankle. Solutions were obtained in well under one hour of computation time and demonstrated how patients may adapt their gait to compensate for limitations of a specific prosthetic limb design. The optimal control method was also applied to a state estimation problem in sports biomechanics, where forces during skiing were estimated from noisy and incomplete kinematic data. Using a full musculoskeletal dynamics model for state estimation had the additional advantage that forward dynamic simulations could be done with the same implicitly formulated model to simulate injuries and perturbation responses. While these methods are powerful and allow solution of previously intractable problems, there are still considerable numerical challenges, especially related to the convergence of gradient-based solvers. PMID:22102983
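    The benefit of the implicit formulation can be seen on a stiff scalar test equation: a one-stage Rosenbrock step (a linearized implicit Euler) remains stable at step sizes far beyond explicit Euler's stability limit. A minimal sketch, with the scalar equation standing in for the stiff musculoskeletal dynamics:

```python
def rosenbrock1(f, dfdy, y, h):
    # One-stage Rosenbrock: y+ = y + h * (1 - h*J)^(-1) * f(y), with J = df/dy.
    return y + h * f(y) / (1.0 - h * dfdy(y))

def explicit_euler(f, y, h):
    return y + h * f(y)

lam = -1000.0                       # stiff decay rate of the test equation y' = lam*y
f = lambda y: lam * y
J = lambda y: lam
h = 0.01                            # far above explicit Euler's limit of 2/|lam|
y_imp = y_exp = 1.0
for _ in range(100):
    y_imp = rosenbrock1(f, J, y_imp, h)     # decays toward 0, as the true solution does
    y_exp = explicit_euler(f, y_exp, h)     # oscillates and blows up
```

    For the linear test equation the Rosenbrock step amounts to multiplying by 1/(1 - h*lam) per step, which is stable for any h > 0 when lam < 0.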

  2. The development of a Kalman filter clock predictor

    NASA Technical Reports Server (NTRS)

    Davis, John A.; Greenhall, Charles A.; Boudjemaa, Redoane

    2005-01-01

    A Kalman filter based clock predictor is developed, and its performance evaluated using both simulated and real data. The clock predictor is shown to possess a near-optimal Prediction Error Variance (PEV) when the underlying noise consists of one of the power law noise processes commonly encountered in time and frequency measurements. The predictor's performance in the presence of multiple noise processes is also examined. The relationship between the PEV obtained in the presence of multiple noise processes and those obtained for the individual component noise processes is examined. Comparisons are made with a simple linear clock predictor. The clock predictor is used to predict future values of the time offset between pairs of NPL's active hydrogen masers.
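    A minimal two-state clock predictor (time offset and fractional frequency) can be sketched as follows; the noise variances and the white-FM process-noise form are illustrative choices, not NPL's tuned values:

```python
import numpy as np

def clock_kf_predict(z, dt, q=1e-22, r=1e-18, ahead=10):
    """Run a Kalman filter over offset measurements z (two-state clock model),
    then predict the offset `ahead` steps past the last measurement."""
    F = np.array([[1.0, dt], [0.0, 1.0]])               # state transition
    H = np.array([[1.0, 0.0]])                          # we measure offset only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])                 # white-FM process noise
    x, P = np.zeros(2), np.eye(2)
    for zk in z:
        x, P = F @ x, F @ P @ F.T + Q                   # time update
        S = (H @ P @ H.T).item() + r
        K = P @ H.T / S                                 # Kalman gain (2x1)
        x = x + K.ravel() * (zk - (H @ x).item())       # measurement update
        P = (np.eye(2) - K @ H) @ P
    for _ in range(ahead):                              # propagate to predict
        x = F @ x
    return x[0]

# Noiseless ramp: offset 1e-6 s growing at 1e-9 s/s, sampled every second.
z = [1e-6 + 1e-9 * k for k in range(1, 201)]
pred = clock_kf_predict(z, dt=1.0)
```

    On noiseless data the filter locks onto the ramp, so the 10-step-ahead prediction recovers the extrapolated offset; with power-law noise added, the same structure yields the near-optimal PEV behavior discussed above.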

  3. A Blocked Linear Method for Optimizing Large Parameter Sets in Variational Monte Carlo

    DOE PAGES

    Zhao, Luning; Neuscamman, Eric

    2017-05-17

    We present a modification to variational Monte Carlo's linear method optimization scheme that addresses a critical memory bottleneck while maintaining compatibility with both the traditional ground state variational principle and our recently introduced variational principle for excited states. For wave function ansatzes with tens of thousands of variables, our modification reduces the required memory per parallel process from tens of gigabytes to hundreds of megabytes, making the methodology a much better fit for modern supercomputer architectures in which data communication and per-process memory consumption are primary concerns. We verify the efficacy of the new optimization scheme in small molecule tests involving both the Hilbert space Jastrow antisymmetric geminal power ansatz and real space multi-Slater Jastrow expansions. Satisfied with its performance, we have added the optimizer to the QMCPACK software package, with which we demonstrate on a hydrogen ring a prototype approach for making systematically convergent, non-perturbative predictions of Mott insulators' optical band gaps.

  4. Optimized Structure of the Traffic Flow Forecasting Model With a Deep Learning Approach.

    PubMed

    Yang, Hao-Fan; Dillon, Tharam S; Chen, Yi-Ping Phoebe

    2017-10-01

    Forecasting accuracy is an important issue for successful intelligent traffic management, especially in the domain of traffic efficiency and congestion reduction. The dawning of the big data era brings opportunities to greatly improve prediction accuracy. In this paper, we propose a novel model, the stacked autoencoder Levenberg-Marquardt model, which is a deep neural network architecture aimed at improving forecasting accuracy. The proposed model is designed using the Taguchi method to develop an optimized structure and to learn traffic flow features through layer-by-layer feature granulation with a greedy layerwise unsupervised learning algorithm. It is applied to real-world data collected from the M6 freeway in the U.K. and is compared with three existing traffic predictors. To the best of our knowledge, this is the first time that an optimized structure of the traffic flow forecasting model with a deep learning approach is presented. The evaluation results demonstrate that the proposed model with an optimized structure has superior performance in traffic flow forecasting.

  5. Optimal SVM parameter selection for non-separable and unbalanced datasets.

    PubMed

    Jiang, Peng; Missoum, Samy; Chen, Zhao

    2014-10-01

    This article presents a study of three validation metrics used for the selection of optimal parameters of a support vector machine (SVM) classifier in the case of non-separable and unbalanced datasets. This situation is often encountered when the data is obtained experimentally or clinically. The three metrics selected in this work are the area under the ROC curve (AUC), accuracy, and balanced accuracy. These validation metrics are tested using computational data only, which enables the creation of fully separable sets of data. This way, non-separable datasets, representative of a real-world problem, can be created by projection onto a lower dimensional sub-space. The knowledge of the separable dataset, unknown in real-world problems, provides a reference to compare the three validation metrics using a quantity referred to as the "weighted likelihood". As an application example, the study investigates a classification model for hip fracture prediction. The data is obtained from a parameterized finite element model of a femur. The performance of the various validation metrics is studied for several levels of separability, ratios of unbalance, and training set sizes.
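    The three validation metrics can be written down directly; the all-negative toy classifier below shows why plain accuracy can mislead on unbalanced data (the numbers are illustrative, not the femur dataset):

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum interpretation:
    the probability that a random positive outscores a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def accuracy(pred, labels):
    return sum(p == y for p, y in zip(pred, labels)) / len(labels)

def balanced_accuracy(pred, labels):
    tpr = sum(p == 1 for p, y in zip(pred, labels) if y == 1) / labels.count(1)
    tnr = sum(p == 0 for p, y in zip(pred, labels) if y == 0) / labels.count(0)
    return (tpr + tnr) / 2

# Unbalanced example: 8 negatives, 2 positives; classifier predicts all-negative.
labels = [0] * 8 + [1] * 2
pred = [0] * 10
scores = [0.1] * 8 + [0.9, 0.8]   # a ranking that separates classes perfectly
```

    Here accuracy rewards the useless all-negative predictor (0.8), while balanced accuracy exposes it (0.5); AUC, which depends only on the score ranking, is unaffected by the class imbalance.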

  6. Closed-Loop Optimal Control Implementations for Space Applications

    DTIC Science & Technology

    2016-12-01

Through the analyses of a series of optimal control problems, several real-time optimal control algorithms are developed that continuously adapt to feedback on the...

  7. Prediction of muscle performance during dynamic repetitive movement

    NASA Technical Reports Server (NTRS)

    Byerly, D. L.; Byerly, K. A.; Sognier, M. A.; Squires, W. G.

    2003-01-01

    BACKGROUND: During long-duration spaceflight, astronauts experience progressive muscle atrophy and often perform strenuous extravehicular activities. Post-flight, there is a lengthy recovery period with an increased risk for injury. Currently, there is a critical need for an enabling tool to optimize muscle performance and to minimize the risk of injury to astronauts while on-orbit and during post-flight recovery. Consequently, these studies were performed to develop a method to address this need. METHODS: Eight test subjects performed a repetitive dynamic exercise to failure at 65% of their upper torso weight using a Lordex spinal machine. Surface electromyography (SEMG) data was collected from the erector spinae back muscle. The SEMG data was evaluated using a 5th order autoregressive (AR) model and linear regression analysis. RESULTS: The best predictor found was an AR parameter, the mean average magnitude of AR poles, with r = 0.75 and p = 0.03. This parameter can predict performance to failure as early as the second repetition of the exercise. CONCLUSION: A method for predicting human muscle performance early during dynamic repetitive exercise was developed. The capability to predict performance to failure has many potential applications to the space program including evaluating countermeasure effectiveness on-orbit, optimizing post-flight recovery, and potential future real-time monitoring capability during extravehicular activity.
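The AR-pole predictor described above can be sketched with a generic least-squares AR fit (not the authors' implementation; the function and variable names are illustrative):

```python
import numpy as np

def ar_mean_pole_magnitude(x, order=5):
    """Fit an AR(order) model by least squares and return the mean magnitude of its poles."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Lag matrix: column k holds x[t-k] for t = order .. n-1
    X = np.column_stack([x[order - k : n - k] for k in range(1, order + 1)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    # Poles are the roots of z^order - a1*z^(order-1) - ... - a_order
    poles = np.roots(np.concatenate(([1.0], -a)))
    return float(np.abs(poles).mean())
```

Applied to successive exercise repetitions of the SEMG signal, a drift in this statistic is what the study correlates with time to failure.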

  8. VLBI real-time analysis by Kalman Filtering

    NASA Astrophysics Data System (ADS)

    Karbon, M.; Nilsson, T.; Soja, B.; Heinkelmann, R.; Raposo-Pulido, V.; Schuh, H.

    2013-12-01

Geodetic Very Long Baseline Interferometry (VLBI) is one of the primary space geodetic techniques providing the full set of Earth Orientation Parameters (EOP) and is unique for observing long-term Universal Time (UT1) and precession/nutation. Accurate and continuous EOP obtained in near real-time are essential for satellite-based navigation and positioning and for enabling the precise tracking of interplanetary spacecraft. To meet this necessity the International VLBI Service for Geodesy and Astrometry (IVS) increased its efforts to reduce the time span between the VLBI observations and the availability of the final results. Currently the timeliness is about two weeks, but the goal is to reduce it to less than one day with the future VGOS (VLBI2010 Global Observing System) network. The FWF project VLBI-ART contributes to this new generation VLBI system by considerably accelerating the VLBI analysis procedure through the implementation of an elaborate Kalman filter. This true real-time Kalman filter will be embedded in the Vienna VLBI Software (VieVS) as a completely automated tool with no need of human interaction. This filter also allows the prediction and combination of EOP from various space geodetic techniques by implementing stochastic models to statistically account for unpredictable changes in EOP. Additionally, atmospheric angular momenta calculated from numerical weather prediction models are introduced to support the short-term EOP prediction. To optimize the performance of the new software various investigations with real as well as simulated data are foreseen. The results are compared to the ones obtained by conventional VLBI parameter estimation methods (e.g. the least squares method) and to corresponding parameter series from other techniques, such as from the Global Navigation Satellite Systems (GNSS).
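The predict/update mechanics of such a filter can be illustrated with a scalar random-walk sketch (a textbook Kalman update, not the VieVS implementation; the process noise q and measurement variance r are illustrative values):

```python
def kalman_step(x, P, z, q, r):
    """One predict/update cycle for a random-walk state observed directly."""
    P = P + q            # predict: state unchanged, uncertainty grows by q
    K = P / (P + r)      # Kalman gain
    x = x + K * (z - x)  # update: move toward the observation z
    P = (1 - K) * P
    return x, P

# Filtering repeated observations of a constant drives the estimate to it.
x, P = 0.0, 1.0
for _ in range(100):
    x, P = kalman_step(x, P, 5.0, q=1e-4, r=0.1)
```

In the EOP context the state vector would collect Earth orientation parameters (with stochastic models for their variation) and the observations would be VLBI delays; the scalar form only shows the recursion that makes real-time operation possible.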

  9. [Comparison of two quantitative methods of endobronchial ultrasound real-time elastography for evaluating intrathoracic lymph nodes].

    PubMed

    Mao, X W; Yang, J Y; Zheng, X X; Wang, L; Zhu, L; Li, Y; Xiong, H K; Sun, J Y

    2017-06-12

Objective: To compare the clinical value of two quantitative methods in analyzing endobronchial ultrasound real-time elastography (EBUS-RTE) images for evaluating intrathoracic lymph nodes. Methods: From January 2014 to April 2014, EBUS-RTE examination was performed in patients who received EBUS-TBNA examination in Shanghai Chest Hospital. Each intrathoracic lymph node had a selected EBUS-RTE image. The stiff area ratio and mean hue value of the region of interest (ROI) in each image were calculated respectively. The final diagnosis of each lymph node was based on the pathologic/microbiologic results of EBUS-TBNA, pathologic/microbiologic results of other examinations, and clinical follow-up. The sensitivity, specificity, positive predictive value, negative predictive value and accuracy were evaluated for distinguishing malignant and benign lesions. Results: Fifty-six patients and 68 lymph nodes were enrolled in this study, of which 35 lymph nodes were malignant and 33 were benign. The stiff area ratio and mean hue value of benign and malignant lesions were 0.32±0.29 vs 0.62±0.20 and 109.99±28.13 vs 141.62±17.52, respectively, with statistically significant differences for both methods (t = -5.14, P < 0.01; t = -5.53, P < 0.01). The areas under the curve were 0.813 and 0.814 for the stiff area ratio and mean hue value, respectively. The optimal diagnostic cut-off value of the stiff area ratio was 0.48, with sensitivity, specificity, positive predictive value, negative predictive value and accuracy of 82.86%, 81.82%, 82.86%, 81.82% and 82.35%, respectively. The optimal diagnostic cut-off value of the mean hue value was 126.28, with sensitivity, specificity, positive predictive value, negative predictive value and accuracy of 85.71%, 75.76%, 78.95%, 83.33% and 80.88%, respectively. Conclusion: Both the stiff area ratio and mean hue value can be used to analyze EBUS-RTE images quantitatively and have value in differentiating benign from malignant intrathoracic lymph nodes, with the stiff area ratio performing better of the two.
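One common way to choose such an optimal diagnostic cut-off is Youden's J statistic; the record does not state which criterion the authors used, so this sketch is an assumption:

```python
def youden_cutoff(values, labels):
    """Return (cutoff, J) maximizing J = sensitivity + specificity - 1.

    labels: 1 = malignant, 0 = benign; higher values are assumed to
    indicate malignancy, with `value >= cutoff` classified as malignant.
    """
    pos = [v for v, l in zip(values, labels) if l == 1]
    neg = [v for v, l in zip(values, labels) if l == 0]
    best_c, best_j = None, -1.0
    for c in sorted(set(values)):          # candidate cutoffs: observed values
        sens = sum(v >= c for v in pos) / len(pos)
        spec = sum(v < c for v in neg) / len(neg)
        j = sens + spec - 1
        if j > best_j:
            best_c, best_j = c, j
    return best_c, best_j
```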

  10. Predicting Visual Distraction Using Driving Performance Data

    PubMed Central

    Kircher, Katja; Ahlstrom, Christer

    2010-01-01

Behavioral variables are often used as performance indicators (PIs) of visual or internal distraction induced by secondary tasks. The objective of this study is to investigate whether visual distraction can be predicted by driving performance PIs in a naturalistic setting. Visual distraction is here defined by a gaze-based real-time distraction detection algorithm called AttenD. Seven drivers used an instrumented vehicle for one month each in a small-scale field operational test. For each of the visual distraction events detected by AttenD, seven PIs such as steering wheel reversal rate and throttle hold were calculated. Corresponding data were also calculated for time periods during which the drivers were classified as attentive. For each PI, means between distracted and attentive states were compared using t-tests for different time-window sizes (2–40 s), and the window width with the smallest resulting p-value was selected as optimal. Based on the optimized PIs, logistic regression was used to predict whether the drivers were attentive or distracted. The logistic regression resulted in predictions which were 76% correct (sensitivity = 77%, specificity = 76%). The conclusion is that there is a relationship between behavioral variables and visual distraction, but the relationship is not strong enough to accurately predict visual driver distraction. Instead, behavioral PIs are probably best suited as complements to eye-tracking-based algorithms in order to make them more accurate and robust. PMID:21050615
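The per-PI window optimization can be sketched as follows. This sketch selects the window with the largest |t| rather than recomputing p-values, which gives approximately the same ordering at fixed sample sizes; the data layout and names are illustrative, not the study's code:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    return (mean(a) - mean(b)) / (variance(a) / len(a) + variance(b) / len(b)) ** 0.5

def best_window(pi_by_window):
    """pi_by_window: {window_s: (distracted_values, attentive_values)}.

    Returns the window size with the strongest distracted/attentive separation.
    """
    return max(pi_by_window, key=lambda w: abs(welch_t(*pi_by_window[w])))
```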

  11. 3D Protein structure prediction with genetic tabu search algorithm

    PubMed Central

    2010-01-01

    Background Protein structure prediction (PSP) has important applications in different fields, such as drug design, disease prediction, and so on. In protein structure prediction, there are two important issues. The first one is the design of the structure model and the second one is the design of the optimization technology. Because of the complexity of the realistic protein structure, the structure model adopted in this paper is a simplified model, which is called off-lattice AB model. After the structure model is assumed, optimization technology is needed for searching the best conformation of a protein sequence based on the assumed structure model. However, PSP is an NP-hard problem even if the simplest model is assumed. Thus, many algorithms have been developed to solve the global optimization problem. In this paper, a hybrid algorithm, which combines genetic algorithm (GA) and tabu search (TS) algorithm, is developed to complete this task. Results In order to develop an efficient optimization algorithm, several improved strategies are developed for the proposed genetic tabu search algorithm. The combined use of these strategies can improve the efficiency of the algorithm. In these strategies, tabu search introduced into the crossover and mutation operators can improve the local search capability, the adoption of variable population size strategy can maintain the diversity of the population, and the ranking selection strategy can improve the possibility of an individual with low energy value entering into next generation. Experiments are performed with Fibonacci sequences and real protein sequences. Experimental results show that the lowest energy obtained by the proposed GATS algorithm is lower than that obtained by previous methods. Conclusions The hybrid algorithm has the advantages from both genetic algorithm and tabu search algorithm. 
It makes use of the advantage of multiple search points in genetic algorithm, and can overcome poor hill-climbing capability in the conventional genetic algorithm by using the flexible memory functions of TS. Compared with some previous algorithms, GATS algorithm has better performance in global optimization and can predict 3D protein structure more effectively. PMID:20522256

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Luning; Neuscamman, Eric

We present a modification to variational Monte Carlo's linear method optimization scheme that addresses a critical memory bottleneck while maintaining compatibility with both the traditional ground state variational principle and our recently-introduced variational principle for excited states. For wave function ansatzes with tens of thousands of variables, our modification reduces the required memory per parallel process from tens of gigabytes to hundreds of megabytes, making the methodology a much better fit for modern supercomputer architectures in which data communication and per-process memory consumption are primary concerns. We verify the efficacy of the new optimization scheme in small molecule tests involving both the Hilbert space Jastrow antisymmetric geminal power ansatz and real space multi-Slater Jastrow expansions. Satisfied with its performance, we have added the optimizer to the QMCPACK software package, with which we demonstrate on a hydrogen ring a prototype approach for making systematically convergent, non-perturbative predictions of Mott-insulators' optical band gaps.

  13. Using real time traveler demand data to optimize commuter rail feeder systems.

    DOT National Transportation Integrated Search

    2012-08-01

    "This report focuses on real time optimization of the Commuter Rail Circulator Route Network Design Problem (CRCNDP). The route configuration of the circulator system where to stop and the route among the stops is determined on a real-time ba...

  14. A problem of optimal control and observation for distributed homogeneous multi-agent system

    NASA Astrophysics Data System (ADS)

    Kruglikov, Sergey V.

    2017-12-01

The paper considers the implementation of an algorithm for controlling a distributed complex of several mobile multi-robots. The concept of a unified information space of the control system is applied. The presented information and mathematical models of participants and obstacles, as real agents, and of goals and scenarios, as virtual agents, form the basis of the algorithmic and software background for a computer decision support system. The control scheme assumes indirect management of the robotic team on the basis of an optimal control and observation problem predicting intelligent behavior in a dynamic, hostile environment. A basic content problem is compound cargo transportation by a group of participants under a distributed control scheme in terrain with multiple obstacles.

  15. Integration and Assessment of Component Health Prognostics in Supervisory Control Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramuhalli, Pradeep; Bonebrake, Christopher A.; Dib, Gerges

Enhanced risk monitors (ERMs) for active components in advanced reactor concepts use predictive estimates of component failure to update, in real time, predictive safety and economic risk metrics. These metrics have been shown to be capable of use in optimizing maintenance scheduling and managing plant maintenance costs. Integrating this information with plant supervisory control systems increases the potential for making control decisions that utilize real-time information on component conditions. Such decision making would limit the possibility of plant operations that increase the likelihood of degrading the functionality of one or more components while maintaining the overall functionality of the plant. ERM uses sensor data for providing real-time information about equipment condition for deriving risk monitors. This information is used to estimate the remaining useful life and probability of failure of these components. By combining this information with plant probabilistic risk assessment models, predictive estimates of risk posed by continued plant operation in the presence of detected degradation may be estimated. In this paper, we describe this methodology in greater detail, and discuss its integration with a prototypic software-based plant supervisory control platform. In order to integrate these two technologies and evaluate the integrated system, software to simulate the sensor data was developed, prognostic models for feedwater valves were developed, and several use cases defined. The full paper will describe these use cases, and the results of the initial evaluation.

  16. Master-Leader-Slave Cuckoo Search with Parameter Control for ANN Optimization and Its Real-World Application to Water Quality Prediction

    PubMed Central

    Jaddi, Najmeh Sadat; Abdullah, Salwani; Abdul Malek, Marlinda

    2017-01-01

    Artificial neural networks (ANNs) have been employed to solve a broad variety of tasks. The selection of an ANN model with appropriate weights is important in achieving accurate results. This paper presents an optimization strategy for ANN model selection based on the cuckoo search (CS) algorithm, which is rooted in the obligate brood parasitic actions of some cuckoo species. In order to enhance the convergence ability of basic CS, some modifications are proposed. The fraction Pa of the n nests replaced by new nests is a fixed parameter in basic CS. As the selection of Pa is a challenging issue and has a direct effect on exploration and therefore on convergence ability, in this work the Pa is set to a maximum value at initialization to achieve more exploration in early iterations and it is decreased during the search to achieve more exploitation in later iterations until it reaches the minimum value in the final iteration. In addition, a novel master-leader-slave multi-population strategy is used where the slaves employ the best fitness function among all slaves, which is selected by the leader under a certain condition. This fitness function is used for subsequent Lévy flights. In each iteration a copy of the best solution of each slave is migrated to the master and then the best solution is found by the master. The method is tested on benchmark classification and time series prediction problems and the statistical analysis proves the ability of the method. This method is also applied to a real-world water quality prediction problem with promising results. PMID:28125609
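The described Pa schedule, maximal at initialization and decreasing to a minimum in the final iteration, can be sketched as an anneal (the abstract does not specify the decay shape, so the linear form and the bound values here are assumptions):

```python
def pa_schedule(iteration, max_iterations, pa_max=0.5, pa_min=0.05):
    """Fraction Pa of nests abandoned at a given iteration, annealed linearly."""
    frac = iteration / (max_iterations - 1)   # 0 at the first iteration, 1 at the last
    return pa_max - (pa_max - pa_min) * frac
```

Early iterations replace many nests (more exploration); late iterations replace few (more exploitation), matching the convergence behavior the abstract describes.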

  17. Master-Leader-Slave Cuckoo Search with Parameter Control for ANN Optimization and Its Real-World Application to Water Quality Prediction.

    PubMed

    Jaddi, Najmeh Sadat; Abdullah, Salwani; Abdul Malek, Marlinda

    2017-01-01

    Artificial neural networks (ANNs) have been employed to solve a broad variety of tasks. The selection of an ANN model with appropriate weights is important in achieving accurate results. This paper presents an optimization strategy for ANN model selection based on the cuckoo search (CS) algorithm, which is rooted in the obligate brood parasitic actions of some cuckoo species. In order to enhance the convergence ability of basic CS, some modifications are proposed. The fraction Pa of the n nests replaced by new nests is a fixed parameter in basic CS. As the selection of Pa is a challenging issue and has a direct effect on exploration and therefore on convergence ability, in this work the Pa is set to a maximum value at initialization to achieve more exploration in early iterations and it is decreased during the search to achieve more exploitation in later iterations until it reaches the minimum value in the final iteration. In addition, a novel master-leader-slave multi-population strategy is used where the slaves employ the best fitness function among all slaves, which is selected by the leader under a certain condition. This fitness function is used for subsequent Lévy flights. In each iteration a copy of the best solution of each slave is migrated to the master and then the best solution is found by the master. The method is tested on benchmark classification and time series prediction problems and the statistical analysis proves the ability of the method. This method is also applied to a real-world water quality prediction problem with promising results.

  18. Rapid determination of sugar level in snack products using infrared spectroscopy.

    PubMed

    Wang, Ting; Rodriguez-Saona, Luis E

    2012-08-01

Real-time spectroscopic methods can provide a valuable window into food manufacturing to permit optimization of production rate, quality and safety. There is a need for cutting-edge sensor technology directed at improving efficiency, throughput and reliability of critical processes. The aim of the research was to evaluate the feasibility of infrared systems combined with chemometric analysis to develop rapid methods for determination of sugars in cereal products. Samples were ground and spectra were collected using a mid-infrared (MIR) spectrometer equipped with a triple-bounce ZnSe MIRacle attenuated total reflectance accessory or a Fourier transform near infrared (NIR) system equipped with a diffuse reflection-integrating sphere. Sugar contents were determined using a reference HPLC method. Partial least squares regression (PLSR) was used to create cross-validated calibration models. The predictability of the models was evaluated on an independent set of samples and compared with reference techniques. MIR and NIR spectra showed characteristic absorption bands for sugars, and generated excellent PLSR models (sucrose: SEP < 1.7% and r > 0.96). Multivariate models accurately and precisely predicted sugar level in snacks, allowing for rapid analysis. This simple technique allows reliable prediction of quality parameters and, with automation, enables food manufacturers to take early corrective actions that save time and money while establishing uniform quality. The U.S. snack food industry generates billions of dollars in revenue each year and vibrational spectroscopic methods combined with pattern recognition analysis could permit optimization of production rate, quality, and safety of many food products. This research showed that infrared spectroscopy is a powerful technique for near real-time (approximately 1 min) assessment of sugar content in various cereal products. © 2012 Institute of Food Technologists®
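The PLSR calibration step can be sketched in a few lines of NIPALS-style PLS1 (a generic implementation for illustration, not the chemometrics software used in the study; spectra and sugar values here would be the X matrix and reference y vector):

```python
import numpy as np

def pls1_fit(X, y, n_comp=2):
    """NIPALS-style PLS1. Returns (B, x_mean, y_mean) with yhat = (X - x_mean) @ B + y_mean."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)          # weight vector
        t = Xc @ w                      # scores
        p = Xc.T @ t / (t @ t)          # X loadings
        qk = yc @ t / (t @ t)           # y loading
        Xc = Xc - np.outer(t, p)        # deflate X and y
        yc = yc - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q) # regression vector in the original X space
    return B, X.mean(axis=0), y.mean()

def pls1_predict(model, X):
    B, xm, ym = model
    return (X - xm) @ B + ym
```

The number of components `n_comp` is normally chosen by cross-validation, which is the "cross-validated calibration" the abstract refers to.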

  19. Close games versus blowouts: Optimal challenge reinforces one's intrinsic motivation to win.

    PubMed

    Meng, Liang; Pei, Guanxiong; Zheng, Jiehui; Ma, Qingguo

    2016-12-01

    When immersed in intrinsically motivating activities, individuals actively seek optimal challenge, which generally brings the most satisfaction as they play hard and finally win. To better simulate real-life scenarios in the controlled laboratory setting, a two-player online StopWatch (SW) game was developed, whose format is similar to that of a badminton tournament. During the game, a male opponent played by a confederate ensured that the same-sex participant paired with him won both matches, one with a wide margin (the lack of challenge condition) and another with a narrow one (the optimal challenge condition). Electrophysiological data were recorded during the entire experiment. An enlarged Stimulus-preceding negativity (SPN) was observed in the optimal challenge condition, indicating a more concentrated anticipatory attention toward the feedback and a stronger intrinsic motivation during close games. Thus, this study provided original neural evidence for predictions of Self-determination theory (SDT) and Flow theory, and confirmed and emphasized the significant role of optimal challenge in promoting one's intrinsic motivation to win. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. A test of the optimality approach to modelling canopy properties and CO2 uptake by natural vegetation.

    PubMed

    Schymanski, Stanislaus J; Roderick, Michael L; Sivapalan, Murugesu; Hutley, Lindsay B; Beringer, Jason

    2007-12-01

Photosynthesis provides plants with their main building material, carbohydrates, and with the energy necessary to thrive and prosper in their environment. We expect, therefore, that natural vegetation would evolve optimally to maximize its net carbon profit (NCP), the difference between carbon acquired by photosynthesis and carbon spent on maintenance of the organs involved in its uptake. We modelled NCP for an optimal vegetation for a site in the wet-dry tropics of north Australia based on this hypothesis and on an ecophysiological gas exchange and photosynthesis model, and compared the modelled CO2 fluxes and canopy properties with observations from the site. The comparison gives insights into theoretical and real controls on gas exchange and canopy structure, and supports the optimality approach for the modelling of gas exchange of natural vegetation. The main advantage of the optimality approach we adopt is that no assumptions about the particular vegetation of a site are required, making it a very powerful tool for predicting vegetation response to long-term climate or land use change.

  1. Achieving optimal growth: lessons from simple metabolic modules

    NASA Astrophysics Data System (ADS)

    Goyal, Sidhartha; Chen, Thomas; Wingreen, Ned

    2009-03-01

    Metabolism is a universal property of living organisms. While the metabolic network itself has been well characterized, the logic of its regulation remains largely mysterious. Recent work has shown that growth rates of microorganisms, including the bacterium Escherichia coli, correlate well with optimal growth rates predicted by flux-balance analysis (FBA), a constraint-based computational method. How difficult is it for cells to achieve optimal growth? Our analysis of representative metabolic modules drawn from real metabolism shows that, in all cases, simple feedback inhibition allows nearly optimal growth. Indeed, product-feedback inhibition is found in every biosynthetic pathway and constitutes about 80% of metabolic regulation. However, we find that product-feedback systems designed to approach optimal growth necessarily produce large pool sizes of metabolites, with potentially detrimental effects on cells via toxicity and osmotic imbalance. Interestingly, the sizes of metabolite pools can be strongly restricted if the feedback inhibition is ultrasensitive (i.e. with high Hill coefficient). The need for ultrasensitive mechanisms to limit pool sizes may therefore explain some of the ubiquitous, puzzling complexity found in metabolic feedback regulation at both the transcriptional and post-transcriptional levels.

  2. Recent advances in stellarator optimization

    DOE PAGES

    Gates, D. A.; Boozer, A. H.; Brown, T.; ...

    2017-10-27

Computational optimization has revolutionized the field of stellarator design. To date, optimizations have focused primarily on optimization of neoclassical confinement and ideal MHD stability, although limited optimization of other parameters has also been performed. Here, we outline a select set of new concepts for stellarator optimization that, when taken as a group, present a significant step forward in the stellarator concept. One of the criticisms that has been leveled at existing methods of design is the complexity of the resultant field coils. Recently, a new coil optimization code, COILOPT++, which uses a spline instead of a Fourier representation of the coils, was written and included in the STELLOPT suite of codes. The advantage of this method is that it allows the addition of real space constraints on the locations of the coils. The code has been tested by generating coil designs for optimized quasi-axisymmetric stellarator plasma configurations of different aspect ratios. As an initial exercise, a constraint that the windings be vertical was placed on the large-major-radius half of the non-planar coils. Further constraints were also imposed that guaranteed that sector blanket modules could be removed from between the coils, enabling a sector maintenance scheme. Results of this exercise will be presented. New ideas on methods for the optimization of turbulent transport have garnered much attention since these methods have led to design concepts that are calculated to have reduced turbulent heat loss. We have explored possibilities for generating an experimental database to test whether the reduction in transport that is predicted is consistent with experimental observations. Thus, a series of equilibria that can be made in the now latent QUASAR experiment have been identified that will test the predicted transport scalings. Fast particle confinement studies aimed at developing a generalized optimization algorithm are also discussed. A new algorithm developed for the design of the scraper element on W7-X is presented along with ideas for automating the optimization approach.

  3. Recent advances in stellarator optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gates, D. A.; Boozer, A. H.; Brown, T.

Computational optimization has revolutionized the field of stellarator design. To date, optimizations have focused primarily on optimization of neoclassical confinement and ideal MHD stability, although limited optimization of other parameters has also been performed. Here, we outline a select set of new concepts for stellarator optimization that, when taken as a group, present a significant step forward in the stellarator concept. One of the criticisms that has been leveled at existing methods of design is the complexity of the resultant field coils. Recently, a new coil optimization code, COILOPT++, which uses a spline instead of a Fourier representation of the coils, was written and included in the STELLOPT suite of codes. The advantage of this method is that it allows the addition of real space constraints on the locations of the coils. The code has been tested by generating coil designs for optimized quasi-axisymmetric stellarator plasma configurations of different aspect ratios. As an initial exercise, a constraint that the windings be vertical was placed on the large-major-radius half of the non-planar coils. Further constraints were also imposed that guaranteed that sector blanket modules could be removed from between the coils, enabling a sector maintenance scheme. Results of this exercise will be presented. New ideas on methods for the optimization of turbulent transport have garnered much attention since these methods have led to design concepts that are calculated to have reduced turbulent heat loss. We have explored possibilities for generating an experimental database to test whether the reduction in transport that is predicted is consistent with experimental observations. Thus, a series of equilibria that can be made in the now latent QUASAR experiment have been identified that will test the predicted transport scalings. Fast particle confinement studies aimed at developing a generalized optimization algorithm are also discussed. A new algorithm developed for the design of the scraper element on W7-X is presented along with ideas for automating the optimization approach.

  4. A self optimizing synthetic organic reactor system using real-time in-line NMR spectroscopy.

    PubMed

    Sans, Victor; Porwol, Luzian; Dragone, Vincenza; Cronin, Leroy

    2015-02-01

A configurable platform for synthetic chemistry incorporating an in-line benchtop NMR that is capable of monitoring and controlling organic reactions in real-time is presented. The platform is controlled via a modular LabView software control system for the hardware, NMR, data analysis and feedback optimization. Using this platform we report the real-time advanced structural characterization of reaction mixtures, including 19F, 13C, DEPT, and 2D NMR spectroscopy (COSY, HSQC and 19F-COSY) for the first time. Finally, the potential of this technique is demonstrated through the optimization of a catalytic organic reaction in real-time, showing its applicability to self-optimizing systems using criteria such as stereoselectivity, multi-nuclear measurements or 2D correlations.

  5. A study of the application of singular perturbation theory. [development of a real time algorithm for optimal three dimensional aircraft maneuvers

    NASA Technical Reports Server (NTRS)

    Mehra, R. K.; Washburn, R. B.; Sajan, S.; Carroll, J. V.

    1979-01-01

A hierarchical real-time algorithm for optimal three-dimensional control of aircraft is described. Systematic methods are developed for real-time computation of nonlinear feedback controls by means of singular perturbation theory. The results are applied to a six-state, three-control-variable, point-mass model of an F-4 aircraft. Nonlinear feedback laws are presented for computing the optimal control of throttle, bank angle, and angle of attack. Real-time capability is assessed on a TI 9900 microcomputer. The breakdown of the singular perturbation approximation near the terminal point is examined. Continuation methods are examined to obtain exact optimal trajectories starting from the singular perturbation solutions.

  6. Muscle coordination is habitual rather than optimal.

    PubMed

    de Rugy, Aymar; Loeb, Gerald E; Carroll, Timothy J

    2012-05-23

    When sharing load among multiple muscles, humans appear to select an optimal pattern of activation that minimizes costs such as the effort or variability of movement. How the nervous system achieves this behavior, however, is unknown. Here we show that contrary to predictions from optimal control theory, habitual muscle activation patterns are surprisingly robust to changes in limb biomechanics. We first developed a method to simulate joint forces in real time from electromyographic recordings of the wrist muscles. When the model was altered to simulate the effects of paralyzing a muscle, the subjects simply increased the recruitment of all muscles to accomplish the task, rather than recruiting only the useful muscles. When the model was altered to make the force output of one muscle unusually noisy, the subjects again persisted in recruiting all muscles rather than eliminating the noisy one. Such habitual coordination patterns were also unaffected by real modifications of biomechanics produced by selectively damaging a muscle without affecting sensory feedback. Subjects naturally use different patterns of muscle contraction to produce the same forces in different pronation-supination postures, but when the simulation was based on a posture different from the actual posture, the recruitment patterns tended to agree with the actual rather than the simulated posture. The results appear inconsistent with computation of motor programs by an optimal controller in the brain. Rather, the brain may learn and recall command programs that result in muscle coordination patterns generated by lower sensorimotor circuitry that are functionally "good-enough."

  7. CO2 Removal from Biogas by Cyanobacterium Leptolyngbya sp. CChF1 Isolated from Lake Chapala, Mexico: Optimization of the Temperature and Light Intensity.

    PubMed

    Choix, Francisco J; Snell-Castro, Raúl; Arreola-Vargas, Jorge; Carbajal-López, Alberto; Méndez-Acosta, Hugo O

    2017-12-01

    In the present study, the capacity of the cyanobacterium Leptolyngbya sp. CChF1 to remove CO2 from real and synthetic biogas was evaluated. The identification of the cyanobacterium, isolated from Lake Chapala, was carried out by means of morphological and molecular analyses, while its potential for CO2 removal from biogas streams was evaluated by kinetic experiments and optimized by a central composite design coupled to a response surface methodology. Results demonstrated that Leptolyngbya sp. CChF1 is able to remove CO2 and grow indistinctly in real or synthetic biogas streams, showing tolerance to high concentrations of CO2 and CH4 (25% and 75%, respectively). The characterization of the biomass composition at the end of the kinetic assays revealed that the main accumulated by-products under both biogas streams were lipids, followed by proteins and carbohydrates. Regarding the optimization experiments, light intensity and temperature were the studied variables, while synthetic biogas was the carbon source. Results showed that light intensity was significant for CO2 capture efficiency (p = 0.0290), while temperature was significant for biomass production (p = 0.0024). The predicted CO2 capture efficiency under optimal conditions (27.1 °C and 920 lx) was 93.48%. Overall, the results of the present study suggest that Leptolyngbya sp. CChF1 is a suitable candidate for biogas upgrading.
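The central-composite/response-surface step above can be sketched numerically: fit a quadratic surface in temperature and light intensity by least squares and solve for its stationary point. The design points and the response function below are invented for illustration (chosen so the optimum sits near the reported 27.1 °C and 920 lx); they are not the study's data.

```python
import numpy as np

# Hypothetical central composite design in temperature (°C) and light (lx);
# these values are illustrative, not the study's actual design points.
T = np.array([22, 32, 22, 32, 20, 34, 27, 27, 27, 27, 27], dtype=float)
L = np.array([600, 600, 1200, 1200, 900, 900, 480, 1320, 900, 900, 900], dtype=float)

# Synthetic CO2-capture response with a known optimum near (27 °C, 920 lx)
y = 93.5 - 0.08 * (T - 27.0) ** 2 - 1e-4 * (L - 920.0) ** 2

# Quadratic response-surface model: y ~ b0 + b1*T + b2*L + b3*T^2 + b4*L^2 + b5*T*L
X = np.column_stack([np.ones_like(T), T, L, T**2, L**2, T * L])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point: solve grad = 0 -> [[2*b3, b5], [b5, 2*b4]] @ [T*, L*] = -[b1, b2]
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
T_opt, L_opt = np.linalg.solve(H, -b[1:3])
```

Because the synthetic response is exactly quadratic, the fitted surface recovers the planted optimum; with real (noisy) measurements the stationary point is only an estimate and should be checked against the design region.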

  8. Data analytics and optimization of an ice-based energy storage system for commercial buildings

    DOE PAGES

    Luo, Na; Hong, Tianzhen; Li, Hui; ...

    2017-07-25

    Ice-based thermal energy storage (TES) systems can shift peak cooling demand and reduce operational energy costs (with time-of-use rates) in commercial buildings. The accurate prediction of the cooling load, and the optimal control strategy for managing the charging and discharging of a TES system, are two critical elements to improving system performance and achieving energy cost savings. This study utilizes data-driven analytics and modeling to holistically understand the operation of an ice-based TES system in a shopping mall, calculating the system’s performance using actual measured data from installed meters and sensors. Results show that there is significant savings potential when the current operating strategy is improved by appropriately scheduling the operation of each piece of equipment of the TES system, as well as by determining the amount of charging and discharging for each day. A novel optimal control strategy, determined by an optimization algorithm of Sequential Quadratic Programming, was developed to minimize the TES system’s operating costs. Three heuristic strategies were also investigated for comparison with our proposed strategy, and the results demonstrate the superiority of our method to the heuristic strategies in terms of total energy cost savings. Specifically, the optimal strategy yields energy cost savings of up to 11.3% per day and 9.3% per month compared with current operational strategies. A one-day-ahead hourly load prediction was also developed using machine learning algorithms, which facilitates the adoption of the developed data analytics and optimization of the control strategy in a real TES system operation.
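The SQP-based scheduling idea can be illustrated with a toy ice-storage dispatch: choose hourly charge/discharge amounts to minimize time-of-use electricity cost subject to tank-capacity constraints. All loads, prices, and equipment parameters below are hypothetical, and scipy's SLSQP solver stands in for the paper's optimizer.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 6-hour ice-storage dispatch (numbers are illustrative only)
load  = np.array([0.0, 0.0, 40.0, 60.0, 60.0, 40.0])     # cooling demand, kWh
price = np.array([0.05, 0.05, 0.20, 0.25, 0.25, 0.20])   # electricity, $/kWh
cop, cap = 3.0, 100.0                                    # chiller COP, tank size (kWh)

def cost(x):
    # x[t] > 0: tank discharge meets load; x[t] < 0: chiller charges the tank
    return float(np.sum(price * (load - x) / cop))

cons = [
    {"type": "ineq", "fun": lambda x: load - x},           # chiller output >= 0
    {"type": "ineq", "fun": lambda x: -np.cumsum(x)},      # tank level >= 0
    {"type": "ineq", "fun": lambda x: cap + np.cumsum(x)}, # tank level <= cap
]
res = minimize(cost, np.zeros(6), method="SLSQP",
               bounds=[(-50.0, 60.0)] * 6, constraints=cons)
```

The solver learns the expected behavior: charge the tank during the cheap off-peak hours and discharge it against the expensive peak, lowering total cost versus running the chiller to meet load directly.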

  9. Data analytics and optimization of an ice-based energy storage system for commercial buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Na; Hong, Tianzhen; Li, Hui

    Ice-based thermal energy storage (TES) systems can shift peak cooling demand and reduce operational energy costs (with time-of-use rates) in commercial buildings. The accurate prediction of the cooling load, and the optimal control strategy for managing the charging and discharging of a TES system, are two critical elements to improving system performance and achieving energy cost savings. This study utilizes data-driven analytics and modeling to holistically understand the operation of an ice-based TES system in a shopping mall, calculating the system’s performance using actual measured data from installed meters and sensors. Results show that there is significant savings potential when the current operating strategy is improved by appropriately scheduling the operation of each piece of equipment of the TES system, as well as by determining the amount of charging and discharging for each day. A novel optimal control strategy, determined by an optimization algorithm of Sequential Quadratic Programming, was developed to minimize the TES system’s operating costs. Three heuristic strategies were also investigated for comparison with our proposed strategy, and the results demonstrate the superiority of our method to the heuristic strategies in terms of total energy cost savings. Specifically, the optimal strategy yields energy cost savings of up to 11.3% per day and 9.3% per month compared with current operational strategies. A one-day-ahead hourly load prediction was also developed using machine learning algorithms, which facilitates the adoption of the developed data analytics and optimization of the control strategy in a real TES system operation.

  10. Beyond endoscopic assessment in inflammatory bowel disease: real-time histology of disease activity by non-linear multimodal imaging

    NASA Astrophysics Data System (ADS)

    Chernavskaia, Olga; Heuke, Sandro; Vieth, Michael; Friedrich, Oliver; Schürmann, Sebastian; Atreya, Raja; Stallmach, Andreas; Neurath, Markus F.; Waldner, Maximilian; Petersen, Iver; Schmitt, Michael; Bocklitz, Thomas; Popp, Jürgen

    2016-07-01

    Assessing disease activity is a prerequisite for an adequate treatment of inflammatory bowel diseases (IBD) such as Crohn’s disease and ulcerative colitis. In addition to endoscopic mucosal healing, histologic remission poses a promising end-point of IBD therapy. However, evaluating histological remission harbors the risk for complications due to the acquisition of biopsies and results in a delay of diagnosis because of tissue processing procedures. In this regard, non-linear multimodal imaging techniques might serve as an unparalleled technique that allows the real-time evaluation of microscopic IBD activity in the endoscopy unit. In this study, tissue sections were investigated using the non-linear multimodal microscopy combination of coherent anti-Stokes Raman scattering (CARS), two-photon excited auto fluorescence (TPEF) and second-harmonic generation (SHG). After the measurement a gold-standard assessment of histological indexes was carried out based on a conventional H&E stain. Subsequently, various geometry and intensity related features were extracted from the multimodal images. An optimized feature set was utilized to predict histological index levels based on a linear classifier. Based on the automated prediction, the diagnosis time interval is decreased. Therefore, non-linear multimodal imaging may provide a real-time diagnosis of IBD activity suited to assist clinical decision making within the endoscopy unit.

  11. Decoding the Traumatic Memory among Women with PTSD: Implications for Neurocircuitry Models of PTSD and Real-Time fMRI Neurofeedback

    PubMed Central

    Cisler, Josh M.; Bush, Keith; James, G. Andrew; Smitherman, Sonet; Kilts, Clinton D.

    2015-01-01

    Posttraumatic Stress Disorder (PTSD) is characterized by intrusive recall of the traumatic memory. While numerous studies have investigated the neural processing mechanisms engaged during trauma memory recall in PTSD, these analyses have only focused on group-level contrasts that reveal little about the predictive validity of the identified brain regions. By contrast, a multivariate pattern analysis (MVPA) approach towards identifying the neural mechanisms engaged during trauma memory recall would entail testing whether a multivariate set of brain regions is reliably predictive of (i.e., discriminates) whether an individual is engaging in trauma or non-trauma memory recall. Here, we use a MVPA approach to test 1) whether trauma memory vs neutral memory recall can be predicted reliably using a multivariate set of brain regions among women with PTSD related to assaultive violence exposure (N=16), 2) the methodological parameters (e.g., spatial smoothing, number of memory recall repetitions, etc.) that optimize classification accuracy and reproducibility of the feature weight spatial maps, and 3) the correspondence between brain regions that discriminate trauma memory recall and the brain regions predicted by neurocircuitry models of PTSD. Cross-validation classification accuracy was significantly above chance for all methodological permutations tested; mean accuracy across participants was 76% for the methodological parameters selected as optimal for both efficiency and accuracy. Classification accuracy was significantly better for a voxel-wise approach relative to voxels within restricted regions-of-interest (ROIs); classification accuracy did not differ when using PTSD-related ROIs compared to randomly generated ROIs. ROI-based analyses suggested the reliable involvement of the left hippocampus in discriminating memory recall across participants and that the contribution of the left amygdala to the decision function was dependent upon PTSD symptom severity. 
These results have methodological implications for real-time fMRI neurofeedback of the trauma memory in PTSD and conceptual implications for neurocircuitry models of PTSD that attempt to explain core neural processing mechanisms mediating PTSD. PMID:26241958
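The core MVPA procedure described above, training a multivariate classifier over many voxels and scoring it by cross-validation, can be sketched on synthetic data. The trial counts, voxel counts, and effect size below are invented, and logistic regression stands in for whatever classifier the study used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 200   # hypothetical sizes, not the study's dimensions

# Synthetic "trauma" vs "neutral" recall trials: condition 1 adds a weak
# shared spatial pattern on top of trial-to-trial noise
pattern = rng.normal(0.0, 1.0, n_voxels)
X = rng.normal(0.0, 1.0, (n_trials, n_voxels))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1] += 0.5 * pattern

# Cross-validated decoding accuracy: the MVPA criterion of interest
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
```

As in the paper, the meaningful quantity is cross-validated accuracy relative to chance (0.5 for two balanced conditions), not the fit on the training trials.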

  12. Decoding the Traumatic Memory among Women with PTSD: Implications for Neurocircuitry Models of PTSD and Real-Time fMRI Neurofeedback.

    PubMed

    Cisler, Josh M; Bush, Keith; James, G Andrew; Smitherman, Sonet; Kilts, Clinton D

    2015-01-01

    Posttraumatic Stress Disorder (PTSD) is characterized by intrusive recall of the traumatic memory. While numerous studies have investigated the neural processing mechanisms engaged during trauma memory recall in PTSD, these analyses have only focused on group-level contrasts that reveal little about the predictive validity of the identified brain regions. By contrast, a multivariate pattern analysis (MVPA) approach towards identifying the neural mechanisms engaged during trauma memory recall would entail testing whether a multivariate set of brain regions is reliably predictive of (i.e., discriminates) whether an individual is engaging in trauma or non-trauma memory recall. Here, we use a MVPA approach to test 1) whether trauma memory vs neutral memory recall can be predicted reliably using a multivariate set of brain regions among women with PTSD related to assaultive violence exposure (N=16), 2) the methodological parameters (e.g., spatial smoothing, number of memory recall repetitions, etc.) that optimize classification accuracy and reproducibility of the feature weight spatial maps, and 3) the correspondence between brain regions that discriminate trauma memory recall and the brain regions predicted by neurocircuitry models of PTSD. Cross-validation classification accuracy was significantly above chance for all methodological permutations tested; mean accuracy across participants was 76% for the methodological parameters selected as optimal for both efficiency and accuracy. Classification accuracy was significantly better for a voxel-wise approach relative to voxels within restricted regions-of-interest (ROIs); classification accuracy did not differ when using PTSD-related ROIs compared to randomly generated ROIs. ROI-based analyses suggested the reliable involvement of the left hippocampus in discriminating memory recall across participants and that the contribution of the left amygdala to the decision function was dependent upon PTSD symptom severity. 
These results have methodological implications for real-time fMRI neurofeedback of the trauma memory in PTSD and conceptual implications for neurocircuitry models of PTSD that attempt to explain core neural processing mechanisms mediating PTSD.

  13. Prediction of Tibial Rotation Pathologies Using Particle Swarm Optimization and K-Means Algorithms.

    PubMed

    Sari, Murat; Tuna, Can; Akogul, Serkan

    2018-03-28

    The aim of this article is to investigate pathological subjects from a population through different physical factors. To achieve this, particle swarm optimization (PSO) and K-means (KM) clustering algorithms have been combined (PSO-KM). Datasets provided by the literature were divided into three clusters based on age and weight parameters, and each of the right tibial external rotation (RTER), right tibial internal rotation (RTIR), left tibial external rotation (LTER), and left tibial internal rotation (LTIR) values was divided into three types, Type 1, Type 2 and Type 3 (Type 2 is non-pathological (normal) and the other two types are pathological (abnormal)). The rotation values of every subject in each cluster were noted. Then the algorithm was run and the produced values were considered. The values produced by the algorithm, the PSO-KM, have been compared with the real values. The hybrid PSO-KM algorithm has been very successful at the optimal clustering of the tibial rotation types through the physical criteria. In this investigation, Type 2 was especially predictable, and the PSO-KM algorithm has been very successful as an operational system for clustering and optimizing the tibial motion data assessments. These research findings are expected to be very useful for health providers, such as physiotherapists and orthopedists, helping clinicians to design appropriate treatment schedules for their patients.
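A minimal sketch of the PSO-KM idea: a particle swarm searches over centroid positions while the K-means objective (distance of each point to its nearest centroid) scores each particle. The one-dimensional "rotation" data and all swarm parameters below are hypothetical, not the study's datasets.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic tibial-rotation-like measurements (degrees), three invented types
data = np.concatenate([rng.normal(10, 1, 30), rng.normal(20, 1, 30),
                       rng.normal(30, 1, 30)])

def sse(centroids):
    # K-means objective: distance of each point to its nearest centroid
    d = np.abs(data[:, None] - centroids[None, :])
    return float(d.min(axis=1).sum())

# Particle swarm over centroid vectors (K = 3; one feature for brevity)
n_particles, k = 20, 3
pos = rng.uniform(data.min(), data.max(), (n_particles, k))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([sse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    # Standard PSO velocity update: inertia + cognitive + social terms
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([sse(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

# Assign each measurement to the nearest optimized centroid (cluster label)
labels = np.abs(data[:, None] - np.sort(gbest)[None, :]).argmin(axis=1)
```

The swarm replaces K-means' sensitive centroid initialization with a global search over centroid positions, which is the essential benefit the hybrid claims.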

  14. Comparison of simulator fidelity model predictions with in-simulator evaluation data

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Mckissick, B. T.; Ashworth, B. R.

    1983-01-01

    A full-factorial, in-simulator experiment on a single-axis, multiloop, compensatory pitch-tracking task is described. The experiment was conducted to provide data to validate extensions to an analytic, closed-loop model of a real-time digital simulation facility. The results of the experiment, encompassing various simulation fidelity factors such as visual delay, digital integration algorithms, computer iteration rates, control loading bandwidths and proprioceptive cues, and g-seat kinesthetic cues, are compared with predictions obtained from the analytic model incorporating an optimal control model of the human pilot. The in-simulator results demonstrate more sensitivity to the g-seat and to the control loader conditions than was predicted by the model. However, the model predictions are generally upheld, although the predicted magnitudes of the states and of the error terms are sometimes off considerably. Of particular concern is the large sensitivity difference for one control loader condition, as well as the model/in-simulator mismatch in the magnitude of the plant states when the other states match.

  15. Spatially aggregated multiclass pattern classification in functional MRI using optimally selected functional brain areas.

    PubMed

    Zheng, Weili; Ackley, Elena S; Martínez-Ramón, Manel; Posse, Stefan

    2013-02-01

    In previous works, boosting aggregation of classifier outputs from discrete brain areas has been demonstrated to reduce dimensionality and improve the robustness and accuracy of functional magnetic resonance imaging (fMRI) classification. However, dimensionality reduction and classification of mixed activation patterns of multiple classes remain challenging. In the present study, the goals were (a) to reduce dimensionality by combining feature reduction at the voxel level and backward elimination of optimally aggregated classifiers at the region level, (b) to compare region selection for spatially aggregated classification using boosting and partial least squares regression methods and (c) to resolve mixed activation patterns using probabilistic prediction of individual tasks. Brain activation maps from interleaved visual, motor, auditory and cognitive tasks were segmented into 144 functional regions. Feature selection reduced the number of feature voxels by more than 50%, leaving 95 regions. The two aggregation approaches further reduced the number of regions to 30, resulting in more than 75% reduction of classification time and misclassification rates of less than 3%. Boosting and partial least squares (PLS) were compared to select the most discriminative and the most task correlated regions, respectively. Successful task prediction in mixed activation patterns was feasible within the first block of task activation in real-time fMRI experiments. This methodology is suitable for sparsifying activation patterns in real-time fMRI and for neurofeedback from distributed networks of brain activation. Copyright © 2013 Elsevier Inc. All rights reserved.
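The two-stage dimensionality reduction described above can be caricatured with a voxel-level ANOVA feature filter feeding a linear classifier; the region-level aggregation and backward elimination steps are omitted for brevity. All array sizes and the synthetic "activation maps" below are invented.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)

# Synthetic maps: 80 scans x 600 voxels, 4 task classes, with only the
# first 60 voxels carrying class information (all values hypothetical)
X = rng.normal(size=(80, 600))
y = np.repeat(np.arange(4), 20)
X[:, :60] += 0.8 * np.eye(4)[y] @ rng.normal(size=(4, 60))

# Voxel-level reduction (ANOVA F-score) before multiclass classification,
# standing in for the paper's combined voxel/region reduction
clf = make_pipeline(SelectKBest(f_classif, k=120),
                    LogisticRegression(max_iter=500))
acc = cross_val_score(clf, X, y, cv=5).mean()
```

Wrapping the selector in the cross-validation pipeline matters: selecting features on the full dataset before splitting would leak test information into training, inflating accuracy.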

  16. Evaluation of hybrid inverse planning and optimization (HIPO) algorithm for optimization in real-time, high-dose-rate (HDR) brachytherapy for prostate.

    PubMed

    Pokharel, Shyam; Rana, Suresh; Blikenstaff, Joseph; Sadeghi, Amir; Prestidge, Bradley

    2013-07-08

    The purpose of this study is to investigate the effectiveness of the HIPO planning and optimization algorithm for real-time prostate HDR brachytherapy. This study consists of 20 patients who underwent ultrasound-based real-time HDR brachytherapy of the prostate using the treatment planning system called Oncentra Prostate (SWIFT version 3.0). The treatment plans for all patients were optimized using inverse dose-volume histogram-based optimization followed by graphical optimization (GRO) in real time. GRO is the manual manipulation of isodose lines slice by slice. The quality of the plan heavily depends on planner expertise and experience. The data for all patients were retrieved later, and treatment plans were created and optimized using the HIPO algorithm with the same set of dose constraints, number of catheters, and set of contours as in the real-time optimization algorithm. The HIPO algorithm is a hybrid because it combines both stochastic and deterministic algorithms. The stochastic algorithm, called simulated annealing, searches the optimal catheter distributions for a given set of dose objectives. The deterministic algorithm, called dose-volume histogram-based optimization (DVHO), optimizes three-dimensional dose distribution quickly by moving straight downhill once it is in the advantageous region of the search space given by the stochastic algorithm. The PTV receiving 100% of the prescription dose (V100) was 97.56% and 95.38% with GRO and HIPO, respectively. The mean dose (D(mean)) and minimum dose to 10% volume (D10) for the urethra, rectum, and bladder were all statistically lower with HIPO compared to GRO using Student's paired t-test at the 5% significance level. HIPO can provide treatment plans with comparable target coverage to that of GRO with a reduction in dose to the critical structures.
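The stochastic half of HIPO, a simulated-annealing search over catheter placements, can be sketched with a generic annealer on a toy coverage problem: choose k of n candidate positions so that target points lie close to some chosen position. The geometry, cooling schedule, and cost function below are hypothetical stand-ins for the clinical dose objectives.

```python
import math
import random

random.seed(3)

# Toy stand-in for catheter selection: 11 target points on [0, 1] must be
# covered by k = 4 of 21 candidate positions (all values invented)
targets = [i / 10.0 for i in range(11)]
candidates = [i / 20.0 for i in range(21)]
k = 4

def coverage_cost(chosen):
    # Sum of each target's distance to its nearest chosen position
    return sum(min(abs(t - candidates[c]) for c in chosen) for t in targets)

current = random.sample(range(len(candidates)), k)
best, best_cost = list(current), coverage_cost(current)
temp = 1.0
for _ in range(2000):
    # Neighbor move: swap one chosen position for an unused one
    trial = list(current)
    trial[random.randrange(k)] = random.choice(
        [i for i in range(len(candidates)) if i not in current])
    delta = coverage_cost(trial) - coverage_cost(current)
    # Accept improvements always; accept worsenings with Boltzmann probability
    if delta < 0 or random.random() < math.exp(-delta / temp):
        current = trial
        if coverage_cost(current) < best_cost:
            best, best_cost = list(current), coverage_cost(current)
    temp *= 0.997  # geometric cooling
```

In HIPO the annealer's output (a catheter configuration) is then handed to the deterministic DVH-based optimizer; here we stop at the combinatorial search itself.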

  17. Real-Time Classification of Exercise Exertion Levels Using Discriminant Analysis of HRV Data.

    PubMed

    Jeong, In Cheol; Finkelstein, Joseph

    2015-01-01

    Heart rate variability (HRV) has been shown to reflect activation of the sympathetic nervous system; however, it is not clear which set of HRV parameters is optimal for real-time classification of exercise exertion levels. No studies have compared the potential of the two types of HRV parameters (time-domain and frequency-domain) for predicting exercise exertion level using discriminant analysis. The main goal of this study was to compare the potential of HRV time-domain parameters versus HRV frequency-domain parameters in classifying exercise exertion level. Rest, exercise, and recovery categories were used in the classification models. Overall classification agreement of 79.5% for the time-domain parameters, compared with 52.8% for the frequency-domain parameters, demonstrated that the time-domain parameters had higher potential for classifying exercise exertion levels.
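Classifying exertion levels from time-domain HRV features by discriminant analysis can be sketched as follows; the SDNN/RMSSD class means and noise level are invented for illustration, not taken from the study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# Synthetic time-domain HRV features (SDNN, RMSSD in ms) for three states;
# the class means below are invented, not the study's values
means = {0: (60.0, 45.0),   # rest: high variability
         1: (25.0, 12.0),   # exercise: suppressed variability
         2: (40.0, 25.0)}   # recovery: intermediate
X = np.vstack([rng.normal(means[c], 6.0, size=(30, 2)) for c in (0, 1, 2)])
y = np.repeat([0, 1, 2], 30)

# Cross-validated agreement of the discriminant classifier
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
```

The same scaffold works for frequency-domain features (e.g., LF/HF power); the paper's comparison amounts to running both feature sets through this kind of cross-validated discriminant analysis and comparing the agreement rates.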

  18. Real-time forecasts of dengue epidemics

    NASA Astrophysics Data System (ADS)

    Yamana, T. K.; Shaman, J. L.

    2015-12-01

    Dengue is a mosquito-borne viral disease prevalent in the tropics and subtropics, with an estimated 2.5 billion people at risk of transmission. In many areas with endemic dengue, disease transmission is seasonal but prone to high inter-annual variability with occasional severe epidemics. Predicting and preparing for periods of higher than average transmission is a significant public health challenge. Here we present a model of dengue transmission and a framework for optimizing model simulations with real-time observational data of dengue cases and environmental variables in order to generate ensemble-based forecasts of the timing and severity of disease outbreaks. The model-inference system is validated using synthetic data and dengue outbreak records. Retrospective forecasts are generated for a number of locations and the accuracy of these forecasts is quantified.

  19. Computer assisted thermal-vacuum testing

    NASA Technical Reports Server (NTRS)

    Petrie, W.; Mikk, G.

    1977-01-01

    In testing complex systems and components under dynamic thermal-vacuum environments, it is desirable to optimize the environment control sequence in order to reduce test duration and cost. This paper describes an approach where a computer is utilized as part of the test control operation. Real time test data is made available to the computer through time-sharing terminals at appropriate time intervals. A mathematical model of the test article and environmental control equipment is then operated on using the real time data to yield current thermal status, temperature analysis, trend prediction and recommended thermal control setting changes to arrive at the required thermal condition. The data acquisition interface and the time-sharing hook-up to an IBM-370 computer is described along with a typical control program and data demonstrating its use.

  20. Active model-based balancing strategy for self-reconfigurable batteries

    NASA Astrophysics Data System (ADS)

    Bouchhima, Nejmeddine; Schnierle, Marc; Schulte, Sascha; Birke, Kai Peter

    2016-08-01

    This paper describes a novel balancing strategy for self-reconfigurable batteries in which the discharge and charge rates of each cell can be controlled. While much effort has been focused on improving the hardware architecture of self-reconfigurable batteries, energy equalization algorithms have not been systematically optimized in terms of maximizing the efficiency of the balancing system. Our approach includes aspects of such optimization theory. We develop a balancing strategy for optimal control of the discharge rate of battery cells. We first formulate the cell balancing as a nonlinear optimal control problem, which is afterward modeled as a network program. Using dynamic programming techniques and MATLAB's vectorization feature, we solve the optimal control problem by generating the optimal battery operation policy for a given drive cycle. The simulation results show that the proposed strategy efficiently balances the cells over the life of the battery, an obvious advantage that is absent in other conventional approaches. Our algorithm is shown to be robust when tested against different influencing parameters varying over a wide spectrum on different drive cycles. Furthermore, owing to its low computation time and proven low sensitivity to inaccurate power predictions, our strategy can be integrated in a real-time system.
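The dynamic-programming flavor of the balancing problem can be shown on a toy two-cell pack: at each step exactly one cell supplies a unit of charge, per-step losses grow as a cell depletes, and a DP over how many units cell 1 has supplied finds the minimum-loss split. The loss model and SOC numbers are illustrative only, far simpler than the paper's network program.

```python
# Toy two-cell pack: at each of T steps one cell supplies one unit of charge;
# per-step loss grows as a cell's state of charge (SOC) drops. All numbers
# and the loss model are invented for illustration.
s0 = (8, 12)          # initial SOC (units) of cell 1 and cell 2
T = 10                # discharge steps (total demand = 10 units)

def step_loss(soc):
    return 1.0 / soc  # hypothetical loss: drawing from a depleted cell is costly

# DP state after t steps: j = units supplied so far by cell 1
# (cell 2 has then supplied t - j units)
INF = float("inf")
cost = {0: 0.0}
for t in range(T):
    nxt = {}
    for j, c in cost.items():
        soc1, soc2 = s0[0] - j, s0[1] - (t - j)
        if soc1 > 0:  # draw this step from cell 1
            nxt[j + 1] = min(nxt.get(j + 1, INF), c + step_loss(soc1))
        if soc2 > 0:  # draw this step from cell 2
            nxt[j] = min(nxt.get(j, INF), c + step_loss(soc2))
    cost = nxt

best_j = min(cost, key=cost.get)  # optimal number of units taken from cell 1
```

With these numbers the DP takes 3 units from the smaller cell and 7 from the larger one, leaving both at the same final SOC, the balanced outcome the paper's strategy targets.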

  1. A novel health indicator for on-line lithium-ion batteries remaining useful life prediction

    NASA Astrophysics Data System (ADS)

    Zhou, Yapeng; Huang, Miaohua; Chen, Yupu; Tao, Ye

    2016-07-01

    Prediction of the remaining useful life (RUL) of lithium-ion batteries plays an important role in an intelligent battery management system. The capacity and internal resistance are often used as the battery health indicator (HI) for quantifying degradation and predicting RUL. However, on-line measurement of capacity is hardly realizable because batteries are rarely fully charged and discharged in use, and on-line measurement of internal resistance is prohibitively expensive. Therefore, there is a great need to find an alternative way to deal with this plight. In this work, a novel HI is extracted from the operating parameters of lithium-ion batteries for degradation modeling and RUL prediction. Moreover, the Box-Cox transformation is employed to improve HI performance. Then Pearson and Spearman correlation analyses are utilized to evaluate the similarity between the real capacity and the estimated capacity derived from the HI. Next, both a simple statistical regression technique and an optimized relevance vector machine are employed to predict the RUL based on the presented HI. The correlation analyses and prediction results show the efficiency and effectiveness of the proposed HI for battery degradation modeling and RUL prediction.
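The HI-evaluation pipeline, Box-Cox transformation followed by Pearson and Spearman checks against the real capacity, can be sketched with a synthetic indicator; the decay model and noise levels below are invented, not battery data from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical cycle-indexed health indicator (HI) that tracks capacity
# through a nonlinear map, plus the "real" capacity it should reflect
cycles = np.arange(1, 201)
capacity = 2.0 - 0.004 * cycles + rng.normal(0, 0.005, cycles.size)  # Ah
hi = np.exp(capacity) + rng.normal(0, 0.01, cycles.size)  # nonlinear, positive proxy

# Box-Cox transformation (requires strictly positive input) linearizes the HI
hi_bc, lam = stats.boxcox(hi)

# Pearson measures linear similarity; Spearman measures monotonic similarity
pearson = stats.pearsonr(hi_bc, capacity)[0]
spearman = stats.spearmanr(hi, capacity)[0]
```

A high Spearman correlation on the raw HI shows it orders the cycles like capacity does, while a high Pearson correlation after Box-Cox shows the transformed HI can stand in for capacity in linear degradation models.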

  2. Operationalizing the Space Weather Modeling Framework: Challenges and Resolutions

    NASA Astrophysics Data System (ADS)

    Welling, D. T.; Gombosi, T. I.; Toth, G.; Singer, H. J.; Millward, G. H.; Balch, C. C.; Cash, M. D.

    2016-12-01

    Predicting ground-based magnetic perturbations is a critical step towards specifying and predicting geomagnetically induced currents (GICs) in high voltage transmission lines. Currently, the Space Weather Modeling Framework (SWMF), a flexible modeling framework for simulating the multi-scale space environment, is being transitioned from research to operational use (R2O) by NOAA's Space Weather Prediction Center. Upon completion of this transition, the SWMF will provide localized time-varying magnetic field (dB/dt) predictions using real-time solar wind observations from L1 and the F10.7 proxy for EUV as model input. This presentation chronicles the challenges encountered during the R2O transition of the SWMF. Because operations relies on frequent calculations of global surface dB/dt, new optimizations were required to keep the model running faster than real time. Additionally, several singular situations arose during the 30-day robustness test that required immediate attention. Solutions and strategies for overcoming these issues will be presented. This includes new failsafe options for code execution, new physics and coupling parameters, and the development of an automated validation suite that allows us to monitor performance with code evolution. Finally, the operations-to-research (O2R) impact on SWMF-related research is presented. The lessons learned from this work are valuable and instructive for the space weather community as further R2O progress is made.

  3. Data-driven reinforcement learning–based real-time energy management system for plug-in hybrid electric vehicles

    DOE PAGES

    Qi, Xuewei; Wu, Guoyuan; Boriboonsomsin, Kanok; ...

    2016-01-01

    Plug-in hybrid electric vehicles (PHEVs) show great promise in reducing transportation-related fossil fuel consumption and greenhouse gas emissions. Designing an efficient energy management system (EMS) for PHEVs to achieve better fuel economy has been an active research topic for decades. Most of the advanced systems rely either on a priori knowledge of future driving conditions to achieve the optimal but not real-time solution (e.g., using a dynamic programming strategy) or on only current driving situations to achieve a real-time but nonoptimal solution (e.g., rule-based strategy). This paper proposes a reinforcement learning–based real-time EMS for PHEVs to address the trade-off between real-time performance and optimal energy savings. The proposed model can optimize the power-split control in real time while learning the optimal decisions from historical driving cycles. Here, a case study on a real-world commute trip shows that about a 12% fuel saving can be achieved without considering charging opportunities; further, an 8% fuel saving can be achieved when charging opportunities are considered, compared with the standard binary mode control strategy.
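The reinforcement-learning idea can be reduced to a tabular Q-learning toy: states are battery SOC bins, actions are "engine" vs "battery", and the agent learns to spend stored charge to avoid fuel cost. The state space, rewards, and hyperparameters below are illustrative, far simpler than the paper's EMS.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy power-split problem: at each step the controller picks engine (0) or
# battery (1); battery use burns no fuel but consumes charge. Everything
# here (states, rewards, rates) is invented, not the paper's vehicle model.
n_soc, n_actions = 6, 2
Q = np.zeros((n_soc, n_actions))
alpha, gamma, eps = 0.2, 0.5, 0.1

for _ in range(500):                       # training episodes
    soc = n_soc - 1                        # start each trip fully charged
    for _ in range(10):                    # steps per trip
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[soc].argmax())
        if a == 1 and soc > 0:
            reward, nxt = 0.0, soc - 1     # battery: no fuel burned
        else:
            reward, nxt = -1.0, soc        # engine (or empty battery): fuel cost
        # Standard Q-learning temporal-difference update
        Q[soc, a] += alpha * (reward + gamma * Q[nxt].max() - Q[soc, a])
        soc = nxt

policy = Q.argmax(axis=1)                  # learned SOC -> action map
```

After training, the greedy policy prefers the battery whenever charge remains, which is the toy analogue of learning power-split decisions from historical driving cycles instead of requiring the whole trip in advance.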

  4. Data-driven reinforcement learning–based real-time energy management system for plug-in hybrid electric vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Xuewei; Wu, Guoyuan; Boriboonsomsin, Kanok

    Plug-in hybrid electric vehicles (PHEVs) show great promise in reducing transportation-related fossil fuel consumption and greenhouse gas emissions. Designing an efficient energy management system (EMS) for PHEVs to achieve better fuel economy has been an active research topic for decades. Most of the advanced systems rely either on a priori knowledge of future driving conditions to achieve the optimal but not real-time solution (e.g., using a dynamic programming strategy) or on only current driving situations to achieve a real-time but nonoptimal solution (e.g., rule-based strategy). This paper proposes a reinforcement learning–based real-time EMS for PHEVs to address the trade-off between real-time performance and optimal energy savings. The proposed model can optimize the power-split control in real time while learning the optimal decisions from historical driving cycles. Here, a case study on a real-world commute trip shows that about a 12% fuel saving can be achieved without considering charging opportunities; further, an 8% fuel saving can be achieved when charging opportunities are considered, compared with the standard binary mode control strategy.

  5. Two- and three-dimensional transvaginal ultrasound with power Doppler angiography and gel infusion sonography for diagnosis of endometrial malignancy.

    PubMed

    Dueholm, M; Christensen, J W; Rydbjerg, S; Hansen, E S; Ørtoft, G

    2015-06-01

    To evaluate the diagnostic efficiency of two-dimensional (2D) and three-dimensional (3D) transvaginal ultrasonography, power Doppler angiography (PDA) and gel infusion sonography (GIS) at offline analysis for recognition of malignant endometrium compared with real-time evaluation during scanning, and to determine optimal image parameters at 3D analysis. One hundred and sixty-nine consecutive women with postmenopausal bleeding and endometrial thickness ≥ 5 mm underwent systematic evaluation of endometrial pattern on 2D imaging, and 2D videoclips and 3D volumes were later analyzed offline. Histopathological findings at hysteroscopy or hysterectomy were used as the reference standard. The efficiency of the different techniques for diagnosis of malignancy was calculated and compared. 3D image parameters, endometrial volume and 3D vascular indices were assessed. Optimal 3D image parameters were transformed by logistic regression into a risk of endometrial cancer (REC) score, including scores for body mass index, endometrial thickness and endometrial morphology at gray-scale and PDA and GIS. Offline 2D and 3D analysis were equivalent, but had lower diagnostic performance compared with real-time evaluation during scanning. Their diagnostic performance was not markedly improved by the addition of PDA or GIS, but their efficiency was comparable with that of real-time 2D-GIS in offline examinations of good image quality. On logistic regression, the 3D parameters from the REC-score system had the highest diagnostic efficiency. The area under the curve of the REC-score system at 3D-GIS (0.89) was not improved by inclusion of vascular indices or endometrial volume calculations. Real-time evaluation during scanning is most efficient, but offline 2D and 3D analysis is useful for prediction of endometrial cancer when good image quality can be obtained. 
The diagnostic efficiency at 3D analysis may be improved by use of REC-scoring systems, without the need for calculation of vascular indices or endometrial volume. The optimal imaging modality appears to be real-time 2D-GIS. Copyright © 2014 ISUOG. Published by John Wiley & Sons Ltd.

  6. Stochastic Optimization for an Analytical Model of Saltwater Intrusion in Coastal Aquifers

    PubMed Central

    Stratis, Paris N.; Karatzas, George P.; Papadopoulou, Elena P.; Zakynthinaki, Maria S.; Saridakis, Yiannis G.

    2016-01-01

    The present study implements a stochastic optimization technique to optimally manage freshwater pumping from coastal aquifers. Our simulations utilize the well-known sharp interface model for saltwater intrusion in coastal aquifers together with its known analytical solution. The objective is to maximize the total volume of freshwater pumped by the wells from the aquifer while, at the same time, protecting the aquifer from saltwater intrusion. In the direction of dealing with this problem in real time, the ALOPEX stochastic optimization method is used to optimize the pumping rates of the wells, coupled with a penalty-based strategy that keeps the saltwater front at a safe distance from the wells. Several numerical optimization results that simulate a known real aquifer case are presented. The results explore the computational performance of the chosen stochastic optimization method as well as its ability to manage freshwater pumping in real aquifer environments. PMID:27689362
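
    The penalty-coupled ALOPEX idea described in this record can be sketched in a few lines. The "aquifer" below is deliberately a mock stand-in (a saltwater toe position proportional to total pumping), not the sharp-interface analytical solution the study uses; only the correlation-based update rule is the generic ALOPEX step.

```python
import random

def alopex_maximize(f, x0, step=0.05, iters=2000, seed=1):
    """ALOPEX-style correlation ascent: each parameter keeps moving in
    a direction whose last change correlated with an objective
    increase, plus a small random kick to keep exploring."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best_x, best_f = list(x), fx
    dx = [rng.choice((-step, step)) for _ in x]
    for _ in range(iters):
        x_new = [xi + di for xi, di in zip(x, dx)]
        f_new = f(x_new)
        df = f_new - fx
        # keep the sign of moves that raised f, flip the others
        dx = [(step if di * df > 0 else -step) + rng.uniform(-step / 4, step / 4)
              for di in dx]
        x, fx = x_new, f_new
        if fx > best_f:
            best_x, best_f = list(x), fx
    return best_x, best_f

def pumping_objective(q):
    """Mock management objective: total pumping minus a penalty once a
    mock saltwater 'toe' (proportional to total pumping) passes a safe
    position; a stand-in for the analytical sharp-interface solution."""
    total = sum(q)
    toe = 0.4 * total
    return total - 1e3 * max(0.0, toe - 1.0) ** 2

best_q, best_f = alopex_maximize(pumping_objective, [0.1, 0.1, 0.1])
```

    With these mock numbers the ascent should settle near the penalty boundary (total pumping around 2.5); in the study's setting the objective would come from the analytic model and real well positions instead.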

  7. Signal processing using sparse derivatives with applications to chromatograms and ECG

    NASA Astrophysics Data System (ADS)

    Ning, Xiaoran

    In this thesis, we investigate sparsity in the derivative domain. In particular, we focus on signals that possess up to Mth-order (M > 0) sparse derivatives. Effort is put into formulating proper penalty functions and optimization problems that capture properties related to sparse derivatives, and into searching for fast, computationally efficient solvers. The algorithms are then applied to two real-world applications. In the first application, we provide an algorithm that jointly addresses the problems of chromatogram baseline correction and noise reduction. The series of chromatogram peaks is modeled as sparse with sparse derivatives, and the baseline is modeled as a low-pass signal. A convex optimization problem is formulated so as to encapsulate these non-parametric models. To account for the positivity of chromatogram peaks, an asymmetric penalty function is utilized alongside symmetric penalty functions. A robust, computationally efficient, iterative algorithm is developed that is guaranteed to converge to the unique optimal solution. The approach, termed Baseline Estimation And Denoising with Sparsity (BEADS), is evaluated and compared with two state-of-the-art methods using both simulated and real chromatogram data, with promising results. In the second application, a novel electrocardiography (ECG) enhancement algorithm, also based on sparse derivatives, is designed. In real medical environments, ECG signals are often contaminated by various kinds of noise or artifacts, for example, morphological changes due to motion artifact and non-stationary noise due to muscular contraction (EMG). Some of these contaminations severely affect the usefulness of ECG signals, especially when computer-aided algorithms are utilized.
By solving the proposed convex l1 optimization problem, artifacts are reduced by modeling the clean ECG signal as a sum of two signals whose second- and third-order derivatives (differences) are sparse, respectively. Finally, the algorithm is applied to a QRS detection system and validated using the MIT-BIH Arrhythmia database (109,452 annotations), resulting in a sensitivity of Se = 99.87% and a positive predictivity of +P = 99.88%.
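
    The sparse-derivative formulation at the heart of both applications can be illustrated with plain 1-D total-variation denoising (least squares plus an l1 penalty on first differences), solved here by projected gradient ascent on the dual problem. This is a generic sketch, not the BEADS algorithm itself, which adds asymmetric penalties and a low-pass baseline model.

```python
import math

def tv_denoise(y, lam, iters=500, tau=0.25):
    """1-D total-variation denoising: minimize 0.5*||x - y||^2 +
    lam*||Dx||_1 via projected gradient ascent on the dual; a step
    tau <= 1/4 guarantees convergence."""
    n = len(y)
    z = [0.0] * (n - 1)                   # one dual variable per difference
    def primal(z):
        # x = y - D^T z, where (D x)_i = x_{i+1} - x_i
        return [y[i] - ((z[i - 1] if i > 0 else 0.0)
                        - (z[i] if i < n - 1 else 0.0)) for i in range(n)]
    for _ in range(iters):
        x = primal(z)
        z = [max(-lam, min(lam, z[i] + tau * (x[i + 1] - x[i])))
             for i in range(n - 1)]
    return primal(z)

def tv(v):
    return sum(abs(b - a) for a, b in zip(v, v[1:]))

# noisy step signal: a baseline jump plus oscillatory noise
noisy = [(0.0 if i < 8 else 4.0) + 0.3 * math.sin(3.0 * i) for i in range(16)]
smooth = tv_denoise(noisy, lam=0.5)
```

    The dual update preserves the signal mean exactly (the correction D^T z telescopes to zero), while the clipped dual variables enforce the sparse-first-derivative prior.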

  8. Atmospheric model development in support of SEASAT. Volume 1: Summary of findings

    NASA Technical Reports Server (NTRS)

    Kesel, P. G.

    1977-01-01

    Atmospheric analysis and prediction models of varying (grid) resolution were developed. The models were tested using real observational data for the purpose of assessing the impact of grid resolution on short range numerical weather prediction. The discretionary model procedures were examined so that the computational viability of SEASAT data might be enhanced during the conduct of (future) sensitivity tests. The analysis effort covers: (1) examining the procedures for allowing data to influence the analysis; (2) examining the effects of varying the weights in the analysis procedure; (3) testing and implementing procedures for solving the minimization equation in an optimal way; (4) describing the impact of grid resolution on analysis; and (5) devising and implementing numerous practical solutions to analysis problems, generally.

  9. Self-Tuning of Design Variables for Generalized Predictive Control

    NASA Technical Reports Server (NTRS)

    Lin, Chaung; Juang, Jer-Nan

    2000-01-01

    Three techniques are introduced to determine the order and control weighting for the design of a generalized predictive controller. These techniques are based on the application of fuzzy logic, genetic algorithms, and simulated annealing to conduct an optimal search on specific performance indexes or objective functions. Fuzzy logic is found to be feasible for real-time and on-line implementation due to its smooth and quick convergence. On the other hand, genetic algorithms and simulated annealing are applicable for initial estimation of the model order and control weighting, and for final fine-tuning within a small region of the solution space. Several numerical simulations for a multiple-input and multiple-output system are given to illustrate the techniques developed in this paper.
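
    Of the three search techniques, simulated annealing is the easiest to sketch. The objective below is a hypothetical surrogate for a GPC performance index over (model order, control weighting); only the annealing loop itself is the standard method.

```python
import math
import random

def anneal(cost, neighbor, x0, t0=1.0, cooling=0.995, iters=3000, seed=7):
    """Generic simulated annealing: accept worse candidates with
    probability exp(-dE/T), so the search can escape local minima."""
    rng = random.Random(seed)
    x, e = x0, cost(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(iters):
        cand = neighbor(x, rng)
        ec = cost(cand)
        if ec < e or rng.random() < math.exp(-(ec - e) / t):
            x, e = cand, ec
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling                      # geometric cooling schedule
    return best_x, best_e

# hypothetical surrogate for a GPC performance index over
# (model order p, control weighting w), with its optimum at p=4, w=0.2
def design_cost(d):
    p, w = d
    return (p - 4) ** 2 + 10.0 * (w - 0.2) ** 2

def design_neighbor(d, rng):
    p, w = d
    return (max(1, p + rng.choice((-1, 0, 1))),
            max(0.0, w + rng.uniform(-0.05, 0.05)))

best, cost_best = anneal(design_cost, design_neighbor, (10, 1.0))
```

    A genetic algorithm would replace the single-candidate loop with a population and crossover/mutation operators, but the mixed integer/continuous design space is handled the same way.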

  10. Optimization of Stripping Voltammetric Sensor by a Back Propagation Artificial Neural Network for the Accurate Determination of Pb(II) in the Presence of Cd(II).

    PubMed

    Zhao, Guo; Wang, Hui; Liu, Gang; Wang, Zhiqiang

    2016-09-21

    An easy, but effective, method has been proposed to detect and quantify Pb(II) in the presence of Cd(II) based on a Bi/glassy carbon electrode (Bi/GCE) with the combination of a back propagation artificial neural network (BP-ANN) and square wave anodic stripping voltammetry (SWASV), without further electrode modification. The effects of Cd(II) at different concentrations on the stripping responses of Pb(II) were studied. The results indicate that the presence of Cd(II) reduces the prediction precision of a direct calibration model. Therefore, a two-input, one-output BP-ANN was built for the optimization of the stripping voltammetric sensor; the network accounts for the combined effects of Cd(II) and Pb(II) on the SWASV detection of Pb(II) and establishes the nonlinear relationship between the stripping peak currents of Pb(II) and Cd(II) and the concentration of Pb(II). The key parameters of the BP-ANN and the factors affecting the SWASV detection of Pb(II) were optimized. The prediction performance of the direct calibration model and the BP-ANN model were tested with regard to the mean absolute error (MAE), root mean square error (RMSE), average relative error (ARE), and correlation coefficient. The results proved that the BP-ANN model exhibited higher prediction accuracy than the direct calibration model. Finally, a real-sample analysis was performed to determine trace Pb(II) in several soil specimens, with satisfactory results.
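
    A minimal two-input, one-output backpropagation network of the kind described can be sketched as follows. The "calibration surface" here is invented for illustration (a linear response plus a nonlinear interference term), not the paper's measured SWASV data.

```python
import math
import random

class TinyBPNN:
    """Minimal 2-input, 1-output backpropagation network with one
    tanh hidden layer."""
    def __init__(self, hidden=6, seed=3):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-0.5, 0.5) for _ in range(2)]
                   for _ in range(hidden)]
        self.b1 = [0.0] * hidden
        self.w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
        self.b2 = 0.0

    def forward(self, x):
        self.h = [math.tanh(ws[0] * x[0] + ws[1] * x[1] + b)
                  for ws, b in zip(self.w1, self.b1)]
        return sum(w * h for w, h in zip(self.w2, self.h)) + self.b2

    def train_step(self, x, target, lr=0.05):
        err = self.forward(x) - target
        for j, h in enumerate(self.h):
            grad_h = err * self.w2[j] * (1.0 - h * h)  # backprop through tanh
            self.w2[j] -= lr * err * h
            self.w1[j][0] -= lr * grad_h * x[0]
            self.w1[j][1] -= lr * grad_h * x[1]
            self.b1[j] -= lr * grad_h
        self.b2 -= lr * err

# invented interference surface: Pb(II) concentration as a nonlinear
# function of normalized Pb and Cd stripping peak currents
data = [((ip / 5.0, icd / 5.0),
         0.8 * ip / 5.0 - 0.3 * icd / 5.0 + 0.4 * (ip / 5.0) * (icd / 5.0))
        for ip in range(6) for icd in range(6)]

net = TinyBPNN()
def mse():
    return sum((net.forward(x) - t) ** 2 for x, t in data) / len(data)

before = mse()
for _ in range(400):
    for x, t in data:
        net.train_step(x, t)
after = mse()
```

    Training drives the fit error well below that of the untrained network, mirroring how the paper's BP-ANN outperforms a direct (single-input) calibration when Cd(II) interferes.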

  11. A Theoretical and Empirical Integrated Method to Select the Optimal Combined Signals for Geometry-Free and Geometry-Based Three-Carrier Ambiguity Resolution.

    PubMed

    Zhao, Dongsheng; Roberts, Gethin Wyn; Lau, Lawrence; Hancock, Craig M; Bai, Ruibin

    2016-11-16

    Twelve GPS Block IIF satellites, out of the current constellation, can transmit three-frequency signals (L1, L2, L5). Taking advantage of these signals, Three-Carrier Ambiguity Resolution (TCAR) is expected to bring much benefit for ambiguity resolution. One of the research areas is to find the optimal combined signals for better ambiguity resolution in geometry-free (GF) and geometry-based (GB) mode. However, existing research selects the signals through either pure theoretical analysis or testing with simulated data, which might be biased, as real observation conditions can differ from theoretical prediction or simulation. In this paper, we propose a theoretical and empirical integrated method, which first selects the possible optimal combined signals in theory and then refines these signals with real triple-frequency GPS data observed at eleven baselines of different lengths. An interpolation technique is also adopted in order to show changes in the AR performance with increasing baseline length. The results show that the AR success rate can be improved by 3% in GF mode and 8% in GB mode at certain intervals of baseline length. Therefore, TCAR can perform better by adopting the combined signals proposed in this paper when the baseline meets the length condition.
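
    The combined-signal arithmetic behind such theoretical screening is standard: for an integer phase combination i*L1 + j*L2 + k*L5, the combined frequency, wavelength, and first-order ionospheric scale factor follow directly from the carrier frequencies. A sketch:

```python
C = 299792458.0                                      # speed of light, m/s
F = {"L1": 1575.42, "L2": 1227.60, "L5": 1176.45}    # GPS carriers, MHz

def combo(i, j, k):
    """Wavelength (m) and first-order ionospheric scale factor (relative
    to the L1 delay) of the integer combination i*L1 + j*L2 + k*L5."""
    f1, f2, f5 = F["L1"], F["L2"], F["L5"]
    fc = i * f1 + j * f2 + k * f5                    # combined frequency, MHz
    lam = C / (fc * 1e6)                             # combined wavelength, m
    iono = f1 ** 2 * (i / f1 + j / f2 + k / f5) / fc
    return lam, iono

wl_12, wl_iono = combo(1, -1, 0)    # classic L1-L2 wide lane
ewl, _ = combo(0, 1, -1)            # extra-wide lane L2-L5
```

    The wide lane's ~0.86 m wavelength and the ~5.9 m extra-wide lane are why such combinations make integer ambiguities easier to fix; the theoretical screening then trades wavelength against ionosphere and noise amplification.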

  12. Simulation-Based Approach for Site-Specific Optimization of Hydrokinetic Turbine Arrays

    NASA Astrophysics Data System (ADS)

    Sotiropoulos, F.; Chawdhary, S.; Yang, X.; Khosronejad, A.; Angelidis, D.

    2014-12-01

    A simulation-based approach has been developed to enable site-specific optimization of tidal and current turbine arrays in real-life waterways. The computational code is based on the St. Anthony Falls Laboratory Virtual StreamLab (VSL3D), which is able to carry out high-fidelity simulations of turbulent flow and sediment transport processes in rivers and streams taking into account the arbitrary geometrical complexity characterizing natural waterways. The computational framework can be used either in turbine-resolving mode, to take into account all geometrical details of the turbine, or with the turbines parameterized as actuator disks or actuator lines. Locally refined grids are employed to dramatically increase the resolution of the simulation and enable efficient simulations of multi-turbine arrays. Turbine/sediment interactions are simulated using the coupled hydro-morphodynamic module of VSL3D. The predictive capabilities of the resulting computational framework will be demonstrated by applying it to simulate turbulent flow past a tri-frame configuration of hydrokinetic turbines in a rigid-bed turbulent open channel flow as well as turbines mounted on mobile bed open channels to investigate turbine/sediment interactions. The utility of the simulation-based approach for guiding the optimal development of turbine arrays in real-life waterways will also be discussed and demonstrated. This work was supported by NSF grant IIP-1318201. Simulations were carried out at the Minnesota Supercomputing Institute.

  13. A Theoretical and Empirical Integrated Method to Select the Optimal Combined Signals for Geometry-Free and Geometry-Based Three-Carrier Ambiguity Resolution

    PubMed Central

    Zhao, Dongsheng; Roberts, Gethin Wyn; Lau, Lawrence; Hancock, Craig M.; Bai, Ruibin

    2016-01-01

    Twelve GPS Block IIF satellites, out of the current constellation, can transmit three-frequency signals (L1, L2, L5). Taking advantage of these signals, Three-Carrier Ambiguity Resolution (TCAR) is expected to bring much benefit for ambiguity resolution. One of the research areas is to find the optimal combined signals for better ambiguity resolution in geometry-free (GF) and geometry-based (GB) mode. However, existing research selects the signals through either pure theoretical analysis or testing with simulated data, which might be biased, as real observation conditions can differ from theoretical prediction or simulation. In this paper, we propose a theoretical and empirical integrated method, which first selects the possible optimal combined signals in theory and then refines these signals with real triple-frequency GPS data observed at eleven baselines of different lengths. An interpolation technique is also adopted in order to show changes in the AR performance with increasing baseline length. The results show that the AR success rate can be improved by 3% in GF mode and 8% in GB mode at certain intervals of baseline length. Therefore, TCAR can perform better by adopting the combined signals proposed in this paper when the baseline meets the length condition. PMID:27854324

  14. Refining Mild-to-Moderate Alzheimer Disease Screening: A Tool for Clinicians.

    PubMed

    Del Campo, Natalia; Cesari, Matteo; Canevelli, Marco; Hoogendijk, Emiel O; Lilamand, Matthieu; Kelaiditi, Eirini; Soto, Maria E; Ousset, Pierre-Jean; Weiner, Michael W; Andrieu, Sandrine; Vellas, Bruno

    2016-10-01

    Recent evidence suggests that a substantial minority of people clinically diagnosed with probable Alzheimer disease (AD) in fact do not fulfill the neuropathological criteria for the disease. A clinical hallmark of these phenocopies of AD is that these individuals tend to remain cognitively stable for extended periods of time, in contrast to their peers with confirmed AD who show a progressive decline. We aimed to examine the prevalence of patients clinically diagnosed with mild-to-moderate AD who do not experience the expected clinically significant cognitive decline and identify markers easily available in routine medical practice predictive of a stable cognitive prognosis in this population. Data were obtained from two independent, longitudinal, observational multicenter studies in patients with mild-to-moderate AD. The two studies were the European "Impact of Cholinergic Treatment Use" (ICTUS) and the French "REseau sur la maladie d'Alzheimer FRançais" (REAL.FR). We used prospective data of 756 patients enrolled in ICTUS and 340 enrolled in REAL.FR. A prediction rule of cognitive decline was derived on ICTUS using classification and regression tree analysis and then cross-validated on REAL.FR. A range of demographic, clinical and cognitive variables were tested as predictor variables. Overall, 27.9% of patients in ICTUS and 20.9% in REAL.FR did not decline over 2 years. We identified optimized cut-points on the verbal memory items of the Alzheimer Disease Assessment Scale-Cognitive Subscale capable of classifying patients at baseline into those who went on to decline and those who remained stable or improved over the duration of the trial. The application of this simple rule would allow the identification of dementia cases where a more detailed differential diagnostic examination (eg, with biomarkers) is warranted. These findings are promising toward the refinement of AD screening in the clinic. 
For a further optimization of our classification rule, we encourage others to use our methodological approach on other episodic memory assessment tools designed to detect even small cognitive changes in patients with AD. Copyright © 2016 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
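
    The core of a single-split classification rule of this kind is an exhaustive scan for the accuracy-maximizing cut-point on a baseline score. A sketch on invented data (both the scores and the rule direction are hypothetical, not the ADAS-Cog items used in the study):

```python
def best_cutpoint(scores, declined):
    """Exhaustively scan thresholds on a baseline score and return the
    one that best separates decliners from stable patients."""
    best_t, best_acc = None, -1.0
    for t in sorted(set(scores)):
        # hypothetical rule direction: score >= t predicts decline
        correct = sum((s >= t) == d for s, d in zip(scores, declined))
        acc = correct / len(scores)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# invented memory-error scores; higher score = more errors at baseline
scores = [1, 2, 2, 3, 5, 6, 7, 8]
declined = [False, False, False, False, True, True, True, True]
cut, acc = best_cutpoint(scores, declined)
```

    A classification-and-regression-tree analysis, as used in the study, repeats this scan over candidate variables and recursively on the resulting subgroups; cross-validation on an independent cohort then guards against an overfitted threshold.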

  15. Relative Binding Free Energy Calculations in Drug Discovery: Recent Advances and Practical Considerations.

    PubMed

    Cournia, Zoe; Allen, Bryce; Sherman, Woody

    2017-12-26

    Accurate in silico prediction of protein-ligand binding affinities has been a primary objective of structure-based drug design for decades due to the putative value it would bring to the drug discovery process. However, computational methods have historically failed to deliver value in real-world drug discovery applications due to a variety of scientific, technical, and practical challenges. Recently, a family of approaches commonly referred to as relative binding free energy (RBFE) calculations, which rely on physics-based molecular simulations and statistical mechanics, have shown promise in reliably generating accurate predictions in the context of drug discovery projects. This advance arises from accumulating developments in the underlying scientific methods (decades of research on force fields and sampling algorithms) coupled with vast increases in computational resources (graphics processing units and cloud infrastructures). Mounting evidence from retrospective validation studies, blind challenge predictions, and prospective applications suggests that RBFE simulations can now predict the affinity differences for congeneric ligands with sufficient accuracy and throughput to deliver considerable value in hit-to-lead and lead optimization efforts. Here, we present an overview of current RBFE implementations, highlighting recent advances and remaining challenges, along with examples that emphasize practical considerations for obtaining reliable RBFE results. We focus specifically on relative binding free energies because the calculations are less computationally intensive than absolute binding free energy (ABFE) calculations and map directly onto the hit-to-lead and lead optimization processes, where the prediction of relative binding energies between a reference molecule and new ideas (virtual molecules) can be used to prioritize molecules for synthesis. 
We describe the critical aspects of running RBFE calculations, from both theoretical and applied perspectives, using a combination of retrospective literature examples and prospective studies from drug discovery projects. This work is intended to provide a contemporary overview of the scientific, technical, and practical issues associated with running relative binding free energy simulations, with a focus on real-world drug discovery applications. We offer guidelines for improving the accuracy of RBFE simulations, especially for challenging cases, and emphasize unresolved issues that could be improved by further research in the field.

  16. Multiple model analysis with discriminatory data collection (MMA-DDC): A new method for improving measurement selection

    NASA Astrophysics Data System (ADS)

    Kikuchi, C.; Ferre, P. A.; Vrugt, J. A.

    2011-12-01

    Hydrologic models are developed, tested, and refined based on the ability of those models to explain available hydrologic data. The optimization of model performance based upon mismatch between model outputs and real world observations has been extensively studied. However, identification of plausible models is sensitive not only to the models themselves - including model structure and model parameters - but also to the location, timing, type, and number of observations used in model calibration. Therefore, careful selection of hydrologic observations has the potential to significantly improve the performance of hydrologic models. In this research, we seek to reduce prediction uncertainty through optimization of the data collection process. A new tool - multiple model analysis with discriminatory data collection (MMA-DDC) - was developed to address this challenge. In this approach, multiple hydrologic models are developed and treated as competing hypotheses. Potential new data are then evaluated on their ability to discriminate between competing hypotheses. MMA-DDC is well-suited for use in recursive mode, in which new observations are continuously used in the optimization of subsequent observations. This new approach was applied to a synthetic solute transport experiment, in which ranges of parameter values constitute the multiple hydrologic models, and model predictions are calculated using likelihood-weighted model averaging. MMA-DDC was used to determine the optimal location, timing, number, and type of new observations. From comparison with an exhaustive search of all possible observation sequences, we find that MMA-DDC consistently selects observations which lead to the highest reduction in model prediction uncertainty. We conclude that using MMA-DDC to evaluate potential observations may significantly improve the performance of hydrologic models while reducing the cost associated with collecting new data.
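
    The selection principle, choosing the observation that best discriminates between competing model hypotheses, can be sketched by scoring each candidate with the likelihood-weighted variance of the model predictions. This is one plausible discrimination measure, and the exponential-decay "models" below are illustrative only:

```python
import math

def most_discriminatory(models, weights, candidates, predict):
    """Return the candidate observation whose model predictions
    disagree most (likelihood-weighted variance), i.e. the one that
    best discriminates between the competing hypotheses."""
    def spread(c):
        preds = [predict(m, c) for m in models]
        mean = sum(w * p for w, p in zip(weights, preds))
        return sum(w * (p - mean) ** 2 for w, p in zip(weights, preds))
    return max(candidates, key=spread)

# toy hypothesis set: solute concentration decays as exp(-k*t), with
# competing decay rates k that are currently equally likely
models = [0.1, 0.5, 1.0]
weights = [1 / 3, 1 / 3, 1 / 3]
times = [0.0, 1.0, 2.0, 5.0, 10.0]
best_time = most_discriminatory(models, weights, times,
                                lambda k, t: math.exp(-k * t))
```

    Sampling at t = 0 is useless (all hypotheses agree) and very late times are nearly as bad (all have decayed); the intermediate time where predictions diverge most is selected. In recursive use, each new observation updates the weights before the next candidate is scored.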

  17. Planning intensive care unit design using computer simulation modeling: optimizing integration of clinical, operational, and architectural requirements.

    PubMed

    OʼHara, Susan

    2014-01-01

    Nurses have increasingly been regarded as critical members of the planning team as architects recognize their knowledge and value. But the nurses' role as knowledge experts can be expanded to leading efforts to integrate the clinical, operational, and architectural expertise through simulation modeling. Simulation modeling allows for the optimal merge of multifactorial data to understand the current state of the intensive care unit and predict future states. Nurses can champion the simulation modeling process and reap the benefits of a cost-effective way to test new designs, processes, staffing models, and future programming trends prior to implementation. Simulation modeling is an evidence-based planning approach, a standard, for integrating the sciences with real client data, to offer solutions for improving patient care.

  18. Data mining for water resource management part 2 - methods and approaches to solving contemporary problems

    USGS Publications Warehouse

    Roehl, Edwin A.; Conrads, Paul

    2010-01-01

    This is the second of two papers that describe how data mining can aid natural-resource managers with the difficult problem of controlling the interactions between hydrologic and man-made systems. Data mining is a new science that assists scientists in converting large databases into knowledge, and is uniquely able to leverage the large amounts of real-time, multivariate data now being collected for hydrologic systems. Part 1 gives a high-level overview of data mining, and describes several applications that have addressed major water resource issues in South Carolina. This Part 2 paper describes how various data mining methods are integrated to produce predictive models for controlling surface- and groundwater hydraulics and quality. The methods include: - signal processing to remove noise and decompose complex signals into simpler components; - time series clustering that optimally groups hundreds of signals into "classes" that behave similarly for data reduction and (or) divide-and-conquer problem solving; - classification which optimally matches new data to behavioral classes; - artificial neural networks which optimally fit multivariate data to create predictive models; - model response surface visualization that greatly aids in understanding data and physical processes; and, - decision support systems that integrate data, models, and graphics into a single package that is easy to use.

  19. A self optimizing synthetic organic reactor system using real-time in-line NMR spectroscopy

    PubMed Central

    Sans, Victor; Porwol, Luzian; Dragone, Vincenza

    2015-01-01

    A configurable platform for synthetic chemistry incorporating an in-line benchtop NMR that is capable of monitoring and controlling organic reactions in real-time is presented. The platform is controlled via a modular LabView software control system for the hardware, NMR, data analysis and feedback optimization. Using this platform we report the real-time advanced structural characterization of reaction mixtures, including 19F, 13C, DEPT, 2D NMR spectroscopy (COSY, HSQC and 19F-COSY) for the first time. Finally, the potential of this technique is demonstrated through the optimization of a catalytic organic reaction in real-time, showing its applicability to self-optimizing systems using criteria such as stereoselectivity, multi-nuclear measurements or 2D correlations. PMID:29560211

  20. Combining multiple earthquake models in real time for earthquake early warning

    USGS Publications Warehouse

    Minson, Sarah E.; Wu, Stephen; Beck, James L; Heaton, Thomas H.

    2017-01-01

    The ultimate goal of earthquake early warning (EEW) is to provide local shaking information to users before the strong shaking from an earthquake reaches their location. This is accomplished by operating one or more real‐time analyses that attempt to predict shaking intensity, often by estimating the earthquake’s location and magnitude and then predicting the ground motion from that point source. Other EEW algorithms use finite rupture models or may directly estimate ground motion without first solving for an earthquake source. EEW performance could be improved if the information from these diverse and independent prediction models could be combined into one unified, ground‐motion prediction. In this article, we set the forecast shaking at each location as the common ground to combine all these predictions and introduce a Bayesian approach to creating better ground‐motion predictions. We also describe how this methodology could be used to build a new generation of EEW systems that provide optimal decisions customized for each user based on the user’s individual false‐alarm tolerance and the time necessary for that user to react.
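
    For independent Gaussian predictions, the Bayesian combination reduces to a precision-weighted average, which conveys the flavor of the approach (the numbers below are invented):

```python
def fuse_gaussians(means, variances):
    """Precision-weighted (Bayesian) combination of independent
    Gaussian predictions of the same quantity."""
    w = [1.0 / v for v in variances]          # precisions
    p = sum(w)
    mean = sum(m * wi for m, wi in zip(means, w)) / p
    return mean, 1.0 / p                      # combined mean and variance

# invented example: a point-source estimate, a finite-fault estimate,
# and a source-free estimate of (log) shaking intensity at one site
mu, var = fuse_gaussians([4.2, 4.8, 4.5], [0.30, 0.20, 0.60])
```

    The combined variance is smaller than any single prediction's, which is the payoff of fusing the independent EEW algorithms; a user-specific alert decision can then threshold this distribution against the individual false-alarm tolerance.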

  1. BEM-based simulation of lung respiratory deformation for CT-guided biopsy.

    PubMed

    Chen, Dong; Chen, Weisheng; Huang, Lipeng; Feng, Xuegang; Peters, Terry; Gu, Lixu

    2017-09-01

    Accurate and real-time prediction of lung and lung tumor deformation during respiration is an important consideration when performing a peripheral biopsy procedure. However, most existing work has focused on offline whole-lung simulation using 4D image data, which is not applicable to real-time image-guided biopsy with limited image resources. In this paper, we propose a patient-specific biomechanical model based on the boundary element method (BEM), computed from CT images, to estimate the respiratory motion of the local target lesion region, vessel tree, and lung surface for real-time biopsy guidance. The approach pre-computes various BEM parameters to meet the requirement of real-time lung motion simulation. The boundary condition at the end-inspiratory phase is obtained using nonparametric discrete registration with convex optimization, and the simulation of the internal tissue is achieved by applying a tetrahedron-based interpolation method that depends on expert-determined feature points on the vessel tree model. A reference needle is tracked to update the simulated lung motion during biopsy guidance. We evaluate the model by applying it to respiratory motion estimation for ten patients. The average symmetric surface distance (ASSD) and the mean target registration error (TRE) are employed to evaluate the proposed model. Results reveal that it is possible to predict the lung motion with an ASSD of [Formula: see text] mm and a mean TRE of [Formula: see text] mm at largest over the entire respiratory cycle. In the CT-/electromagnetic-guided biopsy experiment, the whole process was assisted by our BEM model, and the final puncture errors in two studies were 3.1 and 2.0 mm, respectively. The experimental results reveal that both the accuracy of the simulation and its real-time performance meet the demands of clinical biopsy guidance.

  2. Ant colony optimization algorithm for interpretable Bayesian classifiers combination: application to medical predictions.

    PubMed

    Bouktif, Salah; Hanna, Eileen Marie; Zaki, Nazar; Abu Khousa, Eman

    2014-01-01

    Prediction and classification techniques have been well studied by machine learning researchers and developed for several real-world problems. However, the level of acceptance and success of prediction models is still below expectation due to difficulties such as the low performance of prediction models when they are applied in different environments. This problem has been addressed by many researchers, mainly from the machine learning community. A second problem, principally raised by model users in different communities (managers, economists, engineers, biologists, medical practitioners, etc.), is the prediction models' interpretability, that is, the ability of a model to explain its predictions and exhibit the causal relationships between the inputs and the outputs. In the case of classification, a successful way to alleviate low performance is to use ensemble classifiers, an intuitive strategy that activates collaboration between different classifiers towards better performance than that of an individual classifier. Unfortunately, ensemble classifier methods do not take into account the interpretability of the final classification outcome; they even worsen the original interpretability of the individual classifiers. In this paper we propose a novel implementation of the classifier-combination approach that not only promotes overall performance but also preserves the interpretability of the resulting model. We propose a solution based on Ant Colony Optimization, tailored for the case of Bayesian classifiers. We validate our proposed solution with case studies from the medical domain, namely heart disease and cardiotocography-based predictions, problems where interpretability is critical to making appropriate clinical decisions. The datasets, prediction models, and software tool, together with supplementary materials, are available at http://faculty.uaeu.ac.ae/salahb/ACO4BC.htm.

  3. Specification and Prediction of the Radiation Environment Using Data Assimilative VERB code

    NASA Astrophysics Data System (ADS)

    Shprits, Yuri; Kellerman, Adam

    2016-07-01

    We discuss how data assimilation can be used for the reconstruction of long-term evolution, benchmarking of the physics-based codes, and improvement of the nowcasting and forecasting of the radiation belts and ring current. We also discuss advanced data assimilation methods such as parameter estimation and smoothing. We present a number of data assimilation applications using the VERB 3D code. The 3D data assimilative VERB allows us to blend together data from GOES, RBSP A, and RBSP B. 1) The model with data assimilation allows us to propagate data to different pitch angles, energies, and L-shells and blends them together with the physics-based VERB code in an optimal way. We illustrate how to use this capability for the analysis of previous events and for obtaining a global and statistical view of the system. 2) The model predictions strongly depend on the initial conditions that are set up for the model; the model is only as good as the initial conditions it uses. To produce the best possible initial conditions, data from different sources (GOES, RBSP A and B, and our empirical model predictions based on ACE) are blended together in an optimal way by means of data assimilation, as described above. The resulting initial conditions have no gaps, which allows us to make more accurate predictions. A real-time prediction framework operating on our website, based on GOES, RBSP A and B, and ACE data and the 3D VERB code, is presented and discussed.
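
    The blending step that data assimilation performs at each state element can be illustrated with a scalar Kalman update (a textbook sketch, not the VERB code's actual filter):

```python
def assimilate(x_model, p_model, obs, r_obs):
    """Scalar Kalman update: optimally blend a model forecast
    (variance p_model) with an observation (variance r_obs)."""
    k = p_model / (p_model + r_obs)      # Kalman gain
    x = x_model + k * (obs - x_model)    # analysis (updated) state
    p = (1.0 - k) * p_model              # analysis variance
    return x, p

# invented numbers: an uncertain model estimate of the particle flux,
# corrected by a more precise satellite observation
x, p = assimilate(x_model=100.0, p_model=4.0, obs=110.0, r_obs=1.0)
```

    The analysis is pulled toward the more precise source and its variance shrinks below both inputs, which is why assimilated initial conditions without data gaps yield better predictions than either the raw observations or the free-running model.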

  4. Toward the Real-Time Tsunami Parameters Prediction

    NASA Astrophysics Data System (ADS)

    Lavrentyev, Mikhail; Romanenko, Alexey; Marchuk, Andrey

    2013-04-01

    Today, a wide well-developed system of deep ocean tsunami detectors operates over the Pacific. Direct measurements of tsunami-wave time series are available. However, tsunami-warning systems fail to predict basic parameters of tsunami waves on time. Dozens examples could be provided. In our view, the lack of computational power is the main reason of these failures. At the same time, modern computer technologies such as, GPU (graphic processing unit) and FPGA (field programmable gates array), can dramatically improve data processing performance, which may enhance timely tsunami-warning prediction. Thus, it is possible to address the challenge of real-time tsunami forecasting for selected geo regions. We propose to use three new techniques in the existing tsunami warning systems to achieve real-time calculation of tsunami wave parameters. First of all, measurement system (DART buoys location, e.g.) should be optimized (both in terms of wave arriving time and amplitude parameter). The corresponding software application exists today and is ready for use [1]. We consider the example of the coastal line of Japan. Numerical tests show that optimal installation of only 4 DART buoys (accounting the existing sea bed cable) will reduce the tsunami wave detection time to only 10 min after an underwater earthquake. Secondly, as was shown by this paper authors, the use of GPU/FPGA technologies accelerates the execution of the MOST (method of splitting tsunami) code by 100 times [2]. Therefore, tsunami wave propagation over the ocean area 2000*2000 km (wave propagation simulation: time step 10 sec, recording each 4th spatial point and 4th time step) could be calculated at: 3 sec with 4' mesh 50 sec with 1' mesh 5 min with 0.5' mesh The algorithm to switch from coarse mesh to the fine grain one is also available. Finally, we propose the new algorithm for tsunami source parameters determination by real-time processing the time series, obtained at DART. 
It is possible to approximate the measured time series by a linear combination of synthetic marigrams; the coefficients of this linear combination are calculated by orthogonal decomposition. The algorithm is very fast and demonstrates good accuracy. Summing up, for the coastal line of Japan, wave height evaluation will be available 12-14 minutes after the earthquake, before the wave approaches the nearest shore point (which usually takes about 20 minutes). [1] Astrakova, A.S., Bannikov, D.V., Cherny, S.G., Lavrentiev, M.M., "The determination of the optimal sensors' location using genetic algorithm," Proceedings of the 3rd Nordic EMW Summer School, Turku, Finland, June 2009, TUSC General Publications, N 53, pp. 5-22. [2] Lavrentiev, M. Jr., Romanenko, A., "Modern Hardware Solutions to Speed Up Tsunami Simulation Codes," Geophysical Research Abstracts, Vol. 12, EGU2010-3835, 2010.
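The final step described above, approximating a measured DART time series by a linear combination of synthetic marigrams, amounts to a linear least-squares problem. A minimal sketch with invented waveforms and coefficients (nothing here is the authors' data):

```python
import numpy as np

# Hypothetical sketch: recover source coefficients by fitting a measured
# DART time series as a linear combination of precomputed synthetic
# marigrams (a linear least-squares / orthogonal-decomposition step).
rng = np.random.default_rng(0)
t = np.linspace(0, 600, 301)                       # 10 min of readings

# three synthetic marigrams from hypothetical unit sources
marigrams = np.stack(
    [np.sin(2 * np.pi * t / p) * np.exp(-t / 400) for p in (120, 180, 240)],
    axis=1)

true_coef = np.array([0.7, 0.0, 0.3])
measured = marigrams @ true_coef + 0.01 * rng.standard_normal(t.size)

# QR-based least squares recovers the combination coefficients
coef, *_ = np.linalg.lstsq(marigrams, measured, rcond=None)
print(np.round(coef, 2))
```

In practice the marigram library would be precomputed offline for unit sources, so only this small solve runs in real time.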

  5. Improved Short-Term Clock Prediction Method for Real-Time Positioning.

    PubMed

    Lv, Yifei; Dai, Zhiqiang; Zhao, Qile; Yang, Sheng; Zhou, Jinning; Liu, Jingnan

    2017-06-06

    The application of real-time precise point positioning (PPP) requires real-time precise orbit and clock products, which must be predicted over short intervals to compensate for communication delays or data gaps. Unlike orbit corrections, clock corrections are difficult to model and predict. The widely used linear model hardly fits long-periodic trends with a small data set and exhibits significant accuracy degradation in real-time prediction when a large data set is used. This study proposes a new prediction model for maintaining short-term satellite clocks to meet the high-precision requirements of real-time clocks and to provide clock extrapolation without interrupting the real-time data stream. Fast Fourier transform (FFT) is used to analyze the linear prediction residuals of real-time clocks. The periodic terms obtained through FFT are adopted in a sliding-window prediction to achieve a significant improvement in short-term prediction accuracy. This study also analyzes and compares the accuracy of short-term forecasts (less than 3 h) using observations of different lengths. Experimental results obtained from International GNSS Service (IGS) final products and our own real-time clocks show that the 3-h prediction accuracy is better than 0.85 ns. The new model can replace IGS ultra-rapid products in real-time PPP applications. A positive correlation is also found between the prediction accuracy and the short-term stability of the on-board clocks. Compared with the traditional linear model, the accuracy of static PPP using the new model's 2-h predicted clocks improves by about 50% in the N, E, and U directions. Furthermore, the static PPP accuracy with 2-h clock products is better than 0.1 m. When an interruption occurs in the real-time stream, the kinematic PPP solution using a 1-h clock prediction product remains better than 0.2 m, without significant accuracy degradation. 
This model is of practical significance because it solves the problems of interruption and delay in data broadcast in real-time clock estimation and can meet the requirements of real-time PPP.
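The core idea of the entry above, a linear trend plus FFT-detected periodic terms, can be sketched on synthetic clock data. The sampling interval, period, and noise level below are invented for illustration and are not the paper's values:

```python
import numpy as np

# Sketch of the approach (not the paper's exact model): fit a linear
# trend, find the dominant periodic term of the residuals by FFT, and
# extrapolate trend + sinusoid for short-term prediction.
rng = np.random.default_rng(1)
dt, n = 30.0, 720                              # 6 h of 30 s epochs
t = np.arange(n) * dt
clock = 2e-9 * t + 0.5e-9 * np.sin(2 * np.pi * t / 5400.0)
clock += 1e-11 * rng.standard_normal(n)        # measurement noise

a, b = np.polyfit(t, clock, 1)                 # 1) linear trend
resid = clock - (a * t + b)

spec = np.abs(np.fft.rfft(resid))              # 2) dominant residual period
freqs = np.fft.rfftfreq(n, d=dt)
f = freqs[np.argmax(spec[1:]) + 1]             # skip the DC bin

A = np.column_stack([np.cos(2 * np.pi * f * t),  # 3) amplitude/phase fit
                     np.sin(2 * np.pi * f * t)])
c, *_ = np.linalg.lstsq(A, resid, rcond=None)

tp = n * dt + np.arange(120) * dt              # 4) predict the next hour
pred = (a * tp + b + c[0] * np.cos(2 * np.pi * f * tp)
        + c[1] * np.sin(2 * np.pi * f * tp))
truth = 2e-9 * tp + 0.5e-9 * np.sin(2 * np.pi * tp / 5400.0)
print(np.max(np.abs(pred - truth)))
```

The paper's sliding-window variant would repeat this fit as new epochs arrive, discarding the oldest data.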

  6. Evaluating and Optimizing Online Advertising: Forget the Click, but There Are Good Proxies.

    PubMed

    Dalessandro, Brian; Hook, Rod; Perlich, Claudia; Provost, Foster

    2015-06-01

    Online systems promise to improve advertisement targeting via the massive and detailed data available. However, there is often too little data on the exact outcome of interest, such as purchases, for accurate campaign evaluation and optimization (due to low conversion rates, cold-start periods, lack of instrumentation of offline purchases, and long purchase cycles). This paper presents a detailed treatment of proxy modeling, which is based on identifying a suitable alternative (proxy) target variable when data on the true objective is in short supply (or even completely nonexistent). The paper makes a two-fold contribution. First, the potential of proxy modeling is demonstrated clearly, based on a massive-scale experiment across 58 real online advertising campaigns. Second, we assess the value of different specific proxies for evaluating and optimizing online display advertising, with striking results. The results include bad news and good news. The most commonly cited and used proxy is a click on an ad. The bad news is that, across a large number of campaigns, clicks are not good proxies for evaluation or for optimization: clickers do not resemble buyers. The good news is that an alternative sort of proxy performs remarkably well: observed visits to the brand's website. Specifically, predictive models built on brand-site visits, which are much more common than purchases, do a remarkably good job of predicting which browsers will make a purchase. The practical bottom line: evaluating and optimizing campaigns using clicks seems wrongheaded; there is an easy and attractive alternative: use a well-chosen site-visit proxy instead.
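The proxy-modeling idea can be illustrated with a toy simulation: train a targeting model on an abundant proxy label, then check how well its scores rank the rare true outcome. Everything below (features, rates, the plain logistic regression) is a hypothetical stand-in for the paper's production models:

```python
import numpy as np

# Toy illustration of proxy modeling (all rates and features invented):
# the proxy (site visits) is common, the true outcome (purchases) rare,
# and both are driven by the same latent interest.
rng = np.random.default_rng(2)
n = 20000
x = rng.standard_normal((n, 3))                    # browser features
latent = x @ np.array([1.0, -0.5, 0.8])            # latent interest

p = 1 / (1 + np.exp(-(latent - 1)))
visit = rng.random(n) < p                          # common proxy label
buy = rng.random(n) < 0.05 * p                     # rare true outcome

w = np.zeros(3)                                    # logistic regression
for _ in range(300):                               # trained on the PROXY
    pred = 1 / (1 + np.exp(-(x @ w)))
    w -= 0.1 * x.T @ (pred - visit) / n

s = x @ w                                          # do proxy-trained scores
pos, neg = s[buy], s[~buy]                         # rank actual buyers well?
auc = (pos[:, None] > neg[None, :]).mean()         # simple AUC
print(round(auc, 2))
```

The proxy-trained model never sees a purchase label, yet it ranks buyers well above non-buyers, which is the paper's point about site visits.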

  7. Design of online monitoring and forecasting system for electrical equipment temperature of prefabricated substation based on WSN

    NASA Astrophysics Data System (ADS)

    Qi, Weiran; Miao, Hongxia; Miao, Xuejiao; Xiao, Xuanxuan; Yan, Kuo

    2016-10-01

    To ensure the safe and stable operation of prefabricated substations, a temperature-sensing subsystem, a remote temperature monitoring and management subsystem, and a forecast subsystem are designed in this paper. The wireless temperature-sensing subsystem, which consists of temperature sensors and an MCU, sends electrical equipment temperatures to the remote monitoring center over a wireless sensor network. The remote monitoring center realizes remote monitoring and prediction through the monitoring and management subsystem and the forecast subsystem. Real-time monitoring of power equipment temperature, historical database queries, user management, password settings, etc., are achieved by the monitoring and management subsystem. In the temperature forecast subsystem, the chaotic character of the temperature data is first verified and the phase space is reconstructed. Support Vector Machine - Particle Swarm Optimization (SVM-PSO) is then used to predict the temperature of the power equipment in prefabricated substations. Simulation results show that, compared with traditional methods, SVM-PSO has higher prediction accuracy.
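The phase-space reconstruction step mentioned above is a time-delay embedding. A minimal sketch, assuming the delay tau and embedding dimension m have already been chosen (e.g. via mutual-information and false-nearest-neighbour tests; the series below is a synthetic stand-in for a temperature record):

```python
import numpy as np

# Time-delay (phase space) reconstruction: turn a scalar series into
# m-dimensional state vectors [x(t), x(t+tau), ..., x(t+(m-1)tau)].
def embed(series, m, tau):
    n = len(series) - (m - 1) * tau
    return np.stack([series[i * tau: i * tau + n] for i in range(m)], axis=1)

temps = np.sin(0.3 * np.arange(100))     # stand-in for a temperature record
X = embed(temps, m=3, tau=5)
print(X.shape)                           # one row per reconstructed state
```

Each row of `X` then serves as a training sample for the SVM regressor whose hyperparameters the PSO stage tunes.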

  8. Modeling a multivariable reactor and on-line model predictive control.

    PubMed

    Yu, D W; Yu, D L

    2005-10-01

    A nonlinear first-principles model is developed for a laboratory-scale multivariable chemical reactor rig in this paper, and on-line model predictive control (MPC) is implemented on the rig. The reactor has three output variables (temperature, pH, and dissolved oxygen) with nonlinear dynamics and is therefore used as a pilot system for the biochemical industry. A nonlinear discrete-time model is derived for each of the three output variables, and the model parameters are estimated from real data using an adaptive optimization method. The developed model is used in a nonlinear MPC scheme. An accurate multistep-ahead prediction is obtained for MPC, with an extended Kalman filter used to estimate the unknown system states. The on-line control is implemented and a satisfactory tracking performance is achieved. The MPC is compared with three decentralized PID controllers, and the advantage of the nonlinear MPC over the PID is clearly shown.
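The receding-horizon principle behind MPC can be sketched on a hypothetical scalar plant (the real rig has three coupled nonlinear outputs and an EKF state estimator, all omitted here; model, gains, and costs are invented):

```python
import numpy as np

# Receding-horizon sketch on a hypothetical scalar plant
# x[k+1] = 0.9 x[k] + 0.5 u[k], a stand-in for one reactor output.
def plant(x, u):
    return 0.9 * x + 0.5 * u

def mpc_move(x, target, horizon=3):
    best_u, best_cost = 0.0, float("inf")
    for u in np.linspace(-1, 1, 41):           # coarse search over a
        xs, cost = x, 0.0                      # constant control move
        for _ in range(horizon):
            xs = plant(xs, u)
            cost += (xs - target) ** 2 + 0.01 * u ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

x, target = 0.0, 1.0
for _ in range(30):                            # apply only the first move,
    x = plant(x, mpc_move(x, target))          # then re-optimize
print(round(x, 2))
```

Re-optimizing at every step is what lets MPC absorb disturbances and modelling errors, which is where it beats fixed-gain PID loops.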

  9. Energy landscapes for a machine-learning prediction of patient discharge

    NASA Astrophysics Data System (ADS)

    Das, Ritankar; Wales, David J.

    2016-06-01

    The energy landscapes framework is applied to a configuration space generated by training the parameters of a neural network. In this study the input data consist of time series for a collection of vital signs monitored for hospital patients, and the outcomes are patient discharge or continued hospitalisation. Using machine learning as a predictive diagnostic tool to identify patterns in large quantities of electronic health record data in real time is a very attractive approach for supporting clinical decisions, with the potential to improve patient outcomes and reduce waiting times for discharge. Here we report some preliminary analysis to show how machine learning might be applied. In particular, we visualize the fitting landscape in terms of locally optimal neural networks and the connections between them in parameter space. We anticipate that these results, and analogues of thermodynamic properties for molecular systems, may help in the future design of improved predictive tools.

  10. Scaling of Perceptual Errors Can Predict the Shape of Neural Tuning Curves

    NASA Astrophysics Data System (ADS)

    Shouval, Harel Z.; Agarwal, Animesh; Gavornik, Jeffrey P.

    2013-04-01

    Weber’s law, first characterized in the 19th century, states that errors estimating the magnitude of perceptual stimuli scale linearly with stimulus intensity. This linear relationship is found in most sensory modalities, generalizes to temporal interval estimation, and even applies to some abstract variables. Despite its generality and long experimental history, the neural basis of Weber’s law remains unknown. This work presents a simple theory explaining the conditions under which Weber’s law can result from neural variability and predicts that the tuning curves of neural populations which adhere to Weber’s law will have a log-power form with parameters that depend on spike-count statistics. The prevalence of Weber’s law suggests that it might be optimal in some sense. We examine this possibility, using variational calculus, and show that Weber’s law is optimal only when observed real-world variables exhibit power-law statistics with a specific exponent. Our theory explains how physiology gives rise to the behaviorally characterized Weber’s law and may represent a general governing principle relating perception to neural activity.

  11. Groundwater Pollution Source Identification using Linked ANN-Optimization Model

    NASA Astrophysics Data System (ADS)

    Ayaz, Md; Srivastava, Rajesh; Jain, Ashu

    2014-05-01

    Groundwater is the principal source of drinking water in several parts of the world, and its contamination has become a serious health and environmental problem. Human activities, including industrial and agricultural activities, are generally responsible for this contamination. Identifying the groundwater pollution source is a major step in groundwater pollution remediation: complete knowledge of the source characteristics is essential for adopting an effective remediation strategy. A groundwater pollution source is said to be identified completely when its characteristics - location, strength and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem, and it becomes more difficult under real field conditions, when the lag time between the first reading at the observation well and the time at which the source became active is not known. We developed a linked ANN-optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. The decision variables of the linked ANN-optimization model are the source location and the release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and is minimized to identify the pollution source parameters. The formulation of the objective function requires the lag time, which is not known, so an ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find it: different combinations of source locations and release periods are used as inputs, and the lag time is obtained as the output. Performance of the proposed model is evaluated for two- and three-dimensional cases with error-free and erroneous data. 
Erroneous data was generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration values. The main advantage of the proposed model is that it requires only upper half of the breakthrough curve and is capable of predicting source parameters when the lag time is not known. Linking of ANN model with proposed optimization model reduces the dimensionality of the decision variables of the optimization model by one and hence complexity of optimization model is reduced. The results show that our proposed linked ANN-Optimization model is able to predict the source parameters for the error-free data accurately. The proposed model was run several times to obtain the mean, standard deviation and interval estimate of the predicted parameters for observations with random measurement errors. It was observed that mean values as predicted by the model were quite close to the exact values. An increasing trend was observed in the standard deviation of the predicted values with increasing level of measurement error. The model appears to be robust and may be efficiently utilized to solve the inverse pollution source identification problem.
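The structure of this inverse problem can be sketched in one dimension with an analytical plume solution. The model, parameters, and brute-force search below are illustrative stand-ins for the paper's ANN-assisted optimization:

```python
import numpy as np

# Toy 1-D sketch of the inverse problem (hypothetical plume model, not
# the paper's): recover an instantaneous source's location x0 and
# release time t0 by minimizing the observed-vs-simulated misfit.
v, D = 1.0, 0.5                                    # velocity, dispersion

def conc(x, t, x0, t0, m=1.0):
    tt = t - t0
    with np.errstate(divide="ignore", invalid="ignore", over="ignore"):
        c = m / np.sqrt(4 * np.pi * D * tt) * np.exp(
            -(x - x0 - v * tt) ** 2 / (4 * D * tt))
    return np.where(tt > 0, c, 0.0)                # no plume before release

t_obs = np.linspace(1, 20, 40)                     # well at x = 10
observed = conc(10.0, t_obs, x0=2.0, t0=3.0)

# brute-force stand-in for the optimization model's search
best = min(((x0, t0) for x0 in np.arange(0, 5.1, 0.5)
            for t0 in np.arange(0, 5.1, 0.5)),
           key=lambda p: np.sum((conc(10.0, t_obs, *p) - observed) ** 2))
print(best)
```

The paper's contribution is to let a trained ANN supply the unknown lag time, so the optimizer searches over one fewer decision variable than this naive grid does.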

  12. The value of surrogate endpoints for predicting real-world survival across five cancer types.

    PubMed

    Shafrin, Jason; Brookmeyer, Ron; Peneva, Desi; Park, Jinhee; Zhang, Jie; Figlin, Robert A; Lakdawalla, Darius N

    2016-01-01

    It is unclear how well different outcome measures in randomized controlled trials (RCTs) perform in predicting real-world cancer survival. We assess the ability of RCT overall survival (OS) and surrogate endpoints - progression-free survival (PFS) and time to progression (TTP) - to predict real-world OS across five cancers. We identified 20 treatments and 31 indications for breast, colorectal, lung, ovarian, and pancreatic cancer that had a phase III RCT reporting median OS and median PFS or TTP. Median real-world OS was determined using a Kaplan-Meier estimator applied to patients in the Surveillance and Epidemiology End Results (SEER)-Medicare database (1991-2010). Performance of RCT OS and PFS/TTP in predicting real-world OS was measured using t-tests, median absolute prediction error, and R(2) from linear regressions. Among 72,600 SEER-Medicare patients similar to RCT participants, median survival was 5.9 months for trial surrogates, 14.1 months for trial OS, and 13.4 months for real-world OS. For this sample, regression models using clinical trial OS and trial surrogates as independent variables predicted real-world OS significantly better than models using surrogates alone (P = 0.026). Among all real-world patients using sample treatments (N = 309,182), however, adding trial OS did not improve predictive power over predictions based on surrogates alone (P = 0.194). Results were qualitatively similar using median absolute prediction error and R(2) metrics. Among the five tumor types investigated, trial OS and surrogates were each independently valuable in predicting real-world OS outcomes for patients similar to trial participants. In broader real-world populations, however, trial OS added little incremental value over surrogates alone.

  13. Characterization of electroencephalography signals for estimating saliency features in videos.

    PubMed

    Liang, Zhen; Hamada, Yasuyuki; Oba, Shigeyuki; Ishii, Shin

    2018-05-12

    Understanding the functions of the visual system has been one of the major targets in neuroscience for many years. However, the relation between spontaneous brain activity and visual saliency in natural stimuli has yet to be elucidated. In this study, we developed an optimized machine learning-based decoding model to explore possible relationships between electroencephalography (EEG) characteristics and visual saliency. The optimal features were extracted from the EEG signals and from a saliency map computed with an unsupervised saliency model (Tavakoli and Laaksonen, 2017). Subsequently, various unsupervised feature selection/extraction techniques were examined using different supervised regression models. The robustness of the presented model was fully verified by means of ten-fold or nested cross-validation procedures, and promising results were achieved in reconstructing saliency features from the selected EEG characteristics. By successfully demonstrating the use of EEG characteristics to predict the real-time saliency distribution in natural videos, we suggest the feasibility of quantifying visual content by measuring brain activity (EEG signals) in real environments, which would facilitate the understanding of cortical involvement in the processing of natural visual stimuli and application development motivated by human visual processing. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Real-time monitoring of process parameters in rice wine fermentation by a portable spectral analytical system combined with multivariate analysis.

    PubMed

    Ouyang, Qin; Zhao, Jiewen; Pan, Wenxiu; Chen, Quansheng

    2016-01-01

    A portable and low-cost spectral analytical system was developed and used to monitor real-time process parameters, i.e. total sugar content (TSC), alcohol content (AC) and pH during rice wine fermentation. Various partial least square (PLS) algorithms were implemented to construct models. The performance of a model was evaluated by the correlation coefficient (Rp) and the root mean square error (RMSEP) in the prediction set. Among the models used, the synergy interval PLS (Si-PLS) was found to be superior. The optimal performance by the Si-PLS model for the TSC was Rp = 0.8694, RMSEP = 0.438; the AC was Rp = 0.8097, RMSEP = 0.617; and the pH was Rp = 0.9039, RMSEP = 0.0805. The stability and reliability of the system, as well as the optimal models, were verified using coefficients of variation, most of which were found to be less than 5%. The results suggest this portable system is a promising tool that could be used as an alternative method for rapid monitoring of process parameters during rice wine fermentation. Copyright © 2015 Elsevier Ltd. All rights reserved.
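The two figures of merit reported above, Rp and RMSEP, are straightforward to compute from a prediction set; the reference and predicted values below are made up for illustration:

```python
import numpy as np

# Rp is the reference-vs-predicted correlation coefficient; RMSEP is the
# root mean square error of prediction (values here illustrative only).
ref = np.array([12.1, 10.4, 8.9, 7.2, 6.0, 5.1])   # e.g. measured TSC
pred = np.array([11.8, 10.9, 8.5, 7.4, 6.3, 4.8])  # model predictions

rp = np.corrcoef(ref, pred)[0, 1]
rmsep = np.sqrt(np.mean((pred - ref) ** 2))
print(round(rp, 3), round(rmsep, 3))
```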

  15. Crosswell electromagnetic modeling from impulsive source: Optimization strategy for dispersion suppression in convolutional perfectly matched layer

    PubMed Central

    Fang, Sinan; Pan, Heping; Du, Ting; Konaté, Ahmed Amara; Deng, Chengxiang; Qin, Zhen; Guo, Bo; Peng, Ling; Ma, Huolin; Li, Gang; Zhou, Feng

    2016-01-01

    This study applied the finite-difference time-domain (FDTD) method to forward modeling of the low-frequency crosswell electromagnetic (EM) method. Specifically, we implemented impulse sources and convolutional perfectly matched layer (CPML). In the process to strengthen CPML, we observed that some dispersion was induced by the real stretch κ, together with an angular variation of the phase velocity of the transverse electric plane wave; the conclusion was that this dispersion was positively related to the real stretch and was little affected by grid interval. To suppress the dispersion in the CPML, we first derived the analytical solution for the radiation field of the magneto-dipole impulse source in the time domain. Then, a numerical simulation of CPML absorption with high-frequency pulses qualitatively amplified the dispersion laws through wave field snapshots. A numerical simulation using low-frequency pulses suggested an optimal parameter strategy for CPML from the established criteria. Based on its physical nature, the CPML method of simply warping space-time was predicted to be a promising approach to achieve ideal absorption, although it was still difficult to entirely remove the dispersion. PMID:27585538

  16. Optimal prediction of the number of unseen species

    PubMed Central

    Orlitsky, Alon; Suresh, Ananda Theertha; Wu, Yihong

    2016-01-01

    Estimating the number of unseen species is an important problem in many scientific endeavors. Its most popular formulation, introduced by Fisher et al. [Fisher RA, Corbet AS, Williams CB (1943) J Animal Ecol 12(1):42−58], uses n samples to predict the number U of hitherto unseen species that would be observed if t⋅n new samples were collected. Of considerable interest is the largest ratio t between the number of new and existing samples for which U can be accurately predicted. In seminal works, Good and Toulmin [Good I, Toulmin G (1956) Biometrika 43(102):45−63] constructed an intriguing estimator that predicts U for all t≤1. Subsequently, Efron and Thisted [Efron B, Thisted R (1976) Biometrika 63(3):435−447] proposed a modification that empirically predicts U even for some t>1, but without provable guarantees. We derive a class of estimators that provably predict U all of the way up to t∝log⁡n. We also show that this range is the best possible and that the estimator’s mean-square error is near optimal for any t. Our approach yields a provable guarantee for the Efron−Thisted estimator and, in addition, a variant with stronger theoretical and experimental performance than existing methodologies on a variety of synthetic and real datasets. The estimators are simple, linear, computationally efficient, and scalable to massive datasets. Their performance guarantees hold uniformly for all distributions, and apply to all four standard sampling models commonly used across various scientific disciplines: multinomial, Poisson, hypergeometric, and Bernoulli product. PMID:27830649
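The Good-Toulmin estimator that this work generalizes is short enough to state in code: with Φ_i the number of species observed exactly i times in the n samples, the number of new species in t⋅n further samples is estimated as -Σ_i (-t)^i Φ_i. A minimal sketch on a toy sample:

```python
from collections import Counter

# Good-Toulmin estimator of the number of unseen species: prevalences
# Phi_i count species seen exactly i times; the estimate alternates
# signs in the "new samples" ratio t.
def good_toulmin(samples, t):
    prevalences = Counter(Counter(samples).values())   # i -> Phi_i
    return -sum((-t) ** i * phi for i, phi in prevalences.items())

samples = ["a", "a", "b", "b", "b", "c", "d", "d", "e"]
print(good_toulmin(samples, 1.0))
```

The alternating signs are exactly why the plain estimator blows up for t > 1; the paper's smoothed estimators damp the high-order terms to reach t ∝ log n.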

  17. Optimal prediction of the number of unseen species.

    PubMed

    Orlitsky, Alon; Suresh, Ananda Theertha; Wu, Yihong

    2016-11-22

    Estimating the number of unseen species is an important problem in many scientific endeavors. Its most popular formulation, introduced by Fisher et al. [Fisher RA, Corbet AS, Williams CB (1943) J Animal Ecol 12(1):42-58], uses n samples to predict the number U of hitherto unseen species that would be observed if t⋅n new samples were collected. Of considerable interest is the largest ratio t between the number of new and existing samples for which U can be accurately predicted. In seminal works, Good and Toulmin [Good I, Toulmin G (1956) Biometrika 43(102):45-63] constructed an intriguing estimator that predicts U for all t≤1. Subsequently, Efron and Thisted [Efron B, Thisted R (1976) Biometrika 63(3):435-447] proposed a modification that empirically predicts U even for some t>1, but without provable guarantees. We derive a class of estimators that provably predict U all of the way up to t∝log n. We also show that this range is the best possible and that the estimator's mean-square error is near optimal for any t. Our approach yields a provable guarantee for the Efron-Thisted estimator and, in addition, a variant with stronger theoretical and experimental performance than existing methodologies on a variety of synthetic and real datasets. The estimators are simple, linear, computationally efficient, and scalable to massive datasets. Their performance guarantees hold uniformly for all distributions, and apply to all four standard sampling models commonly used across various scientific disciplines: multinomial, Poisson, hypergeometric, and Bernoulli product.

  18. Kinect Posture Reconstruction Based on a Local Mixture of Gaussian Process Models.

    PubMed

    Liu, Zhiguang; Zhou, Liuyang; Leung, Howard; Shum, Hubert P H

    2016-11-01

    Depth-sensor-based 3D human motion estimation hardware such as the Kinect has made interactive applications more popular recently. However, it is still challenging to accurately recognize postures from a single depth camera due to the inherently noisy data derived from depth images and self-occluding actions performed by the user. In this paper, we propose a new real-time probabilistic framework to enhance the accuracy of live captured postures that belong to one of the action classes in the database. We adopt the Gaussian Process model as a prior to leverage the position data obtained from Kinect and a marker-based motion capture system. We also incorporate a temporal consistency term into the optimization framework to constrain the velocity variations between successive frames. To ensure that the reconstructed posture resembles the accurate parts of the observed posture, we embed a set of joint reliability measurements into the optimization framework. A major drawback of Gaussian Processes is their cubic learning complexity when dealing with a large database, due to the inverse of a covariance matrix. To solve the problem, we propose a new method based on a local mixture of Gaussian Processes, in which Gaussian Processes are defined in local regions of the state space. Due to the significantly decreased sample size in each local Gaussian Process, the learning time is greatly reduced. At the same time, the prediction speed is enhanced, as the weighted mean prediction for a given sample is determined by the nearby local models only. Our system also allows incrementally updating a specific local Gaussian Process in real time, which enhances the likelihood of adapting to run-time postures that are different from those in the database. Experimental results demonstrate that our system can generate high quality postures even under severe self-occlusion situations, which is beneficial for real-time applications such as motion-based gaming and sport training.
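The cubic learning cost that motivates the local-mixture design comes from solving a linear system in the n×n kernel matrix. A minimal GP-regression sketch (RBF kernel, synthetic 1-D data, nothing like the paper's posture database):

```python
import numpy as np

# Minimal Gaussian Process regression: the solve against the n x n
# kernel matrix is the O(n^3) step that local mixtures shrink by
# fitting many small GPs on regions of the state space.
def rbf(a, b, ell=1.0):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

rng = np.random.default_rng(3)
X = np.linspace(0, 6, 30)
y = np.sin(X) + 0.02 * rng.standard_normal(X.size)

K = rbf(X, X) + 1e-4 * np.eye(X.size)   # kernel matrix + noise jitter
alpha = np.linalg.solve(K, y)           # cubic in the training set size

Xs = np.array([1.5, 4.0])
mean = rbf(Xs, X) @ alpha               # posterior mean at query points
print(np.round(mean, 2))
```

Splitting the 30 points into local regions would replace the single solve with several much smaller ones, which is the speedup the paper exploits.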

  19. Alexander Hegedus Lightning Talk: Integrating Measurements to Optimize Space Weather Strategies

    NASA Astrophysics Data System (ADS)

    Hegedus, A. M.

    2017-12-01

    Alexander Hegedus is a PhD Candidate at the University of Michigan, and won an Outstanding Student Paper Award at the AGU 2016 Fall Meeting for his poster "Simulating 3D Spacecraft Constellations for Low Frequency Radio Imaging." In this short talk, Alex outlines his current research of analyzing data from both real and simulated instruments to answer Heliophysical questions. He then sketches out future plans to simulate science pipelines in a real-time data assimilation model that uses a Bayesian framework to integrate information from different instruments to determine the efficacy of future Space Weather Alert systems. MHD simulations made with Michigan's own Space Weather Model Framework will provide input to simulated instruments, acting as an Observing System Simulation Experiment to verify that a certain set of measurements can accurately predict different classes of Space Weather events.

  20. Real-Time Optimization for use in a Control Allocation System to Recover from Pilot Induced Oscillations

    NASA Technical Reports Server (NTRS)

    Leonard, Michael W.

    2013-01-01

    Integration of the Control Allocation technique to recover from Pilot Induced Oscillations (CAPIO) System into the control system of a Short Takeoff and Landing Mobility Concept Vehicle simulation presents a challenge because the CAPIO formulation requires that constrained optimization problems be solved at the controller operating frequency. We present a solution that utilizes a modified version of the well-known L-BFGS-B solver. Despite the iterative nature of the solver, the method is seen to converge in real time with sufficient reliability to support three weeks of piloted runs at the NASA Ames Vertical Motion Simulator (VMS) facility. The results of the optimization are seen to be excellent in the vast majority of real-time frames. Deficiencies in the quality of the results in some frames are shown to be improvable with simple termination criteria adjustments, though more real-time optimization iterations would be required.

  1. Conditional power and predictive power based on right censored data with supplementary auxiliary information.

    PubMed

    Sun, Libo; Wan, Ying

    2018-04-22

    Conditional power and predictive power provide estimates of the probability of success at the end of the trial based on information from the interim analysis. The observed value of the time-to-event endpoint at the interim analysis can be biased for the true treatment effect due to early censoring, leading to biased estimates of conditional power and predictive power. In such cases, the estimates and inference for this right-censored primary endpoint are enhanced by incorporating a fully observed auxiliary variable. We assume a bivariate normal distribution for the transformed primary variable and a correlated auxiliary variable. Simulation studies are conducted that not only show enhanced conditional power and predictive power but also provide the framework for a more efficient futility interim analysis, in terms of improved estimator accuracy, a smaller inflation in type II error, and an optimal timing for such an analysis. We also illustrate the new approach with a real clinical trial example. Copyright © 2018 John Wiley & Sons, Ltd.

  2. Unexpected but Incidental Positive Outcomes Predict Real-World Gambling.

    PubMed

    Otto, A Ross; Fleming, Stephen M; Glimcher, Paul W

    2016-03-01

    Positive mood can affect a person's tendency to gamble, possibly because positive mood fosters unrealistic optimism. At the same time, unexpected positive outcomes, often called prediction errors, influence mood. However, a linkage between positive prediction errors-the difference between expected and obtained outcomes-and consequent risk taking has yet to be demonstrated. Using a large data set of New York City lottery gambling and a model inspired by computational accounts of reward learning, we found that people gamble more when incidental outcomes in the environment (e.g., local sporting events and sunshine) are better than expected. When local sports teams performed better than expected, or a sunny day followed a streak of cloudy days, residents gambled more. The observed relationship between prediction errors and gambling was ubiquitous across the city's socioeconomically diverse neighborhoods and was specific to sports and weather events occurring locally in New York City. Our results suggest that unexpected but incidental positive outcomes influence risk taking. © The Author(s) 2016.

  3. Quantitative Structure – Property Relationship Modeling of Remote Liposome Loading Of Drugs

    PubMed Central

    Cern, Ahuva; Golbraikh, Alexander; Sedykh, Aleck; Tropsha, Alexander; Barenholz, Yechezkel; Goldblum, Amiram

    2012-01-01

    Remote loading of liposomes by trans-membrane gradients is used to achieve therapeutically efficacious intra-liposome concentrations of drugs. We have developed Quantitative Structure Property Relationship (QSPR) models of remote liposome loading for a dataset including 60 drugs studied in 366 loading experiments internally or elsewhere. Both experimental conditions and computed chemical descriptors were employed as independent variables to predict the initial drug/lipid ratio (D/L) required to achieve high loading efficiency. Both binary (to distinguish high vs. low initial D/L) and continuous (to predict real D/L values) models were generated using advanced machine learning approaches and five-fold external validation. The external prediction accuracy for binary models was as high as 91–96%; for continuous models the mean coefficient R2 for regression between predicted versus observed values was 0.76–0.79. We conclude that QSPR models can be used to identify candidate drugs expected to have high remote loading capacity while simultaneously optimizing the design of formulation experiments. PMID:22154932

  4. Real-time Collision Avoidance and Path Optimizer for Semi-autonomous UAVs.

    NASA Astrophysics Data System (ADS)

    Hawary, A. F.; Razak, N. A.

    2018-05-01

    Whilst a UAV offers a potentially cheaper and more localized observation platform than current satellite or land-based approaches, it requires an advanced path planner to reveal its true potential, particularly in real-time missions. Manual control by a human has limited line-of-sight and is prone to errors due to carelessness and fatigue. A good alternative is to equip the UAV with semi-autonomous capabilities so that it can navigate via a pre-planned route in real-time fashion. In this paper, we propose an easy and practical path optimizer based on the classical Travelling Salesman Problem that adopts a brute-force search method to re-optimize the route in the event of collisions detected by a range-finder sensor. The former utilizes a Simple Genetic Algorithm and the latter uses the Nearest Neighbour algorithm. Both algorithms are combined to optimize the route and avoid collisions at once. Although many researchers have proposed various path-planning algorithms, we find that they are difficult to integrate on a basic UAV model and often lack a real-time collision-detection optimizer. We therefore explore the practical benefit of this approach using on-board Arduino and Ardupilot controllers, manually emulating the motion of an actual UAV model prior to tests at the flying site. The results showed that the range-finder sensor provides real-time data to the algorithm, which finds a collision-free path and successfully re-optimizes the route.
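Both algorithmic ingredients named above are classical. A sketch of the Nearest Neighbour pass and the obstacle-triggered re-planning (the GA tour refinement and the actual sensor interface are omitted; the waypoints are made up):

```python
import math

# Nearest Neighbour tour over 2-D waypoints, re-run on the surviving
# waypoints when the range finder flags one as blocked (illustrative,
# not the authors' implementation).
def nearest_neighbour(points, start=0):
    route, left = [start], set(range(len(points))) - {start}
    while left:
        last = points[route[-1]]
        nxt = min(left, key=lambda i: math.dist(last, points[i]))
        route.append(nxt)
        left.remove(nxt)
    return route

waypoints = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1)]
route = nearest_neighbour(waypoints)

blocked = 4                              # e.g. obstacle at waypoint 4
survivors = [p for i, p in enumerate(waypoints) if i != blocked]
replanned = nearest_neighbour(survivors)
print(route, replanned)
```

Nearest Neighbour is greedy and fast, which suits in-flight re-planning on an Arduino-class controller; the slower GA pass is better suited to the pre-flight route.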

  5. Robust model predictive control for optimal continuous drug administration.

    PubMed

    Sopasakis, Pantelis; Patrinos, Panagiotis; Sarimveis, Haralambos

    2014-10-01

    In this paper the model predictive control (MPC) technology is used for tackling the optimal drug administration problem. The important advantage of MPC compared to other control technologies is that it explicitly takes into account the constraints of the system. In particular, for drug treatments of living organisms, MPC can guarantee satisfaction of the minimum toxic concentration (MTC) constraints. A whole-body physiologically-based pharmacokinetic (PBPK) model serves as the dynamic prediction model of the system after it is formulated as a discrete-time state-space model. Only plasma measurements are assumed to be measured on-line. The rest of the states (drug concentrations in other organs and tissues) are estimated in real time by designing an artificial observer. The complete system (observer and MPC controller) is able to drive the drug concentration to the desired levels at the organs of interest, while satisfying the imposed constraints, even in the presence of modelling errors, disturbances and noise. A case study on a PBPK model with 7 compartments, constraints on 5 tissues and a variable drug concentration set-point illustrates the efficiency of the methodology in drug dosing control applications. The proposed methodology is also tested in an uncertain setting and proves successful in presence of modelling errors and inaccurate measurements. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
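
    The receding-horizon idea with hard concentration constraints can be illustrated with a deliberately simple stand-in (not the paper's PBPK/MPC formulation): a one-compartment model x[k+1] = a·x[k] + b·u[k] driven toward a set-point while never exceeding a maximum concentration, optimized by exhaustive search over a small dose grid. All parameter values are assumptions for illustration.

```python
# Toy receding-horizon dose controller with a hard concentration constraint.
from itertools import product

A, B = 0.9, 0.5          # assumed elimination and dose-gain parameters
SETPOINT, X_MAX = 4.0, 5.0
DOSES = [0.0, 0.5, 1.0]  # admissible dose levels
HORIZON = 3

def mpc_step(x):
    """Return the first dose of the best constraint-respecting dose sequence."""
    best_u, best_cost = 0.0, float("inf")
    for seq in product(DOSES, repeat=HORIZON):
        xi, cost, feasible = x, 0.0, True
        for u in seq:
            xi = A * xi + B * u
            if xi > X_MAX:          # hard maximum-concentration constraint
                feasible = False
                break
            cost += (xi - SETPOINT) ** 2
        if feasible and cost < best_cost:
            best_u, best_cost = seq[0], cost
    return best_u

x = 0.0
for _ in range(30):                 # closed loop: re-optimize every step
    x = A * x + B * mpc_step(x)
print(round(x, 2))                  # settles near the set-point, under X_MAX
```

    Real MPC replaces the brute-force search with a quadratic program, but the receding-horizon structure (optimize over a short horizon, apply only the first input, repeat) is the same.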

  6. Fugitive emission source characterization using a gradient-based optimization scheme and scalar transport adjoint

    NASA Astrophysics Data System (ADS)

    Brereton, Carol A.; Joynes, Ian M.; Campbell, Lucy J.; Johnson, Matthew R.

    2018-05-01

    Fugitive emissions are important sources of greenhouse gases and lost product in the energy sector that can be difficult to detect, but are often easily mitigated once they are known, located, and quantified. In this paper, a scalar transport adjoint-based optimization method is presented to locate and quantify unknown emission sources from downstream measurements. This emission characterization approach correctly predicted locations to within 5 m and magnitudes to within 13% of experimental release data from Project Prairie Grass. The method was further demonstrated on simulated simultaneous releases in a complex 3-D geometry based on an Alberta gas plant. Reconstructions were performed using both the complex 3-D transient wind field used to generate the simulated release data and using a sequential series of steady-state RANS wind simulations (SSWS) representing 30 s intervals of physical time. Both the detailed transient and the simplified wind field series could be used to correctly locate major sources and predict their emission rates within 10%, while predicting total emission rates from all sources within 24%. This SSWS case would be much easier to implement in a real-world application, and gives rise to the possibility of developing pre-computed databases of both wind and scalar transport adjoints to reduce computational time.

  7. Comparison of Ultra-Rapid Orbit Prediction Strategies for GPS, GLONASS, Galileo and BeiDou.

    PubMed

    Geng, Tao; Zhang, Peng; Wang, Wei; Xie, Xin

    2018-02-06

    Currently, ultra-rapid orbits play an important role in the high-speed development of global navigation satellite system (GNSS) real-time applications. This contribution focuses on the impact of the fitting arc length of observed orbits and solar radiation pressure (SRP) on the orbit prediction performance for GPS, GLONASS, Galileo and BeiDou. One full year's precise ephemerides during 2015 were used as fitted observed orbits and then as references to be compared with predicted orbits, together with known earth rotation parameters. The full nine-parameter Empirical Center for Orbit Determination in Europe (CODE) Orbit Model (ECOM) and its reduced version were chosen in our study. The arc lengths of observed fitted orbits that showed the smallest weighted root mean squares (WRMSs) and medians of the orbit differences after a Helmert transformation fell between 40 and 45 h for GPS and GLONASS and between 42 and 48 h for Galileo, while the WRMS values and medians become flat after a 42 h arc length for BeiDou. The stability of the Helmert transformation and SRP parameters also confirmed the similar optimal arc lengths. The range around 42-45 h is suggested to be the optimal arc length interval of the fitted observed orbits for the multi-GNSS joint solution of ultra-rapid orbits.
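
    The weighted root-mean-square (WRMS) statistic used above to compare predicted and observed orbits is, in essence, the following (weights would typically be inverse variances; the values below are illustrative):

```python
# Weighted RMS of residuals: sqrt(sum(w_i * r_i^2) / sum(w_i)).
from math import sqrt

def wrms(residuals, weights):
    num = sum(w * r * r for r, w in zip(residuals, weights))
    return sqrt(num / sum(weights))

print(wrms([1.0, -2.0, 0.5], [1.0, 1.0, 2.0]))
```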

  8. Comparison of Ultra-Rapid Orbit Prediction Strategies for GPS, GLONASS, Galileo and BeiDou

    PubMed Central

    Zhang, Peng; Wang, Wei; Xie, Xin

    2018-01-01

    Currently, ultra-rapid orbits play an important role in the high-speed development of global navigation satellite system (GNSS) real-time applications. This contribution focuses on the impact of the fitting arc length of observed orbits and solar radiation pressure (SRP) on the orbit prediction performance for GPS, GLONASS, Galileo and BeiDou. One full year’s precise ephemerides during 2015 were used as fitted observed orbits and then as references to be compared with predicted orbits, together with known earth rotation parameters. The full nine-parameter Empirical Center for Orbit Determination in Europe (CODE) Orbit Model (ECOM) and its reduced version were chosen in our study. The arc lengths of observed fitted orbits that showed the smallest weighted root mean squares (WRMSs) and medians of the orbit differences after a Helmert transformation fell between 40 and 45 h for GPS and GLONASS and between 42 and 48 h for Galileo, while the WRMS values and medians become flat after a 42 h arc length for BeiDou. The stability of the Helmert transformation and SRP parameters also confirmed the similar optimal arc lengths. The range around 42–45 h is suggested to be the optimal arc length interval of the fitted observed orbits for the multi-GNSS joint solution of ultra-rapid orbits. PMID:29415467

  9. Optimal full motion video registration with rigorous error propagation

    NASA Astrophysics Data System (ADS)

    Dolloff, John; Hottel, Bryant; Doucette, Peter; Theiss, Henry; Jocher, Glenn

    2014-06-01

    Optimal full motion video (FMV) registration is a crucial need for the Geospatial community. It is required for subsequent and optimal geopositioning with simultaneous and reliable accuracy prediction. An overall approach being developed for such registration is presented that models relevant error sources in terms of the expected magnitude and correlation of sensor errors. The corresponding estimator is selected based on the level of accuracy of the a priori information of the sensor's trajectory and attitude (pointing) information, in order to best deal with non-linearity effects. Estimator choices include near real-time Kalman Filters and batch Weighted Least Squares. Registration solves for corrections to the sensor a priori information for each frame. It also computes and makes available a posteriori accuracy information, i.e., the expected magnitude and correlation of sensor registration errors. Both the registered sensor data and its a posteriori accuracy information are then made available to "down-stream" Multi-Image Geopositioning (MIG) processes. An object of interest is then measured on the registered frames and a multi-image optimal solution, including reliable predicted solution accuracy, is then performed for the object's 3D coordinates. This paper also describes a robust approach to registration when a priori information of sensor attitude is unavailable. It makes use of structure-from-motion principles, but does not use standard Computer Vision techniques, such as estimation of the Essential Matrix which can be very sensitive to noise. The approach used instead is a novel, robust, direct search-based technique.

  10. Sequence Optimized Real-Time RT-PCR Assay for Detection of Crimean-Congo Hemorrhagic Fever Virus

    DTIC Science & Technology

    2017-03-21

    19-23]. Real-time reverse-transcription PCR remains the gold standard for quantitative, sensitive, and specific detection of CCHFV; however... five-fold in two different series, and samples were run by real-time RT-PCR in triplicate. The preliminary LOD was the lowest RNA dilution where... Sequence optimized real-time RT-PCR assay for detection of Crimean-Congo hemorrhagic fever virus. JW Koehler, KL Delp, AT Hall, SP

  11. Rethinking key–value store for parallel I/O optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kougkas, Anthony; Eslami, Hassan; Sun, Xian-He

    2015-01-26

    Key-value stores are being widely used as the storage system for large-scale internet services and cloud storage systems. However, they are rarely used in HPC systems, where parallel file systems are the dominant storage solution. In this study, we examine the architecture differences and performance characteristics of parallel file systems and key-value stores. We propose using key-value stores to optimize overall Input/Output (I/O) performance, especially for workloads that parallel file systems cannot handle well, such as the cases with intense data synchronization or heavy metadata operations. We conducted experiments with several synthetic benchmarks, an I/O benchmark, and a real application. We modeled the performance of these two systems using collected data from our experiments, and we provide a predictive method to identify which system offers better I/O performance given a specific workload. The results show that we can optimize the I/O performance in HPC systems by utilizing key-value stores.

  12. Automatic discovery of optimal classes

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Stutz, John; Freeman, Don; Self, Matthew

    1986-01-01

    A criterion, based on Bayes' theorem, is described that defines the optimal set of classes (a classification) for a given set of examples. This criterion is transformed into an equivalent minimum message length criterion with an intuitive information interpretation. This criterion does not require that the number of classes be specified in advance; this is determined by the data. The minimum message length criterion includes the message length required to describe the classes, so there is a built-in bias against adding new classes unless they lead to a reduction in the message length required to describe the data. Unfortunately, the search space of possible classifications is too large to search exhaustively, so heuristic search methods, such as simulated annealing, are applied. Tutored learning and probabilistic prediction in particular cases are an important indirect result of optimal class discovery. Extensions to the basic class induction program include the ability to combine category and real value data, hierarchical classes, independent classifications and deciding for each class which attributes are relevant.
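
    The bias against extra classes can be illustrated with a toy two-part message length: each class costs bits to describe, so a split is accepted only if it shortens the total message. The 10-bit per-class cost and the range-based value code below are assumptions for illustration, not the actual encoding used by the program described above.

```python
# Two-part message length: bits to describe the classes, plus bits to encode
# each datum (class membership + value within the class's range).
from math import log2

CLASS_COST_BITS = 10.0   # assumed cost, in bits, of describing one class

def message_length(data, groups):
    n = len(data)
    bits = CLASS_COST_BITS * len(groups)
    for g in groups:
        span = max(g) - min(g) + 1                       # range-based value code
        bits += len(g) * (log2(span) - log2(len(g) / n)) # value + membership
    return bits

data = [1, 2, 3, 101, 102, 103]
one_class = message_length(data, [data])
two_classes = message_length(data, [data[:3], data[3:]])
print(two_classes < one_class)  # → True: the second class pays for itself
```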

  13. Treatment selection in a randomized clinical trial via covariate-specific treatment effect curves.

    PubMed

    Ma, Yunbei; Zhou, Xiao-Hua

    2017-02-01

    For time-to-event data in a randomized clinical trial, we proposed two new methods for selecting an optimal treatment for a patient based on the covariate-specific treatment effect curve, which is used to represent the clinical utility of a predictive biomarker. To select an optimal treatment for a patient with a specific biomarker value, we proposed pointwise confidence intervals for each covariate-specific treatment effect curve and the difference between covariate-specific treatment effect curves of two treatments. Furthermore, to select an optimal treatment for a future biomarker-defined subpopulation of patients, we proposed confidence bands for each covariate-specific treatment effect curve and the difference between each pair of covariate-specific treatment effect curves over a fixed interval of biomarker values. We constructed the confidence bands based on a resampling technique. We also conducted simulation studies to evaluate finite-sample properties of the proposed estimation methods. Finally, we illustrated the application of the proposed method in a real-world data set.

  14. On-Board Real-Time Optimization Control for Turbo-Fan Engine Life Extending

    NASA Astrophysics Data System (ADS)

    Zheng, Qiangang; Zhang, Haibo; Miao, Lizhen; Sun, Fengyong

    2017-11-01

    A real-time optimization control method is proposed to extend turbo-fan engine service life. This real-time optimization control is based on an on-board engine model devised with MRR-LSSVR (multi-input multi-output recursive reduced least-squares support vector regression). To solve the optimization problem, an FSQP (feasible sequential quadratic programming) algorithm is utilized. Thermal mechanical fatigue is taken into account during the optimization process; to describe the decay of engine life, a thermal mechanical fatigue model of the engine acceleration process is established. The optimization objective function contains not only a term that yields fast engine response but also a term for the total mechanical strain range, which is positively related to engine fatigue life. Finally, simulations of both the conventional optimization control, which considers only engine acceleration performance, and the proposed optimization method have been conducted. The simulations demonstrate that the times of the two control methods from idle to 99.5 % of the maximum power are equal; however, engine life using the proposed optimization method is increased by a notable 36.17 % compared with that using conventional optimization control.

  15. Tsunami Modeling and Prediction Using a Data Assimilation Technique with Kalman Filters

    NASA Astrophysics Data System (ADS)

    Barnier, G.; Dunham, E. M.

    2016-12-01

    Earthquake-induced tsunamis cause dramatic damage along densely populated coastlines. It is difficult to predict and anticipate tsunami waves in advance, but if the earthquake occurs far enough from the coast, there may be enough time to evacuate the zones at risk. Therefore, any real-time information on the tsunami wavefield (as it propagates towards the coast) is extremely valuable for early warning systems. After the 2011 Tohoku earthquake, a dense tsunami-monitoring network (S-net) based on cabled ocean-bottom pressure sensors has been deployed along the Pacific coast in Northeastern Japan. Maeda et al. (GRL, 2015) introduced a data assimilation technique to reconstruct the tsunami wavefield in real time by combining numerical solution of the shallow water wave equations with additional terms penalizing the numerical solution for not matching observations. The penalty or gain matrix is determined through optimal interpolation and is independent of time. Here we explore a related data assimilation approach using the Kalman filter method to evolve the gain matrix. While more computationally expensive, the Kalman filter approach potentially provides more accurate reconstructions. We test our method on a 1D tsunami model derived from the Kozdon and Dunham (EPSL, 2014) dynamic rupture simulations of the 2011 Tohoku earthquake. For appropriate choices of model and data covariance matrices, the method reconstructs the tsunami wavefield prior to wave arrival at the coast. We plan to compare the Kalman filter method to the optimal interpolation method developed by Maeda et al. (GRL, 2015) and then to implement the method for 2D.
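
    The predict/update cycle that evolves the Kalman gain can be shown on a one-variable toy (the tsunami application assimilates a full shallow-water state vector; all numbers below are illustrative):

```python
# Scalar Kalman filter: predict the state, then assimilate an observation z
# with a gain k computed from the predicted covariance.
def kalman_step(x, p, z, q, r, a=1.0, h=1.0):
    """One predict/update cycle; returns updated state, covariance, gain."""
    x_pred = a * x                            # predict state
    p_pred = a * p * a + q                    # predict covariance
    k = p_pred * h / (h * p_pred * h + r)     # Kalman gain
    x_new = x_pred + k * (z - h * x_pred)     # assimilate observation
    p_new = (1 - k * h) * p_pred
    return x_new, p_new, k

x, p = 0.0, 1.0                               # uncertain prior
for z in [1.0, 1.1, 0.9, 1.0]:                # noisy observations near 1.0
    x, p, k = kalman_step(x, p, z, q=0.01, r=0.1)
print(round(x, 3))                            # estimate converges toward 1.0
```

    Unlike optimal interpolation, where the gain is fixed, here `k` shrinks as the covariance `p` contracts, which is the behaviour the record contrasts against the Maeda et al. approach.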

  16. Data-directed RNA secondary structure prediction using probabilistic modeling

    PubMed Central

    Deng, Fei; Ledda, Mirko; Vaziri, Sana; Aviran, Sharon

    2016-01-01

    Structure dictates the function of many RNAs, but secondary RNA structure analysis is either labor intensive and costly or relies on computational predictions that are often inaccurate. These limitations are alleviated by integration of structure probing data into prediction algorithms. However, existing algorithms are optimized for a specific type of probing data. Recently, new chemistries combined with advances in sequencing have facilitated structure probing at unprecedented scale and sensitivity. These novel technologies and anticipated wealth of data highlight a need for algorithms that readily accommodate more complex and diverse input sources. We implemented and investigated a recently outlined probabilistic framework for RNA secondary structure prediction and extended it to accommodate further refinement of structural information. This framework utilizes direct likelihood-based calculations of pseudo-energy terms per considered structural context and can readily accommodate diverse data types and complex data dependencies. We use real data in conjunction with simulations to evaluate performances of several implementations and to show that proper integration of structural contexts can lead to improvements. Our tests also reveal discrepancies between real data and simulations, which we show can be alleviated by refined modeling. We then propose statistical preprocessing approaches to standardize data interpretation and integration into such a generic framework. We further systematically quantify the information content of data subsets, demonstrating that high reactivities are major drivers of SHAPE-directed predictions and that better understanding of less informative reactivities is key to further improvements. Finally, we provide evidence for the adaptive capability of our framework using mock probe simulations. PMID:27251549

  17. Intra-Hour Dispatch and Automatic Generator Control Demonstration with Solar Forecasting - Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coimbra, Carlos F. M.

    2016-02-25

    In this project we address multiple resource integration challenges associated with increasing levels of solar penetration that arise from the variability and uncertainty in solar irradiance. We will model the SMUD service region as its own balancing region, and develop an integrated, real-time operational tool that takes solar-load forecast uncertainties into consideration and commits optimal energy resources and reserves for intra-hour and intra-day decisions. The primary objectives of this effort are to reduce power system operation cost by committing appropriate amount of energy resources and reserves, as well as to provide operators a prediction of the generation fleet’s behavior in real time for realistic PV penetration scenarios. The proposed methodology includes the following steps: clustering analysis on the expected solar variability per region for the SMUD system, Day-ahead (DA) and real-time (RT) load forecasts for the entire service areas, 1-year of intra-hour CPR forecasts for cluster centers, 1-year of smart re-forecasting CPR forecasts in real-time for determination of irreducible errors, and uncertainty quantification for integrated solar-load for both distributed and central stations (selected locations within service region) PV generation.

  18. Self-consistent adjoint analysis for topology optimization of electromagnetic waves

    NASA Astrophysics Data System (ADS)

    Deng, Yongbo; Korvink, Jan G.

    2018-05-01

    In topology optimization of electromagnetic waves, the Gâteaux differentiability of the conjugate operator to the complex field variable complicates the adjoint sensitivity, causing the originally real-valued design variable to become complex during the iterative solution procedure; the adjoint sensitivity is therefore self-inconsistent. To enforce self-consistency, the real-part operator has been used to extract the real part of the sensitivity and keep the design variable real-valued. However, this enforced self-consistency can make the derived structural topology depend unreasonably on the phase of the incident wave. To solve this problem, this article focuses on self-consistent adjoint analysis of topology optimization problems for electromagnetic waves. The analysis is implemented by splitting the complex variables of the wave equations into their real and imaginary parts, substituting the split variables into the wave equations, and deriving coupled equations equivalent to the original wave equations, where the infinite free space is truncated by perfectly matched layers. The topology optimization problems of electromagnetic waves are thereby transformed into forms defined on real instead of complex functional spaces; the adjoint analysis is implemented on real functional spaces, removing the variation of the conjugate operator; the self-consistent adjoint sensitivity is derived; and the phase-dependence problem of the derived structural topology is avoided. Several numerical examples are implemented to demonstrate the robustness of the derived self-consistent adjoint analysis.

  19. Near-Optimal Tracking Control of Mobile Robots Via Receding-Horizon Dual Heuristic Programming.

    PubMed

    Lian, Chuanqiang; Xu, Xin; Chen, Hong; He, Haibo

    2016-11-01

    Trajectory tracking control of wheeled mobile robots (WMRs) has been an important research topic in control theory and robotics. Although various tracking control methods with stability have been developed for WMRs, it is still difficult to design optimal or near-optimal tracking controller under uncertainties and disturbances. In this paper, a near-optimal tracking control method is presented for WMRs based on receding-horizon dual heuristic programming (RHDHP). In the proposed method, a backstepping kinematic controller is designed to generate desired velocity profiles and the receding horizon strategy is used to decompose the infinite-horizon optimal control problem into a series of finite-horizon optimal control problems. In each horizon, a closed-loop tracking control policy is successively updated using a class of approximate dynamic programming algorithms called finite-horizon dual heuristic programming (DHP). The convergence property of the proposed method is analyzed and it is shown that the tracking control system based on RHDHP is asymptotically stable by using the Lyapunov approach. Simulation results on three tracking control problems demonstrate that the proposed method has improved control performance when compared with conventional model predictive control (MPC) and DHP. It is also illustrated that the proposed method has lower computational burden than conventional MPC, which is very beneficial for real-time tracking control.

  20. Recent Advances in Stellarator Optimization

    NASA Astrophysics Data System (ADS)

    Gates, David; Brown, T.; Breslau, J.; Landreman, M.; Lazerson, S. A.; Mynick, H.; Neilson, G. H.; Pomphrey, N.

    2016-10-01

    Computational optimization has revolutionized the field of stellarator design. To date, optimizations have focused primarily on neoclassical confinement and ideal MHD stability, although limited optimization of other parameters has also been performed. One criticism that has been levelled at this method of design is the complexity of the resultant field coils. Recently, a new coil optimization code, COILOPT++, was written and included in the STELLOPT suite of codes. The advantage of this method is that it allows the addition of real-space constraints on the locations of the coils. As an initial exercise, a constraint that the windings be vertical was placed on the large-major-radius half of the non-planar coils. Further constraints were also imposed that guaranteed that sector blanket modules could be removed from between the coils, enabling a sector maintenance scheme. Results of this exercise will be presented. We have also explored possibilities for generating an experimental database that could check whether the reduction in turbulent transport that is predicted by GENE as a function of local shear is consistent with experiments. To this end, a series of equilibria that can be made in the now-latent QUASAR experiment have been identified. This work was supported by U.S. DoE Contract #DE-AC02-09CH11466.

  1. Obtaining Reliable Predictions of Terrestrial Energy Coupling From Real-Time Solar Wind Measurement

    NASA Technical Reports Server (NTRS)

    Weimer, Daniel R.

    2001-01-01

    The first draft of a manuscript titled "Variable time delays in the propagation of the interplanetary magnetic field" has been completed, for submission to the Journal of Geophysical Research. In the preparation of this manuscript all data and analysis programs had been updated to the highest temporal resolution possible, at 16 seconds or better. The program which computes the "measured" IMF propagation time delays from these data has also undergone another improvement. In another significant development, a technique has been developed in order to predict IMF phase plane orientations, and the resulting time delays, using only measurements from a single satellite at L1. The "minimum variance" method is used for this computation. Further work will be done on optimizing the choice of several parameters for the minimum variance calculation.
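
    The minimum-variance technique mentioned above can be sketched concisely: the IMF phase-plane normal is taken as the eigenvector of the magnetic-field covariance matrix with the smallest eigenvalue. The synthetic data below are illustrative, not spacecraft measurements.

```python
# Minimum-variance analysis: the direction of least field variance is the
# eigenvector of the covariance matrix with the smallest eigenvalue.
import numpy as np

def minimum_variance_normal(b):
    """b: (N, 3) array of field samples; returns the unit minimum-variance direction."""
    m = np.cov(b, rowvar=False)           # 3x3 covariance of the components
    eigvals, eigvecs = np.linalg.eigh(m)  # eigenvalues in ascending order
    return eigvecs[:, 0]                  # smallest-variance eigenvector

# Synthetic field varying only in the x-y plane, so the normal is along z.
rng = np.random.default_rng(0)
b = np.column_stack([rng.normal(size=200), rng.normal(size=200), np.zeros(200)])
n = minimum_variance_normal(b)
print(np.round(np.abs(n), 3))  # → [0. 0. 1.]
```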

  2. Multistage classification of multispectral Earth observational data: The design approach

    NASA Technical Reports Server (NTRS)

    Bauer, M. E. (Principal Investigator); Muasher, M. J.; Landgrebe, D. A.

    1981-01-01

    An algorithm is proposed which predicts the optimal features at every node in a binary tree procedure. The algorithm estimates the probability of error by approximating the area under the likelihood ratio function for two classes and taking into account the number of training samples used in estimating each of these two classes. Some results on feature selection techniques, particularly in the presence of a very limited set of training samples, are presented. Results comparing probabilities of error predicted by the proposed algorithm as a function of dimensionality as compared to experimental observations are shown for aircraft and LANDSAT data. Results are obtained for both real and simulated data. Finally, two binary tree examples which use the algorithm are presented to illustrate the usefulness of the procedure.

  3. In-use activity, fuel use, and emissions of heavy-duty diesel roll-off refuse trucks.

    PubMed

    Sandhu, Gurdas S; Frey, H Christopher; Bartelt-Hunt, Shannon; Jones, Elizabeth

    2015-03-01

    The objectives of this study were to quantify real-world activity, fuel use, and emissions for heavy duty diesel roll-off refuse trucks; evaluate the contribution of duty cycles and emissions controls to variability in cycle average fuel use and emission rates; quantify the effect of vehicle weight on fuel use and emission rates; and compare empirical cycle average emission rates with the U.S. Environmental Protection Agency's MOVES emission factor model predictions. Measurements were made at 1 Hz on six trucks of model years 2005 to 2012, using onboard systems. The trucks traveled 870 miles, had an average speed of 16 mph, and collected 165 tons of trash. The average fuel economy was 4.4 mpg, which is approximately twice previously reported values for residential trash collection trucks. On average, 50% of time is spent idling and about 58% of emissions occur in urban areas. Newer trucks with selective catalytic reduction and diesel particulate filter had NOx and PM cycle average emission rates that were 80% lower and 95% lower, respectively, compared to older trucks without. On average, the combined can and trash weight was about 55% of chassis weight. The marginal effect of vehicle weight on fuel use and emissions is highest at low loads and decreases as load increases. Among 36 cycle average rates (6 trucks×6 cycles), MOVES-predicted values and estimates based on real-world data have similar relative trends. MOVES-predicted CO2 emissions are similar to those of the real world, while NOx and PM emissions are, on average, 43% lower and 300% higher, respectively. The real-world data presented here can be used to estimate benefits of replacing old trucks with new trucks. Further, the data can be used to improve emission inventories and model predictions. 
In-use measurements of the real-world activity, fuel use, and emissions of heavy-duty diesel roll-off refuse trucks can be used to improve the accuracy of predictive models, such as MOVES, and emissions inventories. Further, the activity data from this study can be used to generate more representative duty cycles for more accurate chassis dynamometer testing. Comparisons of old and new model year diesel trucks are useful in analyzing the effect of fleet turnover. The analysis of effect of haul weight on fuel use can be used by fleet managers to optimize operations to reduce fuel cost.

  4. Optimizing the real-time ground level enhancement alert system based on neutron monitor measurements: Introducing GLE Alert Plus

    NASA Astrophysics Data System (ADS)

    Souvatzoglou, G.; Papaioannou, A.; Mavromichalaki, H.; Dimitroulakos, J.; Sarlanis, C.

    2014-11-01

    Whenever a significant intensity increase is recorded by at least three neutron monitor stations in real-time mode, a ground level enhancement (GLE) event is marked and an automated alert is issued. Although the physical concept of the algorithm is solid and it has worked efficiently in a number of cases, the availability of real-time data is still an open issue and makes timely GLE alerts quite challenging. In this work we present the optimization of the GLE alert that has been in operation since 2006 at the Athens Neutron Monitor Station. This upgrade has led to GLE Alert Plus, which is currently based upon the Neutron Monitor Database (NMDB). We have determined the critical values per station that allow us to issue reliable GLE alerts close to the initiation of the event while keeping the false alert rate low. Furthermore, we have managed to treat the problem of data availability by introducing the Go-Back-N algorithm. A total of 13 GLE events were marked from January 2000 to December 2012, and GLE Alert Plus issued an alert for 12 of them. These alert times are compared to the alert times of the GOES Space Weather Prediction Center and the Solar Energetic Particle forecaster of the University of Málaga (UMASEP). In all cases GLE Alert Plus precedes the GOES alert by ≈8-52 min. The comparison with UMASEP demonstrated a remarkably good agreement. Real-time GLE alerts by GLE Alert Plus may be retrieved from http://cosray.phys.uoa.gr/gle_alert_plus.html, http://www.nmdb.eu, and http://swe.ssa.esa.int/web/guest/space-radiation. An automated GLE alert email notification system is also available to interested users.
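
    The triggering rule described above (at least three stations recording a significant increase) reduces to a simple check. The station codes and the 5% threshold below are illustrative assumptions, not the operational values of GLE Alert Plus.

```python
# Toy GLE alert rule: fire when >= min_stations show a significant
# count-rate increase over their baselines.
THRESHOLD = 0.05  # assumed fractional increase counted as "significant"

def gle_alert(rates, baselines, min_stations=3):
    elevated = sum(1 for s in rates
                   if (rates[s] - baselines[s]) / baselines[s] >= THRESHOLD)
    return elevated >= min_stations

baselines = {"ATHN": 100.0, "OULU": 200.0, "SOPO": 150.0, "THUL": 120.0}
rates = {"ATHN": 107.0, "OULU": 214.0, "SOPO": 159.0, "THUL": 121.0}
print(gle_alert(rates, baselines))  # three stations elevated → True
```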

  5. How gamma radiation processing systems are benefiting from the latest advances in information technology

    NASA Astrophysics Data System (ADS)

    Gibson, Wayne H.; Levesque, Daniel

    2000-03-01

    This paper discusses how gamma irradiation plants are putting the latest advances in computer and information technology to use for better process control, cost savings, and strategic advantages. Some irradiator operations are gaining significant benefits by integrating computer technology and robotics with real-time information processing, multi-user databases, and communication networks. The paper reports on several irradiation facilities that are making good use of client/server LANs, user-friendly graphics interfaces, supervisory control and data acquisition (SCADA) systems, distributed I/O with real-time sensor devices, trending analysis, real-time product tracking, dynamic product scheduling, and automated dosimetry reading. These plants are lowering costs by fast and reliable reconciliation of dosimetry data, easier validation to GMP requirements, optimized production flow, and faster release of sterilized products to market. There is a trend in the manufacturing sector towards total automation using "predictive process control". Real-time verification of process parameters "on-the-run" allows control parameters to be adjusted appropriately before the process strays out of limits. When this technology is applied to the gamma radiation process, control will be based on monitoring key parameters such as time and making adjustments during the process to optimize quality and throughput. Dosimetry results will be used as a quality control measurement rather than as a final monitor for the release of the product. Results are correlated with the irradiation process data to quickly and confidently reconcile variations. Ultimately, a parametric process control system utilizing responsive control, feedback and verification will not only increase productivity and process efficiency, but can also result in operating within tighter dose control set points.

  6. Optimizing the Detection of Wakeful and Sleep-Like States for Future Electrocorticographic Brain Computer Interface Applications.

    PubMed

    Pahwa, Mrinal; Kusner, Matthew; Hacker, Carl D; Bundy, David T; Weinberger, Kilian Q; Leuthardt, Eric C

    2015-01-01

Previous studies suggest stable and robust control of a brain-computer interface (BCI) can be achieved using electrocorticography (ECoG). Translation of this technology from the laboratory to the real world requires additional methods that allow users to operate their ECoG-based BCI autonomously. In such an environment, users must be able to perform all tasks currently performed by the experimenter, including manually switching the BCI system on/off. Although simple, this task can be challenging for target users (e.g., individuals with tetraplegia) due to severe motor disability. In this study, we present an automated and practical strategy to switch a BCI system on or off based on the cognitive state of the user. Using logistic regression, we built probabilistic models that utilized subdural ECoG signals from humans to estimate in pseudo real-time whether a person is awake or in a sleep-like state, and subsequently, whether to turn a BCI system on or off. Furthermore, we constrained these models to identify the optimal anatomical and spectral parameters for delineating states. Other methods exist to differentiate wake and sleep states using ECoG, but none account for practical requirements of BCI application, such as minimizing the size of an ECoG implant and predicting states in real time. Our results demonstrate that, across 4 individuals, wakeful and sleep-like states can be classified with over 80% accuracy (up to 92%) in pseudo real-time using high gamma (70-110 Hz) band-limited power from only 5 electrodes (platinum discs with a diameter of 2.3 mm) located above the precentral and posterior superior temporal gyrus.
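The core of the strategy above, a logistic-regression classifier over band-limited power features, can be sketched in plain Python. The electrode count matches the record (5), but the band-power values, training scheme, and helper names below are illustrative assumptions, not the paper's data or pipeline:

```python
import math
import random

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit a logistic regression classifier by batch gradient descent."""
    n_feat = len(X[0])
    w, b = [0.0] * n_feat, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * n_feat, 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            err = 1.0 / (1.0 + math.exp(-z)) - yi  # predicted prob minus label
            gw = [gj + err * xj for gj, xj in zip(gw, xi)]
            gb += err
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, gw)]
        b -= lr * gb / len(X)
    return w, b

def predict_state(w, b, x, threshold=0.5):
    """Map predicted wake probability to an on/off decision."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return "awake" if 1.0 / (1.0 + math.exp(-z)) >= threshold else "sleep"

# Toy features: one normalized high-gamma band-power value per electrode
# (5 electrodes, as in the record above; the numbers are invented).
random.seed(0)
X = ([[random.gauss(0.8, 0.1) for _ in range(5)] for _ in range(20)] +
     [[random.gauss(0.2, 0.1) for _ in range(5)] for _ in range(20)])
y = [1] * 20 + [0] * 20
w, b = train_logistic(X, y)
print(predict_state(w, b, [0.85] * 5))  # "awake"
```

Thresholding the predicted probability is what turns the probabilistic model into the on/off switch the study describes.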

  7. Developing a methodology for real-time trading of water withdrawal and waste load discharge permits in rivers.

    PubMed

    Soltani, Maryam; Kerachian, Reza

    2018-04-15

In this paper, a new methodology is proposed for the real-time trading of water withdrawal and waste load discharge permits in agricultural areas along rivers. Total Dissolved Solids (TDS) is chosen as an indicator of river water quality, and the TDS load that agricultural water users discharge to the river is controlled by storing part of the return flows in evaporation ponds. Available surface water withdrawal and waste load discharge permits are determined using a non-linear multi-objective optimization model. Total available permits are then fairly reallocated among agricultural water users, proportional to their arable lands. Water users can trade their water withdrawal and waste load discharge permits simultaneously, in a bilateral, step-by-step framework that takes advantage of differences in their water use efficiencies and agricultural return flow rates. A trade at any time step results in either more benefit or less diverted return flow. The Nucleolus solution concept from cooperative game theory is used to redistribute the benefits generated through trades in different time steps. The proposed methodology is applied to the PayePol region in the Karkheh River catchment, southwest Iran. Predicting that 1922.7 Million Cubic Meters (MCM) of annual flow is available to agricultural lands at the beginning of the cultivation year, the real-time optimization model estimates the total annual benefit to reach 46.07 million US Dollars (USD), which requires 6.31 MCM of return flow to be diverted to the evaporation ponds. Fair reallocation of the permits changes these values to 35.38 million USD and 13.69 MCM, respectively. Results illustrate the effectiveness of the proposed methodology in real-time water and waste load allocation and simultaneous trading of permits. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Predicting subcontractor performance using web-based Evolutionary Fuzzy Neural Networks.

    PubMed

    Ko, Chien-Ho

    2013-01-01

Subcontractor performance directly affects project success. The use of inappropriate subcontractors may result in individual work delays, cost overruns, and quality defects throughout the project. This study develops web-based Evolutionary Fuzzy Neural Networks (EFNNs) to predict subcontractor performance. EFNNs are a fusion of Genetic Algorithms (GAs), Fuzzy Logic (FL), and Neural Networks (NNs). FL is primarily used to mimic high-level decision-making processes and to deal with uncertainty in the construction industry. NNs are used to identify the association between previous performance and future status when predicting subcontractor performance. GAs optimize the parameters required by the FL and NN components. EFNNs encode FL and NNs using floating-point numbers to shorten the length of a string. A multi-cut-point crossover operator is used to explore the parameter space while retaining solution legality. Finally, the applicability of the proposed EFNNs is validated using real subcontractors. The EFNNs are evolved using 22 historical patterns and tested using 12 unseen cases. Application results show that the proposed EFNNs surpass FL and NNs in predicting subcontractor performance. The proposed approach improves prediction accuracy and reduces the effort required to predict subcontractor performance, providing field operators with web-based remote access to a reliable, scientific prediction mechanism.
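The multi-cut-point crossover mentioned above can be illustrated in a few lines. The chromosome contents, cut count, and helper name `multi_cut_crossover` are hypothetical; the sketch only shows how alternating segments between two floating-point parents keeps chromosome length fixed and every gene in its slot, which is one way an offspring stays a legal solution:

```python
import random

def multi_cut_crossover(parent_a, parent_b, n_cuts=3, rng=random):
    """Multi-cut-point crossover on floating-point chromosomes: choose
    n_cuts random cut points and alternate segments between the parents."""
    assert len(parent_a) == len(parent_b)
    cuts = sorted(rng.sample(range(1, len(parent_a)), n_cuts))
    child, take_a, prev = [], True, 0
    for cut in cuts + [len(parent_a)]:
        src = parent_a if take_a else parent_b  # alternate source parent
        child.extend(src[prev:cut])
        take_a, prev = not take_a, cut
    return child

random.seed(2)
a = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]  # hypothetical encoded FL/NN parameters
b = [9.1, 9.2, 9.3, 9.4, 9.5, 9.6]
child = multi_cut_crossover(a, b)
print(child)
```

Because each gene in the child comes from the same position in one of the parents, the offspring is always a well-formed parameter string of the same length.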

  9. A Novel RSSI Prediction Using Imperialist Competition Algorithm (ICA), Radial Basis Function (RBF) and Firefly Algorithm (FFA) in Wireless Networks

    PubMed Central

    Goudarzi, Shidrokh; Haslina Hassan, Wan; Abdalla Hashim, Aisha-Hassan; Soleymani, Seyed Ahmad; Anisi, Mohammad Hossein; Zakaria, Omar M.

    2016-01-01

    This study aims to design a vertical handover prediction method to minimize unnecessary handovers for a mobile node (MN) during the vertical handover process. This relies on a novel method for the prediction of a received signal strength indicator (RSSI) referred to as IRBF-FFA, which is designed by utilizing the imperialist competition algorithm (ICA) to train the radial basis function (RBF), and by hybridizing with the firefly algorithm (FFA) to predict the optimal solution. The prediction accuracy of the proposed IRBF–FFA model was validated by comparing it to support vector machines (SVMs) and multilayer perceptron (MLP) models. In order to assess the model’s performance, we measured the coefficient of determination (R2), correlation coefficient (r), root mean square error (RMSE) and mean absolute percentage error (MAPE). The achieved results indicate that the IRBF–FFA model provides more precise predictions compared to different ANNs, namely, support vector machines (SVMs) and multilayer perceptron (MLP). The performance of the proposed model is analyzed through simulated and real-time RSSI measurements. The results also suggest that the IRBF–FFA model can be applied as an efficient technique for the accurate prediction of vertical handover. PMID:27438600

  10. A Novel RSSI Prediction Using Imperialist Competition Algorithm (ICA), Radial Basis Function (RBF) and Firefly Algorithm (FFA) in Wireless Networks.

    PubMed

    Goudarzi, Shidrokh; Haslina Hassan, Wan; Abdalla Hashim, Aisha-Hassan; Soleymani, Seyed Ahmad; Anisi, Mohammad Hossein; Zakaria, Omar M

    2016-01-01

    This study aims to design a vertical handover prediction method to minimize unnecessary handovers for a mobile node (MN) during the vertical handover process. This relies on a novel method for the prediction of a received signal strength indicator (RSSI) referred to as IRBF-FFA, which is designed by utilizing the imperialist competition algorithm (ICA) to train the radial basis function (RBF), and by hybridizing with the firefly algorithm (FFA) to predict the optimal solution. The prediction accuracy of the proposed IRBF-FFA model was validated by comparing it to support vector machines (SVMs) and multilayer perceptron (MLP) models. In order to assess the model's performance, we measured the coefficient of determination (R2), correlation coefficient (r), root mean square error (RMSE) and mean absolute percentage error (MAPE). The achieved results indicate that the IRBF-FFA model provides more precise predictions compared to different ANNs, namely, support vector machines (SVMs) and multilayer perceptron (MLP). The performance of the proposed model is analyzed through simulated and real-time RSSI measurements. The results also suggest that the IRBF-FFA model can be applied as an efficient technique for the accurate prediction of vertical handover.
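To make the RBF model form in the two records above concrete, the sketch below evaluates and trains a one-dimensional Gaussian RBF network on a toy RSSI trace. It uses plain stochastic gradient descent in place of the paper's ICA/FFA training, and the trace, centers, and width are invented for illustration:

```python
import math

def gaussian(x, c, sigma):
    """Gaussian radial basis function centred at c."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def train_rbf(ts, ys, centers, sigma, lr=0.1, epochs=2000):
    """Fit the output weights of a Gaussian RBF network by stochastic
    gradient descent -- a plain least-squares stand-in for the paper's
    ICA/FFA training, shown only to illustrate the model form."""
    w = [0.0] * len(centers)
    for _ in range(epochs):
        for t, y in zip(ts, ys):
            phi = [gaussian(t, c, sigma) for c in centers]
            err = sum(wi * pi for wi, pi in zip(w, phi)) - y
            w = [wi - lr * err * pi for wi, pi in zip(w, phi)]
    return w

# Toy RSSI trace (dBm) decaying over time; values and centers are invented.
ts = list(range(8))
ys = [-40, -42, -45, -49, -54, -60, -67, -75]
centers = [0, 2, 4, 6, 8]
w = train_rbf(ts, ys, centers, sigma=1.5)
pred = sum(wi * gaussian(7, c, 1.5) for wi, c in zip(w, centers))
print(round(pred, 1))
```

A handover controller would compare such predicted RSSI values across candidate networks; the metaheuristic training in the paper replaces the gradient step shown here.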

  11. Predicting Subcontractor Performance Using Web-Based Evolutionary Fuzzy Neural Networks

    PubMed Central

    2013-01-01

Subcontractor performance directly affects project success. The use of inappropriate subcontractors may result in individual work delays, cost overruns, and quality defects throughout the project. This study develops web-based Evolutionary Fuzzy Neural Networks (EFNNs) to predict subcontractor performance. EFNNs are a fusion of Genetic Algorithms (GAs), Fuzzy Logic (FL), and Neural Networks (NNs). FL is primarily used to mimic high-level decision-making processes and to deal with uncertainty in the construction industry. NNs are used to identify the association between previous performance and future status when predicting subcontractor performance. GAs optimize the parameters required by the FL and NN components. EFNNs encode FL and NNs using floating-point numbers to shorten the length of a string. A multi-cut-point crossover operator is used to explore the parameter space while retaining solution legality. Finally, the applicability of the proposed EFNNs is validated using real subcontractors. The EFNNs are evolved using 22 historical patterns and tested using 12 unseen cases. Application results show that the proposed EFNNs surpass FL and NNs in predicting subcontractor performance. The proposed approach improves prediction accuracy and reduces the effort required to predict subcontractor performance, providing field operators with web-based remote access to a reliable, scientific prediction mechanism. PMID:23864830

  12. Systems and methods for energy cost optimization in a building system

    DOEpatents

    Turney, Robert D.; Wenzel, Michael J.

    2016-09-06

    Methods and systems to minimize energy cost in response to time-varying energy prices are presented for a variety of different pricing scenarios. A cascaded model predictive control system is disclosed comprising an inner controller and an outer controller. The inner controller controls power use using a derivative of a temperature setpoint and the outer controller controls temperature via a power setpoint or power deferral. An optimization procedure is used to minimize a cost function within a time horizon subject to temperature constraints, equality constraints, and demand charge constraints. Equality constraints are formulated using system model information and system state information whereas demand charge constraints are formulated using system state information and pricing information. A masking procedure is used to invalidate demand charge constraints for inactive pricing periods including peak, partial-peak, off-peak, critical-peak, and real-time.

  13. Multidimensional density shaping by sigmoids.

    PubMed

    Roth, Z; Baram, Y

    1996-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.

  14. Mathematical modeling of synthetic unit hydrograph case study: Citarum watershed

    NASA Astrophysics Data System (ADS)

    Islahuddin, Muhammad; Sukrainingtyas, Adiska L. A.; Kusuma, M. Syahril B.; Soewono, Edy

    2015-09-01

Deriving a unit hydrograph is very important in analyzing a watershed's hydrologic response to a rainfall event. Hourly streamflow measurements needed to derive a unit hydrograph are often unavailable, so methods are needed for deriving unit hydrographs for ungauged watersheds. The methods that have evolved are based on theoretical or empirical formulas relating hydrograph peak discharge and timing to watershed characteristics; these are usually referred to as Synthetic Unit Hydrographs. In this paper, a gamma probability density function and its variant are used as mathematical approximations of a unit hydrograph for the Citarum Watershed. The model is adjusted to real field conditions by translation and scaling. Optimal parameters are determined using the Particle Swarm Optimization method with a weighted objective function. With these models, a synthetic unit hydrograph can be developed and hydrologic parameters can be well predicted.
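A gamma probability density function can serve directly as a synthetic unit hydrograph shape. The sketch below evaluates such a curve and checks its peak against the analytic mode; the parameter values and the translation/scaling convention are illustrative assumptions, and the PSO fitting step from the record is not shown:

```python
import math

def gamma_unit_hydrograph(t, n, k, t0=0.0, scale=1.0):
    """Gamma-PDF-shaped unit hydrograph ordinate at time t.
    n: shape, k: time scale; t0 (translation) and scale mimic the paper's
    adjustment of the curve to field conditions."""
    tau = t - t0
    if tau <= 0:
        return 0.0
    return scale * (tau / k) ** (n - 1) * math.exp(-tau / k) / (k * math.gamma(n))

# The analytic mode (peak time) of the gamma shape is t0 + (n - 1) * k.
n, k = 3.0, 2.0
peak_t = max(range(25), key=lambda t: gamma_unit_hydrograph(t, n, k))
print(peak_t)  # 4, matching (n - 1) * k
```

In a real calibration, `n`, `k`, `t0`, and `scale` would be the decision variables an optimizer such as PSO tunes against observed hydrographs.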

  15. Decision-support models for empiric antibiotic selection in Gram-negative bloodstream infections.

    PubMed

    MacFadden, D R; Coburn, B; Shah, N; Robicsek, A; Savage, R; Elligsen, M; Daneman, N

    2018-04-25

Early empiric antibiotic therapy in patients can improve clinical outcomes in Gram-negative bacteraemia. However, the widespread prevalence of antibiotic-resistant pathogens compromises our ability to provide adequate therapy while minimizing use of broad-spectrum antibiotics. We sought to determine whether readily available electronic medical record data could be used to develop predictive models for decision support in Gram-negative bacteraemia. We performed a multi-centre cohort study, in Canada and the USA, of hospitalized patients with Gram-negative bloodstream infection from April 2010 to March 2015. We analysed multivariable models for prediction of antibiotic susceptibility at two empiric windows: Gram-stain-guided and pathogen-guided treatment. Decision-support models for empiric antibiotic selection were developed based on three clinical decision thresholds of acceptable adequate coverage (80%, 90% and 95%). A total of 1832 patients with Gram-negative bacteraemia were evaluated. Multivariable models showed good discrimination across countries at both the Gram-stain-guided (12 models, areas under the curve (AUCs) 0.68-0.89, optimism-corrected AUCs 0.63-0.85) and pathogen-guided (12 models, AUCs 0.75-0.98, optimism-corrected AUCs 0.64-0.95) windows. Compared to antibiogram-guided therapy, decision-support models of antibiotic selection incorporating individual patient characteristics and prior culture results have the potential to increase use of narrower-spectrum antibiotics (in up to 78% of patients) while reducing inadequate therapy. Multivariable models using readily available epidemiologic factors can thus predict antimicrobial susceptibility of infecting pathogens with reasonable discriminatory ability. Implementation of sequential predictive models for real-time individualized empiric antibiotic decision-making has the potential to optimize adequate coverage while minimizing overuse of broad-spectrum antibiotics, and therefore warrants further prospective evaluation. Readily available epidemiologic risk factors can be used to predict susceptibility of Gram-negative organisms among patients with bacteraemia using automated decision-making models. Copyright © 2018 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.

  16. Active optimal control strategies for increasing the efficiency of photovoltaic cells

    NASA Astrophysics Data System (ADS)

    Aljoaba, Sharif Zidan Ahmad

Energy consumption has increased drastically during the last century. Currently, worldwide energy consumption is about 17.4 TW and is predicted to reach 25 TW by 2035. Solar energy has emerged as one of the most promising renewable energy sources. Since its first physical recognition by Adams and Day in 1887, research in solar energy has developed continuously, leading to many achievements and milestones that have established it as one of the most reliable and sustainable energy sources. Recently, the International Energy Agency declared that solar energy is predicted to be one of the major electricity production sources by 2035. Enhancing the efficiency and lifecycle of photovoltaic (PV) modules leads to significant cost reduction. Reducing the temperature of a PV module improves its efficiency and extends its lifecycle. To better understand PV module performance, it is important to study the interaction between the output power and the temperature. A model capable of predicting the PV module temperature and its effects on the output power, considering the individual contribution of the solar spectrum wavelengths, significantly advances PV module designs toward higher efficiency. In this work, a thermoelectrical model is developed to predict the effects of the solar spectrum wavelengths on PV module performance. The model is characterized and validated under real meteorological conditions, where experimental temperature and output power measurements of the PV module are shown to agree with the predicted results. The model is used to validate the concept of active optical filtering. Since this model is wavelength-based, it is used to design an active optical filter for PV applications. Applying this filter to the PV module is expected to increase the output power of the module by filtering the spectrum wavelengths. The active filter performance is optimized, with different cutoff wavelengths used to maximize the module output power. If the optimized active optical filter is applied to the PV module, the module efficiency is predicted to increase by about 1%. Different technologies are considered for physical implementation of the active optical filter.

  17. Optimization in Quaternion Dynamic Systems: Gradient, Hessian, and Learning Algorithms.

    PubMed

    Xu, Dongpo; Xia, Yili; Mandic, Danilo P

    2016-02-01

The optimization of real scalar functions of quaternion variables, such as the mean square error or array output power, underpins many practical applications. Solutions typically require the calculation of the gradient and Hessian. However, real functions of quaternion variables are essentially nonanalytic, which is prohibitive to the development of quaternion-valued learning systems. To address this issue, we propose new definitions of the quaternion gradient and Hessian based on the novel generalized Hamilton-real (GHR) calculus, thus making possible the efficient derivation of general optimization algorithms directly in the quaternion field, rather than using the isomorphism with the real domain, as is current practice. In addition, unlike existing quaternion gradients, the GHR calculus allows for the product and chain rules, and for a one-to-one correspondence of the novel quaternion gradient and Hessian with their real counterparts. Properties of the quaternion gradient and Hessian relevant to numerical applications are also introduced, opening a new avenue of research in quaternion optimization and greatly simplifying the derivation of learning algorithms. The proposed GHR calculus is shown to yield the same generic algorithm forms as the corresponding real- and complex-valued algorithms. Advantages of the proposed framework are illustrated in simulations of quaternion signal processing and neural networks.

  18. Voltage stability index based optimal placement of static VAR compensator and sizing using Cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Venkateswara Rao, B.; Kumar, G. V. Nagesh; Chowdary, D. Deepak; Bharathi, M. Aruna; Patra, Stutee

    2017-07-01

This paper presents a new metaheuristic algorithm, the Cuckoo Search Algorithm (CSA), for solving the optimal power flow (OPF) problem with minimization of real power generation cost. The CSA is found to be an efficient algorithm for solving single-objective optimal power flow problems. Its performance is tested on the IEEE 57-bus test system with real power generation cost minimization as the objective function. The Static VAR Compensator (SVC) is one of the best shunt-connected devices in the Flexible Alternating Current Transmission System (FACTS) family; it is capable of controlling bus voltage magnitudes by injecting reactive power into the system. In this paper, an SVC is integrated into the CSA-based optimal power flow to optimize the real power generation cost and to improve the voltage profile of the system. CSA gives better results than a genetic algorithm (GA) both with and without the SVC.
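A minimal sketch of Cuckoo Search on a toy cost function is shown below. The step-size rule, the abandonment policy, and the sphere objective standing in for generation cost are simplifying assumptions, not the paper's OPF formulation:

```python
import random

def cuckoo_search(f, dim, n_nests=15, pa=0.25, iters=300, lo=-5.0, hi=5.0):
    """Minimal Cuckoo Search sketch: heavy-tailed random walks around the
    current best generate new solutions ('cuckoo eggs'); each generation,
    a fraction pa of the worst nests is abandoned and rebuilt at random."""
    random.seed(1)  # deterministic for the demo
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    best = min(nests, key=f)
    for _ in range(iters):
        for i in range(n_nests):
            # Heavy-tailed step (a crude stand-in for a Mantegna Levy flight).
            step = [0.1 * random.gauss(0, 1) / (abs(random.gauss(0, 1)) ** 0.5 + 1e-12)
                    * (x - b) for x, b in zip(nests[i], best)]
            trial = [min(hi, max(lo, x + s)) for x, s in zip(nests[i], step)]
            j = random.randrange(n_nests)  # compare against a random nest
            if f(trial) < f(nests[j]):
                nests[j] = trial
        nests.sort(key=f)  # abandon the worst pa fraction of nests
        for i in range(n_nests - int(pa * n_nests), n_nests):
            nests[i] = [random.uniform(lo, hi) for _ in range(dim)]
        if f(nests[0]) < f(best):
            best = list(nests[0])
    return best, f(best)

# Toy objective standing in for generation cost: a shifted sphere function.
def sphere(x):
    return sum((xi - 1.0) ** 2 for xi in x)

best, cost = cuckoo_search(sphere, dim=4)
print(round(cost, 3))
```

In the paper's setting, `f` would be the generation-cost objective evaluated through a power-flow solution subject to network constraints, rather than an analytic test function.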

  19. A global earthquake discrimination scheme to optimize ground-motion prediction equation selection

    USGS Publications Warehouse

    Garcia, Daniel; Wald, David J.; Hearne, Michael

    2012-01-01

    We present a new automatic earthquake discrimination procedure to determine in near-real time the tectonic regime and seismotectonic domain of an earthquake, its most likely source type, and the corresponding ground-motion prediction equation (GMPE) class to be used in the U.S. Geological Survey (USGS) Global ShakeMap system. This method makes use of the Flinn–Engdahl regionalization scheme, seismotectonic information (plate boundaries, global geology, seismicity catalogs, and regional and local studies), and the source parameters available from the USGS National Earthquake Information Center in the minutes following an earthquake to give the best estimation of the setting and mechanism of the event. Depending on the tectonic setting, additional criteria based on hypocentral depth, style of faulting, and regional seismicity may be applied. For subduction zones, these criteria include the use of focal mechanism information and detailed interface models to discriminate among outer-rise, upper-plate, interface, and intraslab seismicity. The scheme is validated against a large database of recent historical earthquakes. Though developed to assess GMPE selection in Global ShakeMap operations, we anticipate a variety of uses for this strategy, from real-time processing systems to any analysis involving tectonic classification of sources from seismic catalogs.

  20. A data acquisition protocol for a reactive wireless sensor network monitoring application.

    PubMed

    Aderohunmu, Femi A; Brunelli, Davide; Deng, Jeremiah D; Purvis, Martin K

    2015-04-30

Limiting energy consumption is one of the primary aims for most real-world deployments of wireless sensor networks. Unfortunately, attempts to optimize energy efficiency are often in conflict with the demand for network reactiveness to transmit urgent messages. In this article, we propose SWIFTNET, a reactive data acquisition scheme. It is built on the synergies arising from a combination of data reduction methods and energy-efficient data compression schemes; in particular, it combines compressed sensing, data prediction and adaptive sampling strategies. We show how this approach dramatically reduces the amount of unnecessary data transmission in deployments for environmental monitoring and surveillance networks. SWIFTNET targets any monitoring application that requires high reactiveness with aggressive data collection and transmission. To test the performance of this method, we present a real-world testbed for wildfire monitoring as a use case. The results from our in-house deployment testbed of 15 nodes have proven favorable: on average, over 50% communication reduction is achieved compared with a default adaptive prediction method, without any loss in accuracy. In addition, SWIFTNET is able to guarantee reactiveness by adjusting the sampling interval from 5 min down to 15 s in our application domain.
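The reactive interval adjustment can be sketched as a simple policy: sample faster when measurements diverge from the model's prediction, slower while they agree. The halving/doubling rule, the threshold, and the function name are our assumptions; only the 15 s to 5 min bounds come from the record above:

```python
def next_interval(current, error, threshold, min_s=15, max_s=300):
    """Pick the next sampling interval in seconds. Shorten it when the
    reading deviates from the prediction (an event may be starting),
    lengthen it while the prediction tracks well."""
    if abs(error) > threshold:
        return max(min_s, current // 2)  # react: sample faster
    return min(max_s, current * 2)       # relax: save energy

# (measured, predicted) pairs; the last reading signals an event.
interval = 300
for measured, predicted in [(20.1, 20.0), (20.2, 20.1), (48.0, 20.3)]:
    interval = next_interval(interval, measured - predicted, threshold=2.0)
print(interval)  # 150 -- halved once after the anomalous reading
```

Skipping transmissions while the predictor tracks well is where the reported communication savings come from; the reactive shortening preserves responsiveness during events.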

  1. A Data Acquisition Protocol for a Reactive Wireless Sensor Network Monitoring Application

    PubMed Central

    Aderohunmu, Femi A.; Brunelli, Davide; Deng, Jeremiah D.; Purvis, Martin K.

    2015-01-01

Limiting energy consumption is one of the primary aims for most real-world deployments of wireless sensor networks. Unfortunately, attempts to optimize energy efficiency are often in conflict with the demand for network reactiveness to transmit urgent messages. In this article, we propose SWIFTNET, a reactive data acquisition scheme. It is built on the synergies arising from a combination of data reduction methods and energy-efficient data compression schemes; in particular, it combines compressed sensing, data prediction and adaptive sampling strategies. We show how this approach dramatically reduces the amount of unnecessary data transmission in deployments for environmental monitoring and surveillance networks. SWIFTNET targets any monitoring application that requires high reactiveness with aggressive data collection and transmission. To test the performance of this method, we present a real-world testbed for wildfire monitoring as a use case. The results from our in-house deployment testbed of 15 nodes have proven favorable: on average, over 50% communication reduction is achieved compared with a default adaptive prediction method, without any loss in accuracy. In addition, SWIFTNET is able to guarantee reactiveness by adjusting the sampling interval from 5 min down to 15 s in our application domain. PMID:25942642

  2. ELM Meets Urban Big Data Analysis: Case Studies

    PubMed Central

    Chen, Huajun; Chen, Jiaoyan

    2016-01-01

In recent years, the rapid progress of urban computing has engendered big-data issues that create both opportunities and challenges. The heterogeneity and sheer volume of data, together with the large gap between the physical and virtual worlds, make it difficult to solve practical problems in urban computing quickly. In this paper, we propose a general application framework of ELM for urban computing. We present several real case studies of the framework, such as smog-related health hazard prediction and optimal retail store placement. Experiments involving urban data in China show the efficiency, accuracy, and flexibility of our proposed framework. PMID:27656203

  3. Robust neural network with applications to credit portfolio data analysis.

    PubMed

    Feng, Yijia; Li, Runze; Sudjianto, Agus; Zhang, Yiyun

    2010-01-01

In this article, we study nonparametric conditional quantile estimation via a neural network structure. We propose an estimation method that combines quantile regression and neural networks (the robust neural network, RNN). It provides good smoothing performance in the presence of outliers and can be used to construct prediction bands. A Majorization-Minimization (MM) algorithm is developed for the optimization. A Monte Carlo simulation study is conducted to assess the performance of the RNN. Comparison with other nonparametric regression methods (e.g., local linear regression and regression splines) in a real data application demonstrates the advantage of the newly proposed procedure.
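The quantile-regression ingredient of such a model rests on the pinball (check) loss, which weights residuals asymmetrically around the target quantile. A minimal sketch with toy data (the network and MM algorithm from the record are not reproduced here):

```python
def pinball_loss(y_true, y_pred, tau):
    """Mean pinball (check) loss at quantile level tau: residuals above
    the prediction are weighted by tau, those below by (1 - tau)."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        r = y - p
        total += tau * r if r >= 0 else (tau - 1.0) * r
    return total / len(y_true)

# At tau = 0.5 the pinball loss is half the mean absolute error,
# which is why minimizing it estimates the conditional median.
loss = pinball_loss([1.0, 2.0, 3.0], [2.0, 2.0, 2.0], 0.5)
print(loss)  # 0.3333... = 0.5 * MAE of residuals (1, 0, 1)
```

Training the same model at two levels (e.g., tau = 0.05 and 0.95) is what yields the prediction bands the abstract mentions.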

  4. Social Trust Prediction Using Heterogeneous Networks

    PubMed Central

    HUANG, JIN; NIE, FEIPING; HUANG, HENG; TU, YI-CHENG; LEI, YU

    2014-01-01

Along with the increasing popularity of social websites, online users rely more on trustworthiness information to make decisions, extract and filter information, and tag and build connections with other users. However, such social network data often suffer from severe data sparsity and are not able to provide users with enough information. Therefore, trust prediction has emerged as an important topic in social network research. Traditional approaches are primarily based on exploring the trust graph topology itself. However, research in sociology and our life experience suggest that people who are in the same social circle often exhibit similar behaviors and tastes. To take advantage of this ancillary information for trust prediction, the challenge then becomes what to transfer and how to transfer. In this article, we address this problem by aggregating heterogeneous social networks and propose a novel joint social networks mining (JSNM) method. Our new joint learning model explores the user-group-level similarity between correlated graphs and simultaneously learns the individual graph structure; therefore, the shared structures and patterns from multiple social networks can be utilized to enhance the prediction tasks. As a result, we not only improve the trust prediction in the target graph but also facilitate other information retrieval tasks in the auxiliary graphs. To optimize the proposed objective function, we use an alternating optimization technique to break the objective function down into several manageable subproblems. We further introduce an auxiliary function to solve the optimization problems with rigorously proved convergence. Extensive experiments have been conducted on both synthetic and real-world data. All empirical results demonstrate the effectiveness of our method. PMID:24729776

  5. Stabilizing l1-norm prediction models by supervised feature grouping.

    PubMed

    Kamkar, Iman; Gupta, Sunil Kumar; Phung, Dinh; Venkatesh, Svetha

    2016-02-01

Emerging Electronic Medical Records (EMRs) have transformed modern healthcare. These records have great potential to be used for building clinical prediction models. However, a problem in using them is their high dimensionality. Since much of the information may not be relevant for prediction, the underlying complexity of the prediction models may not be high. A popular way to deal with this problem is to employ feature selection. Lasso and l1-norm-based feature selection methods have shown promising results. However, in the presence of correlated features, these methods select feature sets that change considerably with small changes in the data. This prevents clinicians from obtaining a stable feature set, which is crucial for clinical decision making. Grouping correlated variables together can improve the stability of feature selection; however, such grouping is usually not known and needs to be estimated for optimal performance. Addressing this problem, we propose a new model that can simultaneously learn the grouping of correlated features and perform stable feature selection. We formulate the model as a constrained optimization problem and provide an efficient solution with guaranteed convergence. Our experiments with both synthetic and real-world datasets show that the proposed model is significantly more stable than Lasso and many existing state-of-the-art shrinkage and classification methods. We further show that in terms of prediction performance, the proposed method consistently outperforms Lasso and other baselines. Our model can be used for selecting stable risk factors for a variety of healthcare problems, and so can assist clinicians toward accurate decision making. Copyright © 2015 Elsevier Inc. All rights reserved.
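The baseline whose instability motivates the paper, an l1-penalized least-squares fit, can be sketched via cyclic coordinate descent with soft-thresholding. The synthetic data and hyperparameters below are illustrative, and columns are assumed approximately standardized; the paper's feature-grouping extension is not shown:

```python
import random

def soft_threshold(z, g):
    """Soft-thresholding operator, the proximal map of the l1 penalty."""
    if z > g:
        return z - g
    if z < -g:
        return z + g
    return 0.0

def lasso_cd(X, y, lam, iters=200):
    """l1-penalized least squares by cyclic coordinate descent.
    Assumes columns of X are approximately standardized."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # Partial residuals with feature j excluded.
            r = [y[i] - sum(w[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            zj = sum(X[i][j] ** 2 for i in range(n)) / n
            w[j] = soft_threshold(rho, lam) / zj
    return w

# Synthetic data: only the first of three features is truly predictive.
random.seed(0)
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(100)]
y = [2.0 * row[0] + random.gauss(0, 0.1) for row in X]
w = lasso_cd(X, y, lam=0.1)
print([round(v, 2) for v in w])
```

With independent features the irrelevant coefficients are driven to exactly zero; the instability the paper targets appears when features are strongly correlated, since small data perturbations then swap which member of a correlated group gets the nonzero weight.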

  6. Social Trust Prediction Using Heterogeneous Networks.

    PubMed

    Huang, Jin; Nie, Feiping; Huang, Heng; Tu, Yi-Cheng; Lei, Yu

    2013-11-01

Along with the increasing popularity of social websites, online users rely more on trustworthiness information to make decisions, extract and filter information, and tag and build connections with other users. However, such social network data often suffer from severe data sparsity and are not able to provide users with enough information. Therefore, trust prediction has emerged as an important topic in social network research. Traditional approaches are primarily based on exploring the trust graph topology itself. However, research in sociology and our life experience suggest that people who are in the same social circle often exhibit similar behaviors and tastes. To take advantage of this ancillary information for trust prediction, the challenge then becomes what to transfer and how to transfer. In this article, we address this problem by aggregating heterogeneous social networks and propose a novel joint social networks mining (JSNM) method. Our new joint learning model explores the user-group-level similarity between correlated graphs and simultaneously learns the individual graph structure; therefore, the shared structures and patterns from multiple social networks can be utilized to enhance the prediction tasks. As a result, we not only improve the trust prediction in the target graph but also facilitate other information retrieval tasks in the auxiliary graphs. To optimize the proposed objective function, we use an alternating optimization technique to break the objective function down into several manageable subproblems. We further introduce an auxiliary function to solve the optimization problems with rigorously proved convergence. Extensive experiments have been conducted on both synthetic and real-world data. All empirical results demonstrate the effectiveness of our method.

  7. Deviation from symmetrically self-similar branching in trees predicts altered hydraulics, mechanics, light interception and metabolic scaling.

    PubMed

    Smith, Duncan D; Sperry, John S; Enquist, Brian J; Savage, Van M; McCulloh, Katherine A; Bentley, Lisa P

    2014-01-01

    The West, Brown, Enquist (WBE) model derives symmetrically self-similar branching to predict metabolic scaling from hydraulic conductance, K, (a metabolism proxy) and tree mass (or volume, V). The original prediction was K ∝ V^0.75. We ask whether trees differ from WBE symmetry and whether it matters for plant function and scaling. We measure tree branching and model how architecture influences K, V, mechanical stability, light interception and metabolic scaling. We quantified branching architecture by measuring the path fraction, Pf: mean/maximum trunk-to-twig pathlength. WBE symmetry produces the maximum, Pf = 1.0. We explored tree morphospace using a probability-based numerical model constrained only by biomechanical principles. Real tree Pf ranged from 0.930 (nearly symmetric) to 0.357 (very asymmetric). At each modeled tree size, a reduction in Pf led to: increased K; decreased V; increased mechanical stability; and decreased light absorption. When Pf was ontogenetically constant, strong asymmetry only slightly steepened metabolic scaling. The Pf ontogeny of real trees, however, was 'U' shaped, resulting in size-dependent metabolic scaling that exceeded 0.75 in small trees before falling below 0.65. Architectural diversity appears to matter considerably for whole-tree hydraulics, mechanics, photosynthesis and potentially metabolic scaling. Optimal architectures likely exist that maximize carbon gain per structural investment. © 2013 The Authors. New Phytologist © 2013 New Phytologist Trust.
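    The path fraction defined above (Pf = mean/maximum trunk-to-twig path length, equal to 1.0 under WBE symmetry) is easy to compute on a branching map. A small sketch with two hypothetical trees, counting path length in segments:

```python
def path_fraction(children, root="trunk"):
    """Pf = mean / max trunk-to-twig path length, counted in segments.
    `children` maps each segment to its daughters; twigs have none."""
    lengths = []
    def walk(node, depth):
        kids = children.get(node, [])
        if not kids:
            lengths.append(depth)
        for k in kids:
            walk(k, depth + 1)
    walk(root, 1)
    return sum(lengths) / len(lengths) / max(lengths)

# Symmetric (WBE-like) tree: every twig equally far from the trunk
symmetric = {"trunk": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
# Asymmetric tree: twigs sit at different depths
asymmetric = {"trunk": ["a", "b"], "a": ["a1", "a2"], "a1": ["a11", "a12"]}

print(path_fraction(symmetric))   # 1.0
print(path_fraction(asymmetric))  # 0.8125
```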

  8. An improved simulation of the 2015 El Niño event by optimally correcting the initial conditions and model parameters in an intermediate coupled model

    NASA Astrophysics Data System (ADS)

    Zhang, Rong-Hua; Tao, Ling-Jiang; Gao, Chuan

    2017-09-01

    Large uncertainties exist in real-time predictions of the 2015 El Niño event, which have systematic intensity biases that are strongly model-dependent. It is critically important to characterize these model biases so they can be reduced appropriately. In this study, the conditional nonlinear optimal perturbation (CNOP)-based approach was applied to an intermediate coupled model (ICM) equipped with a four-dimensional variational data assimilation technique. The CNOP-based approach was used to quantify prediction errors that can be attributed to initial conditions (ICs) and model parameters (MPs). Two key MPs were considered in the ICM: one represents the intensity of the thermocline effect, and the other represents the relative coupling intensity between the ocean and atmosphere. Two experiments were performed to illustrate the effects of error corrections, one with a standard simulation and another with an optimized simulation in which errors in the ICs and MPs derived from the CNOP-based approach were optimally corrected. The results indicate that simulations of the 2015 El Niño event can be effectively improved by using CNOP-derived error corrections. In particular, the El Niño intensity in late 2015 was adequately captured when simulations were started from early 2015. Quantitatively, the Niño3.4 SST index simulated in Dec. 2015 increased to 2.8 °C in the optimized simulation, compared with only 1.5 °C in the standard simulation. The feasibility and effectiveness of using the CNOP-based technique to improve ENSO simulations are demonstrated in the context of the 2015 El Niño event. The limitations and further applications are also discussed.

  9. Optimal combinations of control strategies and cost-effective analysis for visceral leishmaniasis disease transmission.

    PubMed

    Biswas, Santanu; Subramanian, Abhishek; ELMojtaba, Ibrahim M; Chattopadhyay, Joydev; Sarkar, Ram Rup

    2017-01-01

    Visceral leishmaniasis (VL) is a deadly neglected tropical disease that poses a serious problem in various countries all over the world. Implementation of various intervention strategies fails to control the spread of this disease due to issues of parasite drug resistance and resistance of sandfly vectors to insecticide sprays. Due to this, policy makers need to develop novel strategies or resort to a combination of multiple intervention strategies to control the spread of the disease. To address this issue, we propose an extensive SIR-type model for anthroponotic visceral leishmaniasis transmission with seasonal fluctuations modeled in the form of a periodic sandfly biting rate. Fitting the model to real data reported from South Sudan, we estimate the model parameters and compare the model predictions with known VL cases. Using optimal control theory, we study the effects of popular control strategies, namely drug-based treatment of symptomatic and PKDL-infected individuals, insecticide-treated bednets and spraying of insecticides, on the dynamics of infected human and vector populations. We propose that the strategies remain ineffective in curbing the disease individually, as opposed to the use of optimal combinations of the mentioned strategies. Testing the model for different optimal combinations while considering periodic seasonal fluctuations, we find that the optimal combination of treatment of individuals and insecticide sprays performs well in controlling the disease for the time period of intervention introduced. Performing a cost-effectiveness analysis, we identify that the same strategy also proves to be efficacious and cost-effective. Finally, we suggest that our model would be helpful for policy makers to predict the best intervention strategies for specific time periods and their appropriate implementation for elimination of visceral leishmaniasis.
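    The role of the periodic biting rate can be illustrated with a toy compartmental model. This is a plain seasonally forced SIR host model with invented rates, far simpler than the paper's host-vector system, but it shows how the forcing enters the transmission term:

```python
import math

def simulate(beta0=0.25, eps=0.4, gamma=0.1, days=365, dt=0.1):
    """Euler-integrated SIR model whose transmission rate carries a
    seasonal modulation standing in for the periodic biting rate."""
    S, I, R = 0.99, 0.01, 0.0
    t = 0.0
    while t < days:
        beta = beta0 * (1 + eps * math.cos(2 * math.pi * t / 365))
        new_inf = beta * S * I * dt   # new infections this step
        rec = gamma * I * dt          # recoveries this step
        S, I, R = S - new_inf, I + new_inf - rec, R + rec
        t += dt
    return S, I, R

S, I, R = simulate()
print(round(R, 3))  # fraction of the population recovered after a year
```

Swapping the constant forcing amplitude `eps` for a control-dependent term is the kind of modification optimal control theory then optimizes over time.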

  10. Bidding strategy for microgrid in day-ahead market based on hybrid stochastic/robust optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Guodong; Xu, Yan; Tomsovic, Kevin

    In this paper, we propose an optimal bidding strategy in the day-ahead market of a microgrid consisting of intermittent distributed generation (DG), storage, dispatchable DG and price responsive loads. The microgrid coordinates the energy consumption or production of its components and trades electricity in both the day-ahead and real-time markets to minimize its operating cost as a single entity. The bidding problem is challenging due to a variety of uncertainties, including power output of intermittent DG, load variation, and day-ahead and real-time market prices. A hybrid stochastic/robust optimization model is proposed to minimize the expected net cost, i.e., the expected total cost of operation minus the total benefit of demand. This formulation can be solved by mixed integer linear programming. The uncertain output of intermittent DG and the day-ahead market price are modeled via scenarios based on forecast results, while a robust optimization is proposed to limit the unbalanced power in the real-time market, taking into account the uncertainty of the real-time market price. Numerical simulations on a microgrid consisting of a wind turbine, a PV panel, a fuel cell, a micro-turbine, a diesel generator, a battery and a responsive load show the advantage of stochastic optimization in addition to robust optimization.
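    The stochastic/robust split can be sketched in a few lines: average the cost over day-ahead scenarios (stochastic), but price any real-time shortfall at the worst-case real-time price (robust). All quantities below are made-up illustrative numbers, not the paper's MILP:

```python
# Scenarios: (probability, day-ahead price, realized net load, rt price)
scenarios = [
    (0.3, 40.0, 9.0, 55.0),
    (0.5, 42.0, 10.0, 60.0),
    (0.2, 45.0, 12.0, 90.0),
]

def expected_cost(bid):
    """Expected cost of a day-ahead bid. Stochastic part: average over
    scenarios. Robust part: any shortfall is priced at the worst-case
    real-time price; surplus is sold back at a 10% discount."""
    worst_rt = max(rt for _, _, _, rt in scenarios)
    cost = 0.0
    for prob, da_price, load, _ in scenarios:
        shortfall = max(0.0, load - bid)
        surplus = max(0.0, bid - load)
        cost += prob * (da_price * bid
                        + worst_rt * shortfall
                        - 0.9 * da_price * surplus)
    return cost

# Brute-force search over candidate bid quantities (MWh)
best_cost, best_bid = min((expected_cost(b / 2), b / 2) for b in range(31))
print(best_bid)  # hedges against the expensive real-time market
```

With these numbers the worst-case real-time price is high enough that the cost-minimizing bid covers even the largest scenario load.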

  11. Bidding strategy for microgrid in day-ahead market based on hybrid stochastic/robust optimization

    DOE PAGES

    Liu, Guodong; Xu, Yan; Tomsovic, Kevin

    2016-01-01

    In this paper, we propose an optimal bidding strategy in the day-ahead market of a microgrid consisting of intermittent distributed generation (DG), storage, dispatchable DG and price responsive loads. The microgrid coordinates the energy consumption or production of its components and trades electricity in both the day-ahead and real-time markets to minimize its operating cost as a single entity. The bidding problem is challenging due to a variety of uncertainties, including power output of intermittent DG, load variation, and day-ahead and real-time market prices. A hybrid stochastic/robust optimization model is proposed to minimize the expected net cost, i.e., the expected total cost of operation minus the total benefit of demand. This formulation can be solved by mixed integer linear programming. The uncertain output of intermittent DG and the day-ahead market price are modeled via scenarios based on forecast results, while a robust optimization is proposed to limit the unbalanced power in the real-time market, taking into account the uncertainty of the real-time market price. Numerical simulations on a microgrid consisting of a wind turbine, a PV panel, a fuel cell, a micro-turbine, a diesel generator, a battery and a responsive load show the advantage of stochastic optimization in addition to robust optimization.

  12. Combinatorial therapy discovery using mixed integer linear programming.

    PubMed

    Pang, Kaifang; Wan, Ying-Wooi; Choi, William T; Donehower, Lawrence A; Sun, Jingchun; Pant, Dhruv; Liu, Zhandong

    2014-05-15

    Combinatorial therapies play increasingly important roles in combating complex diseases. Owing to the huge cost associated with experimental methods for identifying optimal drug combinations, computational approaches can provide a guide to limit the search space and reduce cost. However, few computational approaches have been developed for this purpose, and thus there is a great need for new algorithms for drug combination prediction. Here we proposed to formulate the optimal combinatorial therapy problem into two complementary mathematical algorithms, Balanced Target Set Cover (BTSC) and Minimum Off-Target Set Cover (MOTSC). Given a disease gene set, BTSC seeks a balanced solution that maximizes the coverage of the disease genes and minimizes the off-target hits at the same time. MOTSC seeks full coverage of the disease gene set while minimizing the off-target set. Through simulation, both BTSC and MOTSC demonstrated a much faster running time than exhaustive search with the same accuracy. When applied to real disease gene sets, our algorithms not only identified known drug combinations but also predicted novel drug combinations that are worth further testing. In addition, we developed a web-based tool to allow users to iteratively search for optimal drug combinations given a user-defined gene set. Our tool is freely available for noncommercial use at http://www.drug.liuzlab.org/. Contact: zhandong.liu@bcm.edu. Supplementary data are available at Bioinformatics online.
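    To make the MOTSC objective concrete, here is a brute-force sketch on a hypothetical drug-target toy. The paper solves this at scale with integer programming; exhaustive enumeration as below is only viable for a handful of drugs:

```python
from itertools import combinations

# Hypothetical drugs: name -> (disease genes hit, off-target genes hit)
drugs = {
    "d1": ({"g1", "g2"}, {"o1"}),
    "d2": ({"g3"}, {"o1", "o2"}),
    "d3": ({"g2", "g3"}, set()),
    "d4": ({"g1"}, {"o3"}),
}
disease = {"g1", "g2", "g3"}

def motsc(drugs, disease):
    """Brute-force Minimum Off-Target Set Cover: among drug combinations
    that fully cover the disease genes, return the one with the fewest
    off-target hits (ties broken by combination size)."""
    best = None
    for r in range(1, len(drugs) + 1):
        for combo in combinations(drugs, r):
            on = set().union(*(drugs[d][0] for d in combo))
            off = set().union(*(drugs[d][1] for d in combo))
            if disease <= on:
                key = (len(off), len(combo))
                if best is None or key < best[0]:
                    best = (key, combo)
    return best[1]

print(motsc(drugs, disease))  # d1 + d3 cover all genes with one off-target
```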

  13. Distributed cerebellar plasticity implements generalized multiple-scale memory components in real-robot sensorimotor tasks.

    PubMed

    Casellato, Claudia; Antonietti, Alberto; Garrido, Jesus A; Ferrigno, Giancarlo; D'Angelo, Egidio; Pedrocchi, Alessandra

    2015-01-01

    The cerebellum plays a crucial role in motor learning and it acts as a predictive controller. Modeling it and embedding it into sensorimotor tasks allows us to create functional links between plasticity mechanisms, neural circuits and behavioral learning. Moreover, if applied to real-time control of a neurorobot, the cerebellar model has to deal with a real noisy and changing environment, thus showing its robustness and effectiveness in learning. A biologically inspired cerebellar model with distributed plasticity, both at cortical and nuclear sites, has been used. Two cerebellum-mediated paradigms have been designed: an associative Pavlovian task and a vestibulo-ocular reflex, with multiple sessions of acquisition and extinction and with different stimuli and perturbation patterns. The cerebellar controller succeeded in generating conditioned responses and finely tuned eye movement compensation, thus reproducing human-like behaviors. Through a productive plasticity transfer from cortical to nuclear sites, the distributed cerebellar controller showed in both tasks the capability to optimize learning on multiple time-scales, to store motor memory and to effectively adapt to dynamic ranges of stimuli.
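    The cortical-to-nuclear plasticity transfer credited above with multiple-timescale memory can be caricatured with two scalar weights: a fast "cortical" site that acquires the response and a slow "nuclear" site that consolidates it. All rates here are invented, and this is a deliberate simplification of the authors' cerebellar network, not their model:

```python
def learn(trials, fast_lr=0.3, slow_lr=0.03, transfer=0.05):
    """Two-site plasticity toy: the fast weight learns the response,
    then gradually hands it off to the slow weight (consolidation)."""
    cortical = nuclear = 0.0
    target = 1.0
    for _ in range(trials):
        error = target - (cortical + nuclear)
        cortical += fast_lr * error
        nuclear += slow_lr * error + transfer * cortical
        cortical -= transfer * cortical  # transferred weight is moved, not copied
    return cortical, nuclear

c, n = learn(200)
print(round(c + n, 2), round(n, 2))  # response fully learned, memory now nuclear
```

After enough trials the fast site has emptied into the slow site, so the learned response survives even if the fast site is reset, which is the behavioral signature of consolidation the abstract describes.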

  14. Application of Spatial Neural Network Model for Optimal Operation of Urban Drainage System

    NASA Astrophysics Data System (ADS)

    KIM, B. J.; Lee, J. Y.; KIM, H. I.; Son, A. L.; Han, K. Y.

    2017-12-01

    The significance of real-time operation of drainage pumps and inundation warning systems has recently increased in order to cope with runoff from high-intensity precipitation, such as the localized heavy rain that occurs frequently and suddenly. However, existing drainage pump stations have been operated according to the judgment of a manager based on the observed stage, because the exact time of peak discharge at the pump station cannot be anticipated; as a result, the scale of pump stations has been excessively estimated. Although quick and accurate inundation analysis of downtown areas is necessary because of the huge property damage caused by floods and typhoons, previous studies risked producing results that differ from reality owing to the way buildings and roads affect the diffusion of flow. The purpose of this study is to develop a data-driven model for the real-time operation of drainage pump stations and two-dimensional inundation analysis that improves upon the problems of existing hydrologic and hydraulic models. A neuro-fuzzy system for real-time stage prediction was developed by estimating the type and number of membership functions. Based on the forecast stage, the timing of pump operation and the pumping volume were decided using a penalizing genetic algorithm. Through the methodologies suggested in this study, it is practicable to forecast stage, optimize pump operation, and simulate inundation analysis in real time. This study can greatly contribute to the establishment of disaster-information maps that prevent and mitigate inundation in urban drainage areas. The applicability of the developed model was verified for the five drainage pump stations in the Mapo drainage area. These operating rules are expected to enable effective management of urban drainage facilities.
Keywords: Urban flooding; Geo-ANFIS method; Optimal operation; Drainage system. Acknowledgement: This research was supported by a grant (17AWMP-B079625-04) from the Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
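    The scheduling idea (use forecast stages to decide when the pumps begin to work) can be sketched as a one-parameter rule tuned by brute force. Every number below (stages, costs, the fixed-drop pump effect) is an invented stand-in for the study's ANFIS forecasts and penalized genetic algorithm:

```python
# Hypothetical forecast stages (m) at successive time steps
forecast = [1.2, 1.6, 2.1, 2.6, 2.9, 2.4, 1.8, 1.3]
FLOOD_STAGE = 2.5   # flooding above this stage
PUMP_DROP = 0.5     # assumed stage reduction while pumping
PUMP_COST = 1.0     # cost per interval of pumping
FLOOD_COST = 10.0   # penalty per flooded interval

def total_cost(threshold):
    """Pump whenever the forecast stage exceeds the trial threshold;
    penalize pumping time plus any residual flooding."""
    cost = 0.0
    for s in forecast:
        pumping = s > threshold
        stage = s - PUMP_DROP if pumping else s
        cost += PUMP_COST * pumping + FLOOD_COST * (stage > FLOOD_STAGE)
    return cost

best_cost, best_threshold = min(
    (total_cost(th / 10), th / 10) for th in range(10, 30))
print(best_threshold)  # stage threshold at which to start the pumps
```

The study's genetic algorithm searches a much richer rule space, but the trade-off it balances is the same: pumping effort against residual flooding.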

  15. Support Vector Hazards Machine: A Counting Process Framework for Learning Risk Scores for Censored Outcomes.

    PubMed

    Wang, Yuanjia; Chen, Tianle; Zeng, Donglin

    2016-01-01

    Learning risk scores to predict dichotomous or continuous outcomes using machine learning approaches has been studied extensively. However, how to learn risk scores for time-to-event outcomes subject to right censoring has received little attention until recently. Existing approaches rely on inverse probability weighting or rank-based regression, which may be inefficient. In this paper, we develop a new support vector hazards machine (SVHM) approach to predict censored outcomes. Our method is based on predicting the counting process associated with the time-to-event outcomes among subjects at risk via a series of support vector machines. Introducing counting processes to represent time-to-event data leads to a connection between support vector machines in supervised learning and hazards regression in standard survival analysis. To account for the different at-risk populations at observed event times, a time-varying offset is used in estimating risk scores. The resulting optimization is a convex quadratic programming problem that can easily incorporate non-linearity using the kernel trick. We demonstrate an interesting link from the profiled empirical risk function of SVHM to the Cox partial likelihood. We then formally show that SVHM is optimal in discriminating a covariate-specific hazard function from the population average hazard function, and establish the consistency and learning rate of the predicted risk using the estimated risk scores. Simulation studies show improved prediction accuracy of the event times using SVHM compared to existing machine learning methods and standard conventional approaches. Finally, we analyze two real-world biomedical studies in which we use clinical markers and neuroimaging biomarkers to predict the age-at-onset of a disease, and demonstrate the superiority of SVHM in distinguishing high-risk versus low-risk subjects.

  16. Real-time sensing of fatigue crack damage for information-based decision and control

    NASA Astrophysics Data System (ADS)

    Keller, Eric Evans

    Information-based decision and control for structures that are subject to failure by fatigue cracking is based on the following notion: maintenance, usage scheduling, and control parameter tuning can be optimized through real-time knowledge of the current state of fatigue crack damage. Additionally, if the material properties of a mechanical structure can be identified within a smaller range, then the remaining life prediction of that structure will be substantially more accurate. Information-based decision systems can rely on physical models, estimation of material properties, exact knowledge of usage history, and sensor data to synthesize an accurate snapshot of the current state of damage and the likely remaining life of a structure under given assumed loading. The work outlined in this thesis is structured to enhance the development of information-based decision and control systems. This is achieved by constructing a test facility for laboratory experiments on real-time damage sensing. This test facility makes use of a methodology that has been formulated for fatigue crack model parameter estimation and significantly improves the quality of predictions of remaining life. Specifically, the thesis focuses on development of an on-line fatigue crack damage sensing and life prediction system that is built upon the disciplines of Systems Sciences and Mechanics of Materials. A major part of the research effort has been expended to design and fabricate a test apparatus which allows: (i) measurement and recording of statistical data for fatigue crack growth in metallic materials via different sensing techniques; and (ii) identification of stochastic model parameters for prediction of fatigue crack damage.
To this end, this thesis describes the test apparatus and the associated instrumentation based on four different sensing techniques, namely, traveling optical microscopy, ultrasonic flaw detection, Alternating Current Potential Drop (ACPD), and fiber-optic extensometry-based compliance, for crack length measurements.
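    A standard way to turn a sensed crack length into a remaining-life prediction, consistent in spirit with the fatigue models discussed here, is to integrate the Paris crack-growth law. The constants below are illustrative placeholders, not values calibrated to the thesis's specimens:

```python
import math

def remaining_cycles(a0, a_crit, C=1e-11, m=3.0, d_sigma=100.0, Y=1.12):
    """Paris-law remaining-life estimate: numerically integrate
    da/dN = C * (dK)^m from the sensed crack length a0 (m) up to the
    critical length a_crit (m). dK is the stress-intensity range for
    stress range d_sigma (MPa) and geometry factor Y."""
    a, n, da = a0, 0.0, 1e-5
    while a < a_crit:
        delta_k = Y * d_sigma * math.sqrt(math.pi * a)  # MPa*sqrt(m)
        n += da / (C * delta_k ** m)                    # cycles for this step
        a += da
    return n

# A longer sensed crack leaves fewer remaining cycles -- the payoff of
# real-time crack sensing for maintenance and usage scheduling
print(remaining_cycles(0.002, 0.02) > remaining_cycles(0.005, 0.02))  # True
```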

  17. Using simplifications of reality in the real world: Robust benefits of models for decision making

    NASA Astrophysics Data System (ADS)

    Hunt, R. J.

    2008-12-01

    Models are by definition simplifications of reality; the degree and nature of simplification, however, is debated. One view is "the world is 3D, heterogeneous, and transient, thus good models are too" - the more a model directly simulates the complexity of the real world, the better it is considered to be. An alternative view is to use only simple models up front, because real-world complexity can never be truly known. A third view is to construct and calibrate as many models as there are predictions. A fourth is to build highly parameterized models and either look at an ensemble of results, or use mathematical regularization to identify an optimal, most reasonable parameter set and fit. Although each view may have utility for a given decision-making process, there are common threads that perhaps run through all views. First, the model-construction process itself can help the decision-making process because it raises the discussion between opposing parties from one of contrasting professional opinions to one of reasonable types and ranges of model inputs and processes. Secondly, no matter which view guides the model building, model predictions might be expected to perform poorly in the future due to unanticipated changes and stressors to the underlying system simulated. Although this does not reduce the obligation of the modeler to build representative tools for the system, it should serve to temper expectations of model performance. Finally, perhaps the most under-appreciated utility of models is for calculating the reduction in prediction uncertainty resulting from different data collection strategies - an attractive feature separate from the calculation and minimization of absolute prediction uncertainty itself. This type of model output facilitates focusing on efficient use of current and future monitoring resources - something valued by many decision-makers regardless of background, system managed, and societal context.

  18. QSAR prediction of additive and non-additive mixture toxicities of antibiotics and pesticide.

    PubMed

    Qin, Li-Tang; Chen, Yu-Han; Zhang, Xin; Mo, Ling-Yun; Zeng, Hong-Hu; Liang, Yan-Peng

    2018-05-01

    Antibiotics and pesticides may exist as mixtures in the real environment. The combined effect of a mixture can be either additive or non-additive (synergistic or antagonistic). However, no effective approach exists for predicting the synergistic and antagonistic toxicities of mixtures. In this study, we developed a quantitative structure-activity relationship (QSAR) model for the toxicities (half effect concentration, EC50) of 45 binary and multi-component mixtures composed of two antibiotics and four pesticides. The acute toxicities of single compounds and mixtures toward Aliivibrio fischeri were tested. A genetic algorithm was used to obtain the optimized model with three theoretical descriptors. Various internal and external validation techniques indicated that the QSAR model, with a coefficient of determination of 0.9366 and a root mean square error of 0.1345, predicted the toxicities of the 45 mixtures, which presented additive, synergistic, and antagonistic effects. Compared with the traditional concentration addition and independent action models, the QSAR model exhibited an advantage in predicting mixture toxicity. Thus, the presented approach may be able to fill the gaps in predicting non-additive toxicities of binary and multi-component mixtures. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Multivariate predictors of music perception and appraisal by adult cochlear implant users.

    PubMed

    Gfeller, Kate; Oleson, Jacob; Knutson, John F; Breheny, Patrick; Driscoll, Virginia; Olszewski, Carol

    2008-02-01

    The research examined whether performance by adult cochlear implant recipients on a variety of recognition and appraisal tests derived from real-world music could be predicted from technological, demographic, and life experience variables, as well as speech recognition scores. A representative sample of 209 adults implanted between 1985 and 2006 participated. Using multiple linear regression models and generalized linear mixed models, sets of optimal predictor variables were selected that effectively predicted performance on a test battery that assessed different aspects of music listening. These analyses established the importance of distinguishing between the accuracy of music perception and the appraisal of musical stimuli when using music listening as an index of implant success. Importantly, neither device type nor processing strategy predicted music perception or music appraisal. Speech recognition performance was not a strong predictor of music perception, and primarily predicted music perception when the test stimuli included lyrics. Additionally, limitations in the utility of speech perception in predicting musical perception and appraisal underscore the utility of music perception as an alternative outcome measure for evaluating implant outcomes. Music listening background, residual hearing (i.e., hearing aid use), cognitive factors, and some demographic factors predicted several indices of perceptual accuracy or appraisal of music.

  20. Linear regression models for solvent accessibility prediction in proteins.

    PubMed

    Wagner, Michael; Adamczak, Rafał; Porollo, Aleksey; Meller, Jarosław

    2005-04-01

    The relative solvent accessibility (RSA) of an amino acid residue in a protein structure is a real number that represents the solvent exposed surface area of this residue in relative terms. The problem of predicting the RSA from the primary amino acid sequence can therefore be cast as a regression problem. Nevertheless, RSA prediction has so far typically been cast as a classification problem. Consequently, various machine learning techniques have been used within the classification framework to predict whether a given amino acid exceeds some (arbitrary) RSA threshold and would thus be predicted to be "exposed," as opposed to "buried." We have recently developed novel methods for RSA prediction using nonlinear regression techniques which provide accurate estimates of the real-valued RSA and outperform classification-based approaches with respect to commonly used two-class projections. However, while their performance seems to provide a significant improvement over previously published approaches, these Neural Network (NN) based methods are computationally expensive to train and involve several thousand parameters. In this work, we develop alternative regression models for RSA prediction which are computationally much less expensive, involve orders-of-magnitude fewer parameters, and are still competitive in terms of prediction quality. In particular, we investigate several regression models for RSA prediction using linear L1-support vector regression (SVR) approaches as well as standard linear least squares (LS) regression. Using rigorously derived validation sets of protein structures and extensive cross-validation analysis, we compare the performance of the SVR with that of LS regression and NN-based methods. In particular, we show that the flexibility of the SVR (as encoded by metaparameters such as the error insensitivity and the error penalization terms) can be very beneficial to optimize the prediction accuracy for buried residues. 
We conclude that the simple and computationally much more efficient linear SVR performs comparably to nonlinear models and thus can be used in order to facilitate further attempts to design more accurate RSA prediction methods, with applications to fold recognition and de novo protein structure prediction methods.
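    As a flavour of the regression framing (ordinary least squares rather than the authors' L1-SVR, and a single invented descriptor rather than sequence profiles), real-valued RSA can be fitted directly and the prediction clamped to the valid [0, 1] range:

```python
def fit_ls(x, y):
    """Ordinary least squares for one descriptor: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

def predict(intercept, slope, xi):
    # RSA is a fraction of exposed surface area, so clamp to [0, 1]
    return min(1.0, max(0.0, intercept + slope * xi))

# Hypothetical residues: higher hydrophobicity, lower solvent exposure
hydro = [4.5, 3.8, -0.4, -3.5, -4.5]
rsa = [0.05, 0.10, 0.45, 0.70, 0.85]
intercept, slope = fit_ls(hydro, rsa)
print(slope < 0)  # True: hydrophobic residues tend to be buried
```

The L1-SVR the paper favours replaces the squared loss with an epsilon-insensitive one, which is what makes its metaparameters tunable for buried residues.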

  1. Idealized Experiments for Optimizing Model Parameters Using a 4D-Variational Method in an Intermediate Coupled Model of ENSO

    NASA Astrophysics Data System (ADS)

    Gao, Chuan; Zhang, Rong-Hua; Wu, Xinrong; Sun, Jichang

    2018-04-01

    Large biases exist in real-time ENSO prediction, which can be attributed to uncertainties in initial conditions and model parameters. Previously, a 4D variational (4D-Var) data assimilation system was developed for an intermediate coupled model (ICM) and used to improve ENSO modeling through optimized initial conditions. In this paper, this system is further applied to optimize model parameters. In the ICM used, one important process for ENSO is related to the anomalous temperature of subsurface water entrained into the mixed layer (Te), which is empirically and explicitly related to sea level (SL) variation. The strength of the thermocline effect on SST (referred to simply as "the thermocline effect") is represented by an introduced parameter, αTe. A numerical procedure is developed to optimize this model parameter through the 4D-Var assimilation of SST data in a twin experiment context with an idealized setting. Experiments having their initial condition optimized only, and having their initial condition plus this additional model parameter optimized, are compared. It is shown that ENSO evolution can be more effectively recovered by including the additional optimization of this parameter in ENSO modeling. The demonstrated feasibility of optimizing model parameters and initial conditions together through the 4D-Var method provides a modeling platform for ENSO studies. Further applications of the 4D-Var data assimilation system implemented in the ICM are also discussed.
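    The twin-experiment setup (generate "observations" from the model with a known parameter value, then recover that value by minimizing model-data misfit) can be demonstrated on a toy recursion. The one-variable model and brute-force scan below are invented stand-ins for the ICM and the 4D-Var minimization:

```python
import math

def model(alpha, steps=50):
    """Toy SST-anomaly recursion forced through a thermocline-strength
    parameter alpha (an invented stand-in for the ICM)."""
    sst, out = 0.1, []
    for t in range(steps):
        sst += 0.05 * (alpha * math.sin(0.2 * t) - 0.3 * sst)
        out.append(sst)
    return out

truth = model(alpha=0.8)  # synthetic "observations" with known truth

def misfit(alpha):
    """Sum-of-squares model-data misfit, the cost 4D-Var would minimize."""
    return sum((a - b) ** 2 for a, b in zip(model(alpha), truth))

# In the variational spirit, but by crude means: scan the cost function
best = min((misfit(a / 100), a / 100) for a in range(201))
print(best[1])  # 0.8 -- the parameter used to generate the observations
```

Because the observations came from the model itself, the misfit vanishes exactly at the true parameter, which is what makes the twin experiment a clean test of the optimization machinery.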

  2. Using a 4D-Variational Method to Optimize Model Parameters in an Intermediate Coupled Model of ENSO

    NASA Astrophysics Data System (ADS)

    Gao, C.; Zhang, R. H.

    2017-12-01

    Large biases exist in real-time ENSO prediction, which is attributed to uncertainties in initial conditions and model parameters. Previously, a four-dimensional variational (4D-Var) data assimilation system was developed for an intermediate coupled model (ICM) and used to improve ENSO modeling through optimized initial conditions. In this paper, this system is further applied to optimize model parameters. In the ICM used, one important process for ENSO is related to the anomalous temperature of subsurface water entrained into the mixed layer (Te), which is empirically and explicitly related to sea level (SL) variation, written as Te = αTe × FTe(SL). The introduced parameter, αTe, represents the strength of the thermocline effect on sea surface temperature (SST; referred to as the thermocline effect). A numerical procedure is developed to optimize this model parameter through the 4D-Var assimilation of SST data in a twin experiment context with an idealized setting. Experiments having only the initial condition optimized and having both the initial condition and this additional model parameter optimized are compared. It is shown that ENSO evolution can be more effectively recovered by including the additional optimization of this parameter in ENSO modeling. The demonstrated feasibility of optimizing model parameters and initial conditions together through the 4D-Var method provides a modeling platform for ENSO studies. Further applications of the 4D-Var data assimilation system implemented in the ICM are also discussed.

  3. Drug-drug interaction predictions with PBPK models and optimal multiresponse sampling time designs: application to midazolam and a phase I compound. Part 1: comparison of uniresponse and multiresponse designs using PopDes.

    PubMed

    Chenel, Marylore; Bouzom, François; Aarons, Leon; Ogungbenro, Kayode

    2008-12-01

    To determine the optimal sampling time design of a drug-drug interaction (DDI) study for the estimation of the apparent clearances (CL/F) of two co-administered drugs (SX, a phase I compound and potential CYP3A4 inhibitor, and MDZ, a reference CYP3A4 substrate) without any in vivo data, using physiologically based pharmacokinetic (PBPK) predictions, population PK modelling and multiresponse optimal design. PBPK models were developed with AcslXtreme using only in vitro data to simulate PK profiles of both drugs when they were co-administered. Then, using simulated data, population PK models were developed with NONMEM and optimal sampling times were determined by optimizing the determinant of the population Fisher information matrix with PopDes, using either two uniresponse designs (UD) or a multiresponse design (MD) with joint sampling times for both drugs. Finally, the D-optimal sampling time designs were evaluated by simulation and re-estimation with NONMEM by computing the relative root mean squared error (RMSE) and empirical relative standard errors (RSE) of CL/F. There were four and five optimal sampling times (nine different sampling times in total) in the UDs for SX and MDZ, respectively, whereas there were only five sampling times in the MD. Whatever the design and compound, CL/F was well estimated (RSE < 20% for MDZ and < 25% for SX) and the expected RSEs from PopDes were in the same range as the empirical RSEs. Moreover, there was no bias in the CL/F estimation. Since the MD required only five sampling times compared with the two UDs, the D-optimal sampling times of the MD were included in a full empirical design for the proposed clinical trial. A companion paper compares the designs with real data.
This global approach including PBPK simulations, population PK modelling and multiresponse optimal design allowed, without any in vivo data, the design of a clinical trial, using sparse sampling, capable of estimating CL/F of the CYP3A4 substrate and potential inhibitor when co-administered together.
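    The design step in this record, maximizing the determinant of the Fisher information matrix over candidate sampling times, can be sketched for a toy one-compartment model. All parameters and candidate times below are illustrative stand-ins, not the midazolam/SX values or the PopDes machinery from the study:

```python
import itertools

import numpy as np

def conc(t, cl, v, dose=100.0):
    """Toy one-compartment IV bolus model: C(t) = (dose/V) * exp(-(CL/V)*t)."""
    return (dose / v) * np.exp(-(cl / v) * t)

def fim(times, cl=5.0, v=50.0, h=1e-5):
    """Fisher information J^T J; sensitivities to (CL, V) by central differences."""
    J = np.empty((len(times), 2))
    for i, t in enumerate(times):
        J[i, 0] = (conc(t, cl + h, v) - conc(t, cl - h, v)) / (2 * h)
        J[i, 1] = (conc(t, cl, v + h) - conc(t, cl, v - h)) / (2 * h)
    return J.T @ J

# D-optimality: choose the 3 of 8 candidate times maximizing det(FIM)
candidates = [0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0]
best = max(itertools.combinations(candidates, 3),
           key=lambda ts: np.linalg.det(fim(np.array(ts))))
```

A real population design optimizes over per-subject elementary designs and both responses jointly; the brute-force subset search here only conveys the D-optimality criterion.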

  4. Rapid near-optimal aerospace plane trajectory generation and guidance

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Corban, J. E.; Markopoulos, N.

    1991-01-01

    Effort was directed toward the problems of real-time trajectory optimization and guidance-law development for National Aerospace Plane (NASP) applications. In particular, singular perturbation methods were used to develop guidance algorithms suitable for onboard, real-time implementation. The progress made in this research effort is reported.

  5. Optimization of the sources in local hyperthermia using a combined finite element-genetic algorithm method.

    PubMed

    Siauve, N; Nicolas, L; Vollaire, C; Marchal, C

    2004-12-01

    This article describes an optimization process specially designed for local and regional hyperthermia in order to achieve the desired specific absorption rate in the patient. It is based on a genetic algorithm coupled to a finite element formulation. The optimization method is applied to real human organ meshes assembled from computerized tomography scans. A 3D finite element formulation is used to calculate the electromagnetic field produced in the patient by radiofrequency or microwave sources. Space discretization is performed using incomplete first-order edge elements. The sparse complex symmetric matrix equation is solved using a conjugate gradient solver with potential projection pre-conditioning. The formulation is validated by comparing calculated specific absorption rate distributions in a phantom to temperature measurements. A genetic algorithm is used to optimize the specific absorption rate distribution, predicting the phases and amplitudes of the sources that lead to the best focalization. The objective function is defined as the ratio of the specific absorption rate in the tumour to that in healthy tissues. Several constraints, regarding the specific absorption rate in the tumour and the total power in the patient, may be prescribed. Results obtained with two types of applicators (waveguides and annular phased arrays) are presented and demonstrate the capabilities of the developed optimization process.
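    The genetic-algorithm step can be sketched on a toy 2-D problem: optimize source phases to maximize the power deposited at a "tumour" point relative to "healthy" points, for a ring of point sources. The geometry, wavelength, and GA settings are all assumed, and the toy field model replaces the paper's 3-D finite element solver:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D model: 4 point sources on a ring; the complex field at a point p
# is the sum over sources of exp(i*(phase - k*distance)) / distance (assumed)
k = 2 * np.pi / 0.1                                  # 10 cm wavelength (assumed)
sources = np.array([[0.3, 0.0], [0.0, 0.3], [-0.3, 0.0], [0.0, -0.3]])
tumour = np.array([0.05, 0.02])
healthy = np.array([[-0.1, 0.1], [0.1, -0.1], [-0.05, -0.08]])

def power(p, phases):
    d = np.linalg.norm(sources - p, axis=1)
    return abs(np.sum(np.exp(1j * (phases - k * d)) / d)) ** 2

def sar_ratio(phases):
    # objective: power at the tumour over summed power in healthy tissue
    return power(tumour, phases) / sum(power(h, phases) for h in healthy)

# Minimal GA: truncation selection plus Gaussian mutation of the phases
pop = rng.uniform(0.0, 2 * np.pi, (40, 4))
for _ in range(60):
    fitness = np.array([sar_ratio(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[-10:]]          # keep the 10 fittest
    children = (parents[rng.integers(0, 10, 30)]
                + rng.normal(0.0, 0.3, (30, 4))) % (2 * np.pi)
    pop = np.vstack([parents, children])

best = max(pop, key=sar_ratio)
```

The paper's GA additionally handles amplitudes and power constraints; this sketch only shows the selection-mutation loop driving the SAR ratio objective.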

  6. Optimal PGU operation strategy in CHP systems

    NASA Astrophysics Data System (ADS)

    Yun, Kyungtae

    Traditional power plants utilize only about 30 percent of the primary energy they consume; the rest is usually wasted in generating or transmitting electricity. On-site and near-site power generation has been considered by business, labor, and environmental groups to improve the efficiency and reliability of power generation. Combined heat and power (CHP) systems are a promising alternative to traditional power plants because of the high efficiency and low CO2 emissions achieved by recovering waste thermal energy produced during power generation. A CHP operational algorithm designed to optimize operational costs must be simple enough to implement in practice so as to minimize the computational requirements of the hardware to be installed. This dissertation focuses on the following aspects pertaining to the design of a practical CHP operational algorithm that minimizes operational costs: (a) a real-time CHP operational strategy using a hierarchical optimization algorithm; (b) analytic solutions for cost-optimal power generation unit (PGU) operation in CHP systems; (c) modeling of reciprocating internal combustion engines for power generation and heat recovery; (d) an easy-to-implement, effective, and reliable hourly building load prediction algorithm.
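    The flavor of an analytic cost-optimal PGU rule can be shown with a marginal-cost comparison: run the PGU when its fuel cost per kWh of electricity, net of the boiler fuel displaced by recovered heat, beats the grid price. All efficiencies and prices below are assumed round numbers, not values from the dissertation:

```python
# Marginal-cost dispatch rule; all prices ($/kWh) and efficiencies assumed
def pgu_setpoint(elec_kw, heat_kw, pgu_cap_kw=50.0, eta_e=0.30, eta_h=0.45,
                 fuel=0.03, grid=0.12, boiler_eta=0.85):
    """PGU electrical output (kW): follow the load (up to capacity) or stay off."""
    # Credit per kWh_e for boiler fuel displaced by recovered heat,
    # granted only when there is a heat demand to absorb that heat
    heat_credit = (eta_h / eta_e) * (fuel / boiler_eta) if heat_kw > 0 else 0.0
    if fuel / eta_e - heat_credit < grid:
        return min(elec_kw, pgu_cap_kw)   # on-site generation is cheaper
    return 0.0                            # grid power is cheaper: PGU off
```

With these numbers, generation costs $0.10/kWh before the heat credit and about $0.047/kWh after it, so the PGU follows the electric load whenever heat is also needed.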

  7. Localization of magnetic pills

    PubMed Central

    Laulicht, Bryan; Gidmark, Nicholas J.; Tripathi, Anubhav; Mathiowitz, Edith

    2011-01-01

    Numerous therapeutics demonstrate optimal absorption or activity at specific sites in the gastrointestinal (GI) tract. Yet, safe, effective pill retention within a desired region of the GI tract remains an elusive goal. We report a safe, effective method for localizing magnetic pills. To ensure safety and efficacy, we monitor and regulate attractive forces between a magnetic pill and an external magnet, while visualizing internal dose motion in real time using biplanar videofluoroscopy. Real-time monitoring yields direct visual confirmation of localization completely noninvasively, providing a platform for investigating the therapeutic benefits imparted by localized oral delivery of new and existing drugs. Additionally, we report the in vitro measurements and calculations that enabled prediction of successful magnetic localization in the rat small intestines for 12 h. The designed system for predicting and achieving successful magnetic localization can readily be applied to any area of the GI tract within any species, including humans. The described system represents a significant step forward in the ability to localize magnetic pills safely and effectively anywhere within the GI tract. What our magnetic pill localization strategy adds to the state of the art, if used as an oral drug delivery system, is the ability to monitor the force exerted by the pill on the tissue and to locate the magnetic pill within the test subject all in real time. This advance ensures both safety and efficacy of magnetic localization during the potential oral administration of any magnetic pill-based delivery system. PMID:21257903

  8. Multicolor fluorescent intravital live microscopy (FILM) for surgical tumor resection in a mouse xenograft model.

    PubMed

    Thurber, Greg M; Figueiredo, Jose L; Weissleder, Ralph

    2009-11-30

    Complete surgical resection of neoplasia remains one of the most efficient tumor therapies. However, malignant cell clusters are often left behind during surgery due to the inability to visualize and differentiate them against host tissue. Here we establish the feasibility of multicolor fluorescent intravital live microscopy (FILM) where multiple cellular and/or unique tissue compartments are stained simultaneously and imaged in real time. Theoretical simulations of imaging probe localization were carried out for three agents with specificity for cancer cells, stromal host response, or vascular perfusion. This transport analysis gave insight into the probe pharmacokinetics and tissue distribution, facilitating the experimental design and allowing predictions to be made about the localization of the probes in other animal models and in the clinic. The imaging probes were administered systemically at optimal time points based on the simulations, and the multicolor FILM images obtained in vivo were then compared to conventional pathological sections. Our data show the feasibility of real time in vivo pathology at cellular resolution and molecular specificity with excellent agreement between intravital and traditional in vitro immunohistochemistry. Multicolor FILM is an accurate method for identifying malignant tissue and cells in vivo. The imaging probes distributed in a manner similar to predictions based on transport principles, and these models can be used to design future probes and experiments. FILM can provide critical real time feedback and should be a useful tool for more effective and complete cancer resection.

  9. Tuning of Kalman filter parameters via genetic algorithm for state-of-charge estimation in battery management system.

    PubMed

    Ting, T O; Man, Ka Lok; Lim, Eng Gee; Leach, Mark

    2014-01-01

    In this work, a state-space battery model is derived mathematically to estimate the state-of-charge (SoC) of a battery system. Subsequently, Kalman filter (KF) is applied to predict the dynamical behavior of the battery model. Results show an accurate prediction as the accumulated error, in terms of root-mean-square (RMS), is a very small value. From this work, it is found that different sets of Q and R values (KF's parameters) can be applied for better performance and hence lower RMS error. This is the motivation for the application of a metaheuristic algorithm. Hence, the result is further improved by applying a genetic algorithm (GA) to tune Q and R parameters of the KF. In an online application, a GA can be applied to obtain the optimal parameters of the KF before its application to a real plant (system). This simply means that the instantaneous response of the KF is not affected by the time consuming GA as this approach is applied only once to obtain the optimal parameters. The relevant workable MATLAB source codes are given in the appendix to ease future work and analysis in this area.
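    The tuning loop can be sketched with a scalar random-walk state; a coarse grid search stands in for the paper's GA, and the toy model stands in for the battery state-space model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic truth: scalar random-walk state (a stand-in for the SoC dynamics)
n = 200
x_true = np.cumsum(rng.normal(0.0, 1e-2, n))          # process noise std 0.01
z = x_true + rng.normal(0.0, 0.1, n)                  # measurement noise std 0.1

def kf_rms(q, r):
    """RMS state-estimation error of a scalar KF with parameters q and r."""
    xhat, p, sq_err = 0.0, 1.0, 0.0
    for k in range(n):
        p += q                          # predict
        gain = p / (p + r)              # Kalman gain
        xhat += gain * (z[k] - xhat)    # update
        p *= 1.0 - gain
        sq_err += (xhat - x_true[k]) ** 2
    return (sq_err / n) ** 0.5

# Offline tuning, done once: a coarse grid search stands in for the GA
grid = [10.0 ** e for e in range(-6, 0)]
q_opt, r_opt = min(((q, r) for q in grid for r in grid),
                   key=lambda qr: kf_rms(*qr))
```

As in the paper, the expensive search runs once offline; the KF then runs with the fixed (q_opt, r_opt), so its real-time response is unaffected by the tuner.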

  10. Heuristic use of perceptual evidence leads to dissociation between performance and metacognitive sensitivity.

    PubMed

    Maniscalco, Brian; Peters, Megan A K; Lau, Hakwan

    2016-04-01

    Zylberberg, Barttfeld, and Sigman (Frontiers in Integrative Neuroscience, 6:79, 2012) found that confidence decisions, but not perceptual decisions, are insensitive to evidence against a selected perceptual choice. We present a signal detection theoretic model to formalize this insight, which gave rise to a counter-intuitive empirical prediction: that depending on the observer's perceptual choice, increasing task performance can be associated with decreasing metacognitive sensitivity (i.e., the trial-by-trial correspondence between confidence and accuracy). The model also provides an explanation as to why metacognitive sensitivity tends to be less than optimal in actual subjects. These predictions were confirmed robustly in a psychophysics experiment. In a second experiment we found that, in at least some subjects, the effects were replicated even under performance feedback designed to encourage optimal behavior. However, some subjects did show improvement under feedback, suggesting the tendency to ignore evidence against a selected perceptual choice may be a heuristic adopted by the perceptual decision-making system, rather than reflecting inherent biological limitations. We present a Bayesian modeling framework that explains why this heuristic strategy may be advantageous in real-world contexts.

  11. Tuning of Kalman Filter Parameters via Genetic Algorithm for State-of-Charge Estimation in Battery Management System

    PubMed Central

    Ting, T. O.; Lim, Eng Gee

    2014-01-01

    In this work, a state-space battery model is derived mathematically to estimate the state-of-charge (SoC) of a battery system. Subsequently, Kalman filter (KF) is applied to predict the dynamical behavior of the battery model. Results show an accurate prediction as the accumulated error, in terms of root-mean-square (RMS), is a very small value. From this work, it is found that different sets of Q and R values (KF's parameters) can be applied for better performance and hence lower RMS error. This is the motivation for the application of a metaheuristic algorithm. Hence, the result is further improved by applying a genetic algorithm (GA) to tune Q and R parameters of the KF. In an online application, a GA can be applied to obtain the optimal parameters of the KF before its application to a real plant (system). This simply means that the instantaneous response of the KF is not affected by the time consuming GA as this approach is applied only once to obtain the optimal parameters. The relevant workable MATLAB source codes are given in the appendix to ease future work and analysis in this area. PMID:25162041

  12. Quantitative modeling and optimization of magnetic tweezers.

    PubMed

    Lipfert, Jan; Hao, Xiaomin; Dekker, Nynke H

    2009-06-17

    Magnetic tweezers are a powerful tool to manipulate single DNA or RNA molecules and to study nucleic acid-protein interactions in real time. Here, we have modeled the magnetic fields of permanent magnets in magnetic tweezers and computed the forces exerted on superparamagnetic beads from first principles. For simple, symmetric geometries the magnetic fields can be calculated semianalytically using the Biot-Savart law. For complicated geometries and in the presence of an iron yoke, we employ a finite-element three-dimensional PDE solver to numerically solve the magnetostatic problem. The theoretical predictions are in quantitative agreement with direct Hall-probe measurements of the magnetic field and with measurements of the force exerted on DNA-tethered beads. Using these predictive theories, we systematically explore the effects of magnet alignment, magnet spacing, magnet size, and of adding an iron yoke to the magnets on the forces that can be exerted on tethered particles. We find that the optimal configuration for maximal stretching forces is a vertically aligned pair of magnets, with a minimal gap between the magnets and minimal flow cell thickness. Following these principles, we present a configuration that allows one to apply ≥40 pN stretching forces on approximately 1-μm tethered beads.
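    For a saturated superparamagnetic bead, the force computation reduces to F = m_sat · d|B|/dz; a toy 1-D version with an assumed exponential field profile (the paper computes B from the Biot-Savart law or a finite-element solver) illustrates the magnitudes involved:

```python
import numpy as np

# Force on a saturated superparamagnetic bead: F_z = m_sat * d|B|/dz.
# The exponential field profile and all numbers are assumed for illustration.
m_sat = 1.85e-14            # A*m^2, saturation moment of a ~1-um bead (assumed)
b0, decay = 0.3, 1.5e3      # T at the magnet face, 1/m decay constant (assumed)

def b_field(z):
    return b0 * np.exp(-decay * z)

def force_pn(z, h=1e-6):
    """Stretching force in pN at height z (m), via a central difference."""
    dbdz = (b_field(z + h) - b_field(z - h)) / (2 * h)
    return abs(m_sat * dbdz) * 1e12
```

The central-difference gradient mirrors how force is obtained from any numerically computed field map; only the field model itself is a placeholder here.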

  13. Quantitative Modeling and Optimization of Magnetic Tweezers

    PubMed Central

    Lipfert, Jan; Hao, Xiaomin; Dekker, Nynke H.

    2009-01-01

    Magnetic tweezers are a powerful tool to manipulate single DNA or RNA molecules and to study nucleic acid-protein interactions in real time. Here, we have modeled the magnetic fields of permanent magnets in magnetic tweezers and computed the forces exerted on superparamagnetic beads from first principles. For simple, symmetric geometries the magnetic fields can be calculated semianalytically using the Biot-Savart law. For complicated geometries and in the presence of an iron yoke, we employ a finite-element three-dimensional PDE solver to numerically solve the magnetostatic problem. The theoretical predictions are in quantitative agreement with direct Hall-probe measurements of the magnetic field and with measurements of the force exerted on DNA-tethered beads. Using these predictive theories, we systematically explore the effects of magnet alignment, magnet spacing, magnet size, and of adding an iron yoke to the magnets on the forces that can be exerted on tethered particles. We find that the optimal configuration for maximal stretching forces is a vertically aligned pair of magnets, with a minimal gap between the magnets and minimal flow cell thickness. Following these principles, we present a configuration that allows one to apply ≥40 pN stretching forces on ≈1-μm tethered beads. PMID:19527664

  14. BIOLEACH: Coupled modeling of leachate and biogas production on solid waste landfills

    NASA Astrophysics Data System (ADS)

    Rodrigo-Clavero, Maria-Elena; Rodrigo-Ilarri, Javier

    2015-04-01

    One of the most important factors to address when performing the environmental impact assessment of urban solid waste landfills is the evaluation of leachate production. Leachate management (collection and treatment) is also one of the most relevant economic aspects to take into account during the landfill's life. Leachate forms as a solution of biological and chemical components during the operational and post-operational phases of urban solid waste landfills, through a combination of processes involving water gains and losses inside the solid waste mass. Infiltration of external water from precipitation is the most important component of this water balance. However, anaerobic waste decomposition and biogas formation also play a role in the balance as water-consuming processes. The production of leachate and biogas is therefore a coupled process. Biogas production models usually assume optimal water-content conditions in the solid waste mass. However, real conditions during the operational phase of the landfill may differ greatly from these optimal conditions. In this work, the first results obtained in predicting both leachate and biogas production as a single coupled phenomenon on real solid waste landfills are presented. The model is applied to a synthetic case under climatological conditions typical of Mediterranean catchments.

  15. Experimental investigation into biomechanical and biotribological properties of a real intestine and their significance for design of a spiral-type robotic capsule.

    PubMed

    Zhou, Hao; Alici, Gursel; Than, Trung D; Li, Weihua

    2014-03-01

    This article reports on the results and implications of our experimental investigation into the biomechanical and biotribological properties of a real intestine for the optimal design of a spiral-type robotic capsule. Dynamic shear experiments were conducted to evaluate how the storage and loss moduli and damping factor of the small intestine change with the speed or the angular frequency. The sliding friction between differently shaped test pieces, with a topology similar to that of the spirals, and the intestine sample was experimentally determined. Our findings demonstrate that the intestine's biomechanical and biotribological properties are coupled, suggesting that the sliding friction is strongly related to the internal friction of the intestinal tissue. The significant implication of this finding is that one can predict the reaction force between the capsule with a spiral-type traction topology and the intestine directly from the intestine's biomechanical measurements rather than employing complicated three-dimensional finite element analysis or an inaccurate analytical model. Sliding friction experiments were also conducted with bar-shaped solid samples to determine the sliding friction between the samples and the small intestine. This sliding friction data will be useful in determining spiral material for an optimally designed robotic capsule.

  16. Predictive IP controller for robust position control of linear servo system.

    PubMed

    Lu, Shaowu; Zhou, Fengxing; Ma, Yajie; Tang, Xiaoqi

    2016-07-01

    Position control is a typical application of a linear servo system. In this paper, to reduce the system overshoot, an integral plus proportional (IP) controller is used in the position control implementation. To further improve the control performance, a gain-tuning IP controller based on a generalized predictive control (GPC) law is proposed. Firstly, to represent the dynamics of the position loop, a second-order linear model is used and its model parameters are estimated on-line by using a recursive least squares method. Secondly, based on the GPC law, an optimal control sequence is obtained by using a receding horizon, which then directly supplies the IP controller with the corresponding control parameters in real operation. Finally, simulation and experimental results are presented to show the efficiency of the proposed scheme. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Ensemble forecast of human West Nile virus cases and mosquito infection rates

    NASA Astrophysics Data System (ADS)

    Defelice, Nicholas B.; Little, Eliza; Campbell, Scott R.; Shaman, Jeffrey

    2017-02-01

    West Nile virus (WNV) is now endemic in the continental United States; however, our ability to predict spillover transmission risk and human WNV cases remains limited. Here we develop a model depicting WNV transmission dynamics, which we optimize using a data assimilation method and two observed data streams, mosquito infection rates and reported human WNV cases. The coupled model-inference framework is then used to generate retrospective ensemble forecasts of historical WNV outbreaks in Long Island, New York for 2001-2014. Accurate forecasts of mosquito infection rates are generated before peak infection, and >65% of forecasts accurately predict seasonal total human WNV cases up to 9 weeks before the past reported case. This work provides the foundation for implementation of a statistically rigorous system for real-time forecast of seasonal outbreaks of WNV.
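    A single update step of an ensemble adjustment filter, the general kind of data assimilation used in such model-inference frameworks, can be sketched as follows; the ensemble values, the observation, and the regression onto a hypothetical transmission parameter are all illustrative, not the paper's WNV model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Prior ensemble: modeled mosquito infection rates (per 1,000 tested) and a
# correlated, unobserved transmission parameter -- both hypothetical values
prior = rng.normal(4.0, 1.5, 100)
beta = 0.2 + 0.05 * (prior - 4.0) + rng.normal(0.0, 0.02, 100)
obs, obs_var = 6.0, 0.25             # observed infection rate and its error

# Kalman-style moment update: shift the ensemble mean, shrink its spread
prior_var = prior.var()
post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
post_mean = post_var * (prior.mean() / prior_var + obs / obs_var)
posterior = post_mean + np.sqrt(post_var / prior_var) * (prior - prior.mean())

# Regress the increments onto the unobserved parameter via prior covariance
inc = posterior - prior
beta = beta + np.cov(beta, prior, ddof=0)[0, 1] / prior_var * inc
```

Cycling such updates through a transmission model, then integrating each ensemble member forward, is what turns the assimilation step into an ensemble forecast.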

  18. Ensemble forecast of human West Nile virus cases and mosquito infection rates.

    PubMed

    DeFelice, Nicholas B; Little, Eliza; Campbell, Scott R; Shaman, Jeffrey

    2017-02-24

    West Nile virus (WNV) is now endemic in the continental United States; however, our ability to predict spillover transmission risk and human WNV cases remains limited. Here we develop a model depicting WNV transmission dynamics, which we optimize using a data assimilation method and two observed data streams, mosquito infection rates and reported human WNV cases. The coupled model-inference framework is then used to generate retrospective ensemble forecasts of historical WNV outbreaks in Long Island, New York for 2001-2014. Accurate forecasts of mosquito infection rates are generated before peak infection, and >65% of forecasts accurately predict seasonal total human WNV cases up to 9 weeks before the past reported case. This work provides the foundation for implementation of a statistically rigorous system for real-time forecast of seasonal outbreaks of WNV.

  19. ARPA-E: Advancing the Electric Grid

    ScienceCinema

    Lemmon, John; Ruiz, Pablo; Sommerer, Tim; Aziz, Michael

    2018-06-07

    The electric grid was designed with the assumption that all energy generation sources would be relatively controllable, and grid operators would always be able to predict when and where those sources would be located. With the addition of renewable energy sources like wind and solar, which can be installed faster than traditional generation technologies, this is no longer the case. Furthermore, the fact that renewable energy sources are imperfectly predictable means that the grid has to adapt in real-time to changing patterns of power flow. We need a dynamic grid that is far more flexible. This video highlights three ARPA-E-funded approaches to improving the grid's flexibility: topology control software from Boston University that optimizes power flow, gas tube switches from General Electric that provide efficient power conversion, and flow batteries from Harvard University that offer grid-scale energy storage.

  20. Can Subjects be Guided to Optimal Decisions The Use of a Real-Time Training Intervention Model

    DTIC Science & Technology

    2016-06-01

    execution of the task and may then be analyzed to determine if there is correlation between designated factors (scores, proportion of time in each...state with their decision performance in real time could allow training systems to be designed to tailor training to the individual decision maker...release; distribution is unlimited CAN SUBJECTS BE GUIDED TO OPTIMAL DECISIONS? THE USE OF A REAL-TIME TRAINING INTERVENTION MODEL by Travis D

  1. A real-time and closed-loop control algorithm for cascaded multilevel inverter based on artificial neural network.

    PubMed

    Wang, Libing; Mao, Chengxiong; Wang, Dan; Lu, Jiming; Zhang, Junfeng; Chen, Xun

    2014-01-01

    In order to control the cascaded H-bridge (CHB) converter with a staircase modulation strategy in real time, a real-time, closed-loop control algorithm based on an artificial neural network (ANN) for the three-phase CHB converter is proposed in this paper. It requires little computation time and memory and proceeds in two steps. In the first step, a hierarchical particle swarm optimizer with time-varying acceleration coefficients (HPSO-TVAC) is employed to minimize the total harmonic distortion (THD) and generate the optimal switching angles offline. In the second step, a subset of the optimal switching angles is used to train an ANN, and the trained ANN can then generate optimal switching angles in real time. Compared with previous real-time algorithms, the proposed algorithm is suitable for a wider range of modulation indices and yields a smaller THD at a lower calculation time. Furthermore, the trained ANN is embedded into a closed-loop control algorithm for the CHB converter with variable DC voltage sources. Simulation results demonstrate that the proposed closed-loop control algorithm quickly stabilizes the load voltage and keeps the line current's THD low (<5%) under DC-source or load disturbances. For the real design stage, a switching-angle pulse generation scheme is proposed, and experimental results verify its correctness.
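    The offline quantity being minimized is easy to state: for an s-cell staircase waveform with switching angles θ_i, the odd harmonics have magnitude V_n = (4/(nπ)) Σ_i cos(nθ_i), and THD follows directly. The angles below are illustrative, not the HPSO-TVAC optimum:

```python
import numpy as np

# Odd-harmonic magnitudes of an s-cell staircase (CHB) waveform:
# V_n = (4 / (n*pi)) * sum_i cos(n * theta_i),  n = 1, 3, 5, ...
def harmonics(angles, n_max=49):
    ns = np.arange(1, n_max + 1, 2)
    mags = np.array([(4.0 / (n * np.pi)) * np.sum(np.cos(n * angles))
                     for n in ns])
    return ns, mags

def thd(angles):
    ns, v = harmonics(angles)
    return np.sqrt(np.sum(v[1:] ** 2)) / abs(v[0])

angles = np.radians([12.0, 30.0, 55.0])   # 3 cells; illustrative angles only
```

An optimizer (HPSO-TVAC in the paper) searches the angle space for minimal `thd`, and the resulting angle tables are what the ANN is trained to reproduce across modulation indices.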

  2. Real-time monitoring of a coffee roasting process with near infrared spectroscopy using multivariate statistical analysis: A feasibility study.

    PubMed

    Catelani, Tiago A; Santos, João Rodrigo; Páscoa, Ricardo N M J; Pezza, Leonardo; Pezza, Helena R; Lopes, João A

    2018-03-01

    This work proposes the use of near infrared (NIR) spectroscopy in diffuse reflectance mode and multivariate statistical process control (MSPC) based on principal component analysis (PCA) for real-time monitoring of the coffee roasting process. The main objective was the development of an MSPC methodology able to detect disturbances to the roasting process early, using real-time acquisition of NIR spectra. A total of fifteen roasting batches were defined according to an experimental design to develop the MSPC models. This methodology was tested on a set of five batches where disturbances of different natures were imposed to simulate real faulty situations. Some of these batches were used to optimize the model while the remainder were used to test the methodology. A modelling strategy based on a sliding time window provided the best results in distinguishing batches with and without disturbances, using typical MSPC charts: Hotelling's T² and squared prediction error statistics. A PCA model encompassing a time window of four minutes with three principal components was able to efficiently detect all disturbances assayed. NIR spectroscopy combined with the MSPC approach proved to be an adequate auxiliary tool for coffee roasters to detect faults in a conventional roasting process in real time. Copyright © 2017 Elsevier B.V. All rights reserved.
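    The MSPC recipe, fitting PCA on normal operating data and then flagging new observations with Hotelling's T² (inside the model plane) and squared prediction error (off the model plane), can be sketched generically; the data, component count, and control limits below are illustrative, not the coffee-roasting models from the study:

```python
import numpy as np

rng = np.random.default_rng(3)

# In-control training data: 200 observations of 10 correlated variables
X = rng.normal(0.0, 1.0, (200, 10)) @ rng.normal(0.0, 1.0, (10, 10))
mu, sd = X.mean(axis=0), X.std(axis=0)

def standardize(x):
    return (x - mu) / sd

# PCA via SVD of the standardized training data; keep k components
U, S, Vt = np.linalg.svd(standardize(X), full_matrices=False)
k = 3
P = Vt[:k].T                            # loadings (10 x k)
lam = S[:k] ** 2 / (len(X) - 1)         # variances of the retained scores

def t2_spe(x):
    xs = standardize(x)
    t = xs @ P                          # scores in the model plane
    resid = xs - t @ P.T                # the part the model cannot explain
    return np.sum(t ** 2 / lam), np.sum(resid ** 2)

# Empirical 99% control limits from the training observations
t2_lim = np.percentile([t2_spe(x)[0] for x in X], 99)
spe_lim = np.percentile([t2_spe(x)[1] for x in X], 99)

# A strongly disturbed observation should violate the SPE chart
fault = X[0] + 8.0 * sd * rng.normal(0.0, 1.0, 10)
```

In the paper the "observations" are NIR spectra within a sliding four-minute window, but the two monitoring statistics work exactly as above.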

  3. Canine spontaneous glioma: A translational model system for convection-enhanced delivery

    PubMed Central

    Dickinson, Peter J.; LeCouteur, Richard A.; Higgins, Robert J.; Bringas, John R.; Larson, Richard F.; Yamashita, Yoji; Krauze, Michal T.; Forsayeth, John; Noble, Charles O.; Drummond, Daryl C.; Kirpotin, Dmitri B.; Park, John W.; Berger, Mitchel S.; Bankiewicz, Krystof S.

    2010-01-01

    Canine spontaneous intracranial tumors bear striking similarities to their human tumor counterparts and have the potential to provide a large animal model system for more realistic validation of novel therapies typically developed in small rodent models. We used spontaneously occurring canine gliomas to investigate the use of convection-enhanced delivery (CED) of liposomal nanoparticles, containing topoisomerase inhibitor CPT-11. To facilitate visualization of intratumoral infusions by real-time magnetic resonance imaging (MRI), we included identically formulated liposomes loaded with Gadoteridol. Real-time MRI defined distribution of infusate within both tumor and normal brain tissues. The most important limiting factor for volume of distribution within tumor tissue was the leakage of infusate into ventricular or subarachnoid spaces. Decreased tumor volume, tumor necrosis, and modulation of tumor phenotype correlated with volume of distribution of infusate (Vd), infusion location, and leakage as determined by real-time MRI and histopathology. This study demonstrates the potential for canine spontaneous gliomas as a model system for the validation and development of novel therapeutic strategies for human brain tumors. Data obtained from infusions monitored in real time in a large, spontaneous tumor may provide information, allowing more accurate prediction and optimization of infusion parameters. Variability in Vd between tumors strongly suggests that real-time imaging should be an essential component of CED therapeutic trials to allow minimization of inappropriate infusions and accurate assessment of clinical outcomes. PMID:20488958

  4. Lyme Borreliosis--the Utility of Improved Real-Time PCR Assay in the Detection of Borrelia burgdorferi Infections.

    PubMed

    Bil-Lula, Iwona; Matuszek, Patryk; Pfeiffer, Thomas; Woźniak, Mieczysław

    2015-01-01

    Infections of Borrelia burgdorferi sensu lato reveal clinical manifestations affecting numerous organs and tissues. The standard diagnosis of these infections is quite simple if a positive history of tick exposure or typical erythema migrans appears. Lack of unequivocal clinical symptoms creates the necessity for further evaluation with laboratory tests. This study discusses the utility of a novel, improved, well-optimized, sensitive and highly specific quantitative real-time PCR assay for the diagnostics of infections caused by Borrelia burgdorferi sensu lato. We designed an improved, specific, highly sensitive real-time quantitative polymerase chain reaction (RQ-PCR) assay for the detection and quantification of all Borrelia burgdorferi genotypes. An extensive validation effort was undertaken to ensure confidence in the highly sensitive and specific detection of B. burgdorferi. Owing to its high sensitivity and specificity, as few as 1.6×10² copies of Borrelia per mL of whole blood could be detected. As many as 12 (3%) negative ELISA IgM results, 14 (2.8%) negative Line blot IgM results, and 11 (3.1%) and 7 (2.7%) negative ELISA IgG and Line blot IgG results, respectively, were positive in real-time PCR. The data in this study confirm the high positive predictive value of the real-time PCR test in the detection of Borrelia infections.

  5. Initial validation of the International Crowding Measure in Emergency Departments (ICMED) to measure emergency department crowding.

    PubMed

    Boyle, Adrian; Coleman, James; Sultan, Yasmin; Dhakshinamoorthy, Vijayasankar; O'Keeffe, Jacqueline; Raut, Pramin; Beniuk, Kathleen

    2015-02-01

    Emergency department (ED) crowding is recognised as a major public health problem. While there is agreement that ED crowding harms patients, there is less agreement about the best way to measure ED crowding. We have previously derived an eight-point measure of ED crowding by a formal consensus process, the International Crowding Measure in Emergency Departments (ICMED). We aimed to test the feasibility of collecting this measure in real time and to partially validate this measure. We conducted a cross-sectional study in four EDs in England. We conducted independent observations of the measure and compared these with senior clinicians' perceptions of crowding and safety. We obtained 84 measurements spread evenly across the four EDs. The measure was feasible to collect in real time except for the 'Left Before Being Seen' variable. Increasing numbers of violations of the measure were associated with increasing clinician concerns. The area under the receiver operating characteristic curve was 0.80 (95% CI 0.72 to 0.90) for predicting crowding and 0.74 (95% CI 0.60 to 0.89) for predicting danger. The optimal number of violations for predicting crowding was three, with a sensitivity of 91.2 (95% CI 85.1 to 97.2) and a specificity of 100.0 (95% CI 92.9 to 100). The measure predicted clinician concerns better than individual variables such as occupancy. The ICMED can easily be collected in multiple EDs with different information technology systems. The ICMED seems to predict clinicians' concerns about crowding and safety well, but future work is required to validate this before it can be advocated for widespread use. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
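    The reported validation statistics are straightforward to reproduce on toy numbers; the violation counts below are invented, not the study's data:

```python
# Toy violation counts when senior clinicians reported crowding vs. not
crowded = [3, 4, 5, 6, 3, 5, 7, 4]      # invented, not the study's data
calm = [0, 1, 2, 1, 0, 2, 3, 1]

def roc_auc(pos, neg):
    """Rank-based AUC: P(pos > neg) + 0.5 * P(tie) over all pairs."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(cutoff):
    """Sensitivity/specificity of flagging 'crowded' at >= cutoff violations."""
    sens = sum(c >= cutoff for c in crowded) / len(crowded)
    spec = sum(c < cutoff for c in calm) / len(calm)
    return sens, spec

print(roc_auc(crowded, calm))   # 0.984375
print(sens_spec(3))             # (1.0, 0.875)
```

Sweeping `cutoff` over the 0-8 violation range and picking the best sensitivity/specificity trade-off mirrors how the study arrived at three violations as the optimal threshold.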

  6. A Method to Predict Compressor Stall in the TF34-100 Turbofan Engine Utilizing Real-Time Performance Data

    DTIC Science & Technology

    2015-06-01

A Method to Predict Compressor Stall in the TF34-100 Turbofan Engine Utilizing Real-Time Performance Data. Thesis presented to the Faculty, Department of Systems Engineering. Shuxiang 'Albert' Li, BS

  7. An Augmented Lagrangian Filter Method for Real-Time Embedded Optimization

    DOE PAGES

Chiang, Nai-Yuan; Huang, Rui; Zavala, Victor M.

    2017-04-17

We present a filter line-search algorithm for nonconvex continuous optimization that combines an augmented Lagrangian function and a constraint violation metric to accept and reject steps. The approach is motivated by real-time optimization applications that need to be executed on embedded computing platforms with limited memory and processor speeds. The proposed method enables primal–dual regularization of the linear algebra system that in turn permits the use of solution strategies with lower computing overheads. We prove that the proposed algorithm is globally convergent and we demonstrate the developments using a nonconvex real-time optimization application for a building heating, ventilation, and air conditioning system. Our numerical tests are performed on a standard processor and on an embedded platform. Lastly, we demonstrate that the approach reduces solution times by a factor of over 1000.
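The two ingredients named in the abstract can be sketched for an equality-constrained problem min f(x) s.t. c(x) = 0: an augmented Lagrangian merit function, and a filter that accepts a trial step only if its (constraint violation, merit) pair is not dominated by a previous entry. This is an illustrative simplification, not the paper's algorithm:

```python
import numpy as np

def augmented_lagrangian(f, c, x, lam, rho):
    """Merit function f(x) + lam.c(x) + (rho/2)||c(x)||^2 for c(x) = 0."""
    cx = np.atleast_1d(c(x))
    return f(x) + lam @ cx + 0.5 * rho * cx @ cx

def filter_accepts(trial, filt, gamma=1e-5):
    """Accept (theta, phi) = (violation, merit) if no filter entry dominates it."""
    theta, phi = trial
    return all(theta < (1 - gamma) * t or phi < p - gamma * t for t, p in filt)
```

In a full method, accepted pairs are added to the filter and the multipliers lam and penalty rho are updated between iterations.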

  9. Optimal Control of a Surge-Mode WEC in Random Waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chertok, Allan; Ceberio, Olivier; Staby, Bill

    2016-08-30

The objective of this project was to develop one or more real-time feedback and feed-forward (MPC) control algorithms for an Oscillating Surge Wave Converter (OSWC) developed by RME called SurgeWEC™ that leverages recent innovations in wave energy converter (WEC) control theory to maximize power production in random wave environments. The control algorithms synthesized innovations in dynamic programming and nonlinear wave dynamics using anticipatory wave sensors and localized sensor measurements, e.g. position and velocity of the WEC Power Take Off (PTO), with predictive wave forecasting data. The result was an advanced control system that uses feedback or feed-forward data from an array of sensor channels comprised of both localized and deployed sensors fused into a single decision process that optimally compensates for uncertainties in the system dynamics, wave forecasts, and sensor measurement errors.

10. Three-Dimensional Reconstruction from Single Image Based on Combination of CNN and Multi-Spectral Photometric Stereo.

    PubMed

    Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu

    2018-03-02

    Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods.
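The linear core of multi-spectral photometric stereo can be sketched as follows (illustrative only; the paper's contribution is the CNN initialization and iterative refinement around this step): for a Lambertian pixel with the three per-channel light directions stacked in a matrix L, the intensities satisfy I = albedo · L n, so the normal is recovered up to scale by a linear solve:

```python
import numpy as np

def normal_from_rgb(L, I):
    """Recover the unit surface normal from three channel intensities.

    L: 3x3 matrix whose rows are the effective light directions per channel.
    I: length-3 vector of observed channel intensities at one pixel.
    """
    g = np.linalg.solve(np.asarray(L, float), np.asarray(I, float))  # g = albedo * n
    return g / np.linalg.norm(g)                                     # drop the albedo scale
```

In practice L also absorbs the unknown camera response and albedo per channel, which is why an initial normal/depth estimate is needed to start the optimization.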

  11. Visual Perceptual Learning and Models.

    PubMed

    Dosher, Barbara; Lu, Zhong-Lin

    2017-09-15

    Visual perceptual learning through practice or training can significantly improve performance on visual tasks. Originally seen as a manifestation of plasticity in the primary visual cortex, perceptual learning is more readily understood as improvements in the function of brain networks that integrate processes, including sensory representations, decision, attention, and reward, and balance plasticity with system stability. This review considers the primary phenomena of perceptual learning, theories of perceptual learning, and perceptual learning's effect on signal and noise in visual processing and decision. Models, especially computational models, play a key role in behavioral and physiological investigations of the mechanisms of perceptual learning and for understanding, predicting, and optimizing human perceptual processes, learning, and performance. Performance improvements resulting from reweighting or readout of sensory inputs to decision provide a strong theoretical framework for interpreting perceptual learning and transfer that may prove useful in optimizing learning in real-world applications.

  13. Multiple tipping points and optimal repairing in interacting networks

    PubMed Central

    Majdandzic, Antonio; Braunstein, Lidia A.; Curme, Chester; Vodenska, Irena; Levy-Carciente, Sary; Eugene Stanley, H.; Havlin, Shlomo

    2016-01-01

    Systems composed of many interacting dynamical networks—such as the human body with its biological networks or the global economic network consisting of regional clusters—often exhibit complicated collective dynamics. Three fundamental processes that are typically present are failure, damage spread and recovery. Here we develop a model for such systems and find a very rich phase diagram that becomes increasingly more complex as the number of interacting networks increases. In the simplest example of two interacting networks we find two critical points, four triple points, ten allowed transitions and two ‘forbidden' transitions, as well as complex hysteresis loops. Remarkably, we find that triple points play the dominant role in constructing the optimal repairing strategy in damaged interacting systems. To test our model, we analyse an example of real interacting financial networks and find evidence of rapid dynamical transitions between well-defined states, in agreement with the predictions of our model. PMID:26926803

  14. Conjunctively optimizing flash flood control and water quality in urban water reservoirs by model predictive control and dynamic emulation

    NASA Astrophysics Data System (ADS)

    Galelli, Stefano; Goedbloed, Albert; Schmitter, Petra; Castelletti, Andrea

    2014-05-01

Urban water reservoirs are a viable adaptation option to accommodate the increasing drinking water demand of urbanized areas, as they allow storage and re-use of water that is normally lost. In addition, the direct availability of freshwater reduces pumping costs and diversifies the portfolio of drinking water supply. Yet these benefits have an associated twofold cost. Firstly, the presence of large, impervious areas increases the hydraulic efficiency of urban catchments, with short times of concentration, increased runoff rates, losses of infiltration and baseflow, and higher risk of flash floods. Secondly, the high concentration of nutrients and sediments characterizing urban discharges is likely to cause water quality problems. In this study we propose a new control scheme combining Model Predictive Control (MPC), hydro-meteorological forecasts and dynamic model emulation to design real-time operating policies that conjunctively optimize water quantity and quality targets. The main advantage of this scheme lies in its capability of exploiting real-time hydro-meteorological forecasts, which are crucial in such fast-varying systems. In addition, the reduced computational requirements of the MPC scheme allow coupling it with dynamic emulators of water quality processes. The approach is demonstrated on Marina Reservoir, a multi-purpose reservoir located in the heart of Singapore and characterized by a large, highly urbanized catchment with a short (approximately one hour) time of concentration. Results show that the MPC scheme, coupled with a water quality emulator, provides a good compromise between different operating objectives, namely flood risk reduction, drinking water supply and salinity control. Finally, the scheme is used to assess the effect of source control measures (e.g. green roofs) aimed at restoring the natural hydrological regime of the Marina Reservoir catchment.
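The receding-horizon loop behind MPC can be sketched with a toy storage model: at each step, optimize the release over a forecast horizon, apply only the first decision, then re-plan with updated forecasts. All names and dynamics here are illustrative, not the paper's emulator:

```python
def mpc_step(storage, inflow_forecast, target, release_max):
    """One receding-horizon step: pick the release minimizing squared
    deviation of simulated storage from a target level over the horizon."""
    best, best_cost = 0.0, float("inf")
    for release in [release_max * k / 20 for k in range(21)]:  # coarse grid search
        s, cost = storage, 0.0
        for q_in in inflow_forecast:      # simulate over the forecast horizon
            s = s + q_in - release
            cost += (s - target) ** 2     # penalize deviation from target
        if cost < best_cost:
            best, best_cost = release, cost
    return best                           # apply only the first decision, then re-plan
```

A real controller would replace the grid search with a proper solver, add water-quality terms from the emulator, and enforce flood and supply constraints.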

  15. Approaches to drug therapy for COPD in Russia: a proposed therapeutic algorithm.

    PubMed

    Zykov, Kirill A; Ovcharenko, Svetlana I

    2017-01-01

Until recently, there have been few clinical algorithms for the management of patients with COPD. Current evidence-based clinical management guidelines can appear to be complex, and they lack clear step-by-step instructions. For these reasons, we chose to create a simple and practical clinical algorithm for the management of patients with COPD, applicable to real-world clinical practice, based on clinical symptoms and spirometric parameters and taking into account the pathophysiological heterogeneity of COPD. This optimized algorithm has two main fields, one for nonspecialist treatment by primary care and general physicians and the other for treatment by specialized pulmonologists. Patients with COPD are treated with long-acting bronchodilators and short-acting drugs on a demand basis. If the forced expiratory volume in one second (FEV1) is ≥50% of predicted and symptoms are mild, treatment with a single long-acting muscarinic antagonist or long-acting beta-agonist is proposed. When FEV1 is <50% of predicted and/or the COPD assessment test score is ≥10, the use of combined bronchodilators is advised. If there is no response to treatment after three months, referral to a pulmonary specialist is recommended for pathophysiological endotyping: 1) eosinophilic endotype with peripheral blood or sputum eosinophilia >3%; 2) neutrophilic endotype with peripheral blood neutrophilia >60% or green sputum; or 3) pauci-granulocytic endotype. It is hoped that this simple, optimized, step-by-step algorithm will help to individualize the treatment of COPD in real-world clinical practice. The algorithm has yet to be evaluated prospectively or by comparison with other COPD management algorithms, including its effects on patient treatment outcomes, but it may nonetheless be useful in daily clinical practice for physicians treating patients with COPD in Russia.
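The decision rules stated in the abstract are explicit enough to sketch in code. Thresholds are taken directly from the text; the function names, the use of a CAT score below 10 as a proxy for "mild symptoms", and the return labels are illustrative:

```python
def copd_initial_therapy(fev1_pct_pred, cat_score):
    """Nonspecialist field: choose initial bronchodilator therapy."""
    if fev1_pct_pred >= 50 and cat_score < 10:
        return "single long-acting bronchodilator (LAMA or LABA)"
    return "combined long-acting bronchodilators"

def copd_endotype(eos_pct, neut_pct, green_sputum=False):
    """Specialist field: pathophysiological endotyping after 3 months without response."""
    if eos_pct > 3:                         # blood or sputum eosinophilia > 3%
        return "eosinophilic"
    if neut_pct > 60 or green_sputum:       # blood neutrophilia > 60% or green sputum
        return "neutrophilic"
    return "pauci-granulocytic"
```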

  16. Intelligent and robust optimization frameworks for smart grids

    NASA Astrophysics Data System (ADS)

    Dhansri, Naren Reddy

A smart grid implies a cyberspace real-time distributed power control system that optimally delivers electricity based on varying consumer characteristics. Although smart grids solve many contemporary problems, they give rise to new control and optimization problems as renewable energy sources such as wind or solar play a growing role. Given the highly dynamic nature of distributed power generation and the varying consumer demand and cost requirements, the total power output of the grid should be controlled such that the load demand is met while giving a higher priority to renewable energy sources. Hence, generation from renewable energy sources should be maximized while generation from non-renewable sources is minimized. This research develops a demand-based automatic generation control and optimization framework for real-time smart grid operations by integrating conventional and renewable energy sources under varying consumer demand and cost requirements. Focusing on the renewable energy sources, the intelligent and robust control frameworks optimize power generation by tracking the consumer demand in a closed-loop control framework, yielding superior economic and ecological benefits while circumventing nonlinear model complexities and handling uncertainties for superior real-time operation. The proposed intelligent system framework optimizes smart grid power generation for maximum economic and ecological benefit under an uncertain renewable wind energy source. The numerical results demonstrate that the proposed framework is a viable approach to integrating various energy sources for real-time smart grid implementations. The robust optimization results demonstrate the effectiveness of the robust controllers under bounded power plant model uncertainties and exogenous wind input excitation while maximizing economic and ecological performance objectives. Therefore, the proposed framework offers a new worst-case deterministic optimization algorithm for smart grid automatic generation control.

  17. A virtual test system representing the distribution of pedestrian impact configurations for future vehicle front-end optimization.

    PubMed

    Li, Guibing; Yang, Jikuang; Simms, Ciaran

    2016-07-03

    The purpose of this study is to define a computationally efficient virtual test system (VTS) to assess the aggressivity of vehicle front-end designs to pedestrians considering the distribution of pedestrian impact configurations for future vehicle front-end optimization. The VTS should represent real-world impact configurations in terms of the distribution of vehicle impact speeds, pedestrian walking speeds, pedestrian gait, and pedestrian height. The distribution of injuries as a function of body region, vehicle impact speed, and pedestrian size produced using this VTS should match the distribution of injuries observed in the accident data. The VTS should have the predictive ability to distinguish the aggressivity of different vehicle front-end designs to pedestrians. The proposed VTS includes 2 parts: a simulation test sample (STS) and an injury weighting system (IWS). The STS was defined based on MADYMO multibody vehicle to pedestrian impact simulations accounting for the range of vehicle impact speeds, pedestrian heights, pedestrian gait, and walking speed to represent real world impact configurations using the Pedestrian Crash Data Study (PCDS) and anthropometric data. In total 1,300 impact configurations were accounted for in the STS. Three vehicle shapes were then tested using the STS. The IWS was developed to weight the predicted injuries in the STS using the estimated proportion of each impact configuration in the PCDS accident data. A weighted injury number (WIN) was defined as the resulting output of the VTS. The WIN is the weighted number of average Abbreviated Injury Scale (AIS) 2+ injuries recorded per impact simulation in the STS. Then the predictive capability of the VTS was evaluated by comparing the distributions of AIS 2+ injuries to different pedestrian body regions and heights, as well as vehicle types and impact speeds, with that from the PCDS database. 
Further, a parametric analysis was performed with the VTS to assess the sensitivity of the injury predictions to changes in vehicle shape (type) and stiffness, to establish the potential for using the VTS for future vehicle front-end optimization. An STS of 1,300 multibody simulations and an IWS based on the distribution of impact speed, pedestrian height, gait stance, and walking speed is broadly capable of predicting the distribution of pedestrian injuries observed in the PCDS database when the same vehicle type distribution as the accident data is employed. The sensitivity study shows significant variations in the WIN when either vehicle type or stiffness is altered. Injury predictions derived from the VTS give a good representation of the distribution of injuries observed in the PCDS and can distinguish the aggressivity of different vehicle front-end designs to pedestrians. The VTS can therefore be considered an effective approach for assessing the pedestrian safety performance of vehicle front-end designs at a generalized level. However, the absolute injury number is substantially underpredicted by the VTS, and this needs further development.

  18. Flash flood prediction in large dams using neural networks

    NASA Astrophysics Data System (ADS)

    Múnera Estrada, J. C.; García Bartual, R.

    2009-04-01

    A flow forecasting methodology is presented as a support tool for flood management in large dams. The practical and efficient use of hydrological real-time measurements is necessary to operate early warning systems for flood disasters prevention, either in natural catchments or in those regulated with reservoirs. In this latter case, the optimal dam operation during flood scenarios should reduce the downstream risks, and at the same time achieve a compromise between different goals: structural security, minimize predictions uncertainty and water resources system management objectives. Downstream constraints depend basically on the geomorphology of the valley, the critical flow thresholds for flooding, the land use and vulnerability associated with human settlements and their economic activities. A dam operation during a flood event thus requires appropriate strategies depending on the flood magnitude and the initial freeboard at the reservoir. The most important difficulty arises from the inherently stochastic character of peak rainfall intensities, their strong spatial and temporal variability, and the highly nonlinear response of semiarid catchments resulting from initial soil moisture condition and the dominant flow mechanisms. The practical integration of a flow prediction model in a real-time system should include combined techniques of pre-processing, data verification and completion, assimilation of information and implementation of real time filters depending on the system characteristics. This work explores the behaviour of real-time flood forecast algorithms based on artificial neural networks (ANN) techniques, in the River Meca catchment (Huelva, Spain), regulated by El Sancho dam. The dam is equipped with three Taintor gates of 12x6 meters. The hydrological data network includes five high-resolution automatic pluviometers (dt=10 min) and three high precision water level sensors in the reservoir. 
A cross-correlation analysis between precipitation data and inflows was previously performed for several historical events. Optimal time lags were found to be in the range of 2 to 6 hours, depending on the event. On the other hand, the flow autocorrelation analysis shows an average correlation of 0.50 for a lag of 5 hours and 0.40 for a lag of 6 hours, suggesting a reasonable prediction horizon. The proposed forecasting methodology includes the on-line reconstruction of the historical time series of average rainfall in the catchment by the Thiessen polygon method, and the estimation of inflow through the mass balance in the reservoir, while outflows derive from the hydraulics of the gates. The future values of inflows are predicted with an ANN model. This technique was chosen because of the generally good ability shown by ANNs in a number of publications, and because of its very high computational efficiency. Several ANN model architectures have been evaluated and compared. In all cases, the input variables are average hourly flows and rainfalls in the catchment with different time delays, according to the forecasting horizon. The immediate future precipitation from an external weather model is also processed. The prediction horizon has been set to 3 hours, although results show that it could be extended a few extra hours if the external precipitation forecasts were reliable enough. All the ANN models analyzed have a very simple architecture based on the conventional three-layer feed-forward perceptron, with a variable number of hidden nodes and a single node in the output layer producing the next-hour flow value. For the following time steps, a serial-propagated neural-network structure is used, following the strategy suggested by Chang et al. (2007). The ANN models have been compared using the root mean square error (RMSE) and the Nash-Sutcliffe efficiency (NSE) statistical indices. The best model among all was chosen and implemented.
The quality of predictions has been found to be strongly affected by the reliability of rainfall predictions, particularly when rainfall is overestimated, and much less so when it is underestimated. To reduce this sensitivity, a new model was proposed that eliminates predicted rainfall from the input set entirely. Although the results are slightly poorer, the NSE index reveals a satisfactory performance on the validation set (0.80). The robustness and simplicity of ANN schemes makes them particularly appropriate for real-time systems, as they can easily be integrated and programmed, handling well the presence of possible errors and uncertainties in the data. Moreover, they are computationally very efficient and, above all, they are easily updated without changing the general conception and operation of the real-time decision-making support tool.
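The two indices used to rank the ANN variants are standard; a minimal sketch of both:

```python
import numpy as np

def rmse(obs, sim):
    """Root mean square error between observed and simulated flows."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 is no better than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))
```

NSE compares the model against the trivial predictor "always forecast the mean observed flow", which is why a validation NSE of 0.80 is considered satisfactory.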

  19. RaptorX-Angle: real-value prediction of protein backbone dihedral angles through a hybrid method of clustering and deep learning.

    PubMed

    Gao, Yujuan; Wang, Sheng; Deng, Minghua; Xu, Jinbo

    2018-05-08

Protein dihedral angles provide a detailed description of protein local conformation. Predicted dihedral angles can be used to narrow down the conformational space of the whole polypeptide chain significantly, thus aiding protein tertiary structure prediction. However, direct angle prediction from sequence alone is challenging. In this article, we present a novel method (named RaptorX-Angle) to predict real-valued angles by combining clustering and deep learning. Tested on a subset of PDB25 and the targets in the latest two Critical Assessments of protein Structure Prediction (CASP), our method outperforms the existing state-of-the-art method SPIDER2 in terms of Pearson Correlation Coefficient (PCC) and Mean Absolute Error (MAE). Our results also show an approximately linear relationship between the real prediction errors and our estimated bounds; that is, the real prediction error can be well approximated by our estimated bounds. Our study provides an alternative and more accurate prediction of dihedral angles, which may facilitate protein structure prediction and functional study.
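One practical detail when computing MAE for real-valued angle prediction is the 360° periodicity of dihedral angles: an error of 358° is really an error of 2°. A minimal sketch (illustrative, not the paper's evaluation code):

```python
def angular_mae(true_deg, pred_deg):
    """Mean absolute error between angles in degrees, respecting wrap-around."""
    errs = []
    for t, p in zip(true_deg, pred_deg):
        d = abs(t - p) % 360.0
        errs.append(min(d, 360.0 - d))  # shortest way around the circle
    return sum(errs) / len(errs)
```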

  20. The string prediction models as invariants of time series in the forex market

    NASA Astrophysics Data System (ADS)

    Pincak, R.

    2013-12-01

In this paper we apply a new approach based on string theory to a real financial market. The models are constructed around the idea of prediction models based on string invariants (PMBSI). The performance of PMBSI is compared to support vector machines (SVM) and artificial neural networks (ANN) on an artificial and a financial time series, and a brief overview of the results and analysis is given. The first model is based on the correlation function as the invariant, and the second is based on deviations from the closed string/pattern form (PMBCS). We found a clear difference between the two approaches: the first model cannot predict the behavior of the forex market with good efficiency, whereas the second can and is, in addition, able to make a relevant profit per year. The presented string models could be useful for portfolio creation and financial risk management in the banking sector, as well as for a nonlinear statistical approach to data optimization.

  1. Reference governors for controlled belt restraint systems

    NASA Astrophysics Data System (ADS)

    van der Laan, E. P.; Heemels, W. P. M. H.; Luijten, H.; Veldpaus, F. E.; Steinbuch, M.

    2010-07-01

    Today's restraint systems typically include a number of airbags, and a three-point seat belt with load limiter and pretensioner. For the class of real-time controlled restraint systems, the restraint actuator settings are continuously manipulated during the crash. This paper presents a novel control strategy for these systems. The control strategy developed here is based on a combination of model predictive control and reference management, in which a non-linear device - a reference governor (RG) - is added to a primal closed-loop controlled system. This RG determines an optimal setpoint in terms of injury reduction and constraint satisfaction by solving a constrained optimisation problem. Prediction of the vehicle motion, required to predict future constraint violation, is included in the design and is based on past crash data, using linear regression techniques. Simulation results with MADYMO models show that, with ideal sensors and actuators, a significant reduction (45%) of the peak chest acceleration can be achieved, without prior knowledge of the crash. Furthermore, it is shown that the algorithms are sufficiently fast to be implemented online.

  2. Adaptive on-line prediction of the available power of lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Waag, Wladislaw; Fleischer, Christian; Sauer, Dirk Uwe

    2013-11-01

In this paper a new approach for predicting the available power of a lithium-ion battery pack is presented. It is based on a nonlinear battery model that includes the current dependency of the battery resistance. This results in an accurate power prediction not only at room temperature, but also at lower temperatures at which the current dependency is substantial. The model parameters are fully adaptable on-line to the given state of the battery (state of charge, state of health, temperature). This on-line adaptation, in combination with an explicit consideration of differences between the characteristics of individual cells in a battery pack, ensures an accurate power prediction under all possible conditions. The proposed trade-off between the number of cell parameters used and the total accuracy, together with the optimized algorithm, results in real-time capability of the method, which is demonstrated on a low-cost 16-bit microcontroller. Verification tests performed on a software-in-the-loop test bench with four 40 Ah lithium-ion cells show promising results.
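The core idea can be sketched with a simplified OCV-plus-resistance cell model (an illustration, not the authors' algorithm): the maximum discharge power is reached when the terminal voltage hits its lower limit, and because the resistance depends on the current, the limiting current must be solved self-consistently:

```python
def available_power(ocv, u_min, resistance, n_iter=20):
    """Available discharge power of one cell at minimum allowed voltage u_min.

    resistance: callable R(I) giving the current-dependent resistance in ohms.
    Solves I = (OCV - u_min) / R(I) by fixed-point iteration.
    """
    i = (ocv - u_min) / resistance(0.0)      # initial guess with small-current R
    for _ in range(n_iter):
        i = (ocv - u_min) / resistance(i)    # refine with current-dependent R
    return u_min * i                         # power delivered at the voltage limit
```

With a constant resistance this reduces to the familiar P = U_min(OCV − U_min)/R; a resistance that grows with current (as at low temperature) lowers the available power, which is the effect the paper's model captures.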

  3. Quantitative structure-property relationship modeling of remote liposome loading of drugs.

    PubMed

    Cern, Ahuva; Golbraikh, Alexander; Sedykh, Aleck; Tropsha, Alexander; Barenholz, Yechezkel; Goldblum, Amiram

    2012-06-10

Remote loading of liposomes by trans-membrane gradients is used to achieve therapeutically efficacious intra-liposome concentrations of drugs. We have developed Quantitative Structure-Property Relationship (QSPR) models of remote liposome loading for a data set including 60 drugs studied in 366 loading experiments, internally or elsewhere. Both experimental conditions and computed chemical descriptors were employed as independent variables to predict the initial drug/lipid ratio (D/L) required to achieve high loading efficiency. Both binary (to distinguish high vs. low initial D/L) and continuous (to predict real D/L values) models were generated using advanced machine learning approaches and 5-fold external validation. The external prediction accuracy for binary models was as high as 91-96%; for continuous models, the mean coefficient R² for regression between predicted and observed values was 0.76-0.79. We conclude that QSPR models can be used to identify candidate drugs expected to have high remote loading capacity while simultaneously optimizing the design of formulation experiments.

  4. Optimal colour quality of LED clusters based on memory colours.

    PubMed

    Smet, Kevin; Ryckaert, Wouter R; Pointer, Michael R; Deconinck, Geert; Hanselaer, Peter

    2011-03-28

The spectral power distributions of tri- and tetrachromatic clusters of light-emitting diodes, composed of simulated and commercially available LEDs, were optimized with a genetic algorithm to maximize the luminous efficacy of radiation and the colour quality as assessed by the memory colour quality metric developed by the authors. The trade-off between the colour quality as assessed by the memory colour metric and the luminous efficacy of radiation was investigated by calculating the Pareto optimal front using the NSGA-II genetic algorithm. Optimal peak wavelengths and spectral widths of the LEDs were derived, and over half of them were found to be close to Thornton's prime colours. The Pareto optimal fronts of real LED clusters were always found to be smaller than those of the simulated clusters. The effect of binning on designing a real LED cluster was investigated and was found to be quite large. Finally, a real LED cluster of commercially available AlGaInP, InGaN and phosphor white LEDs was optimized to obtain a higher score on the memory colour quality scale than its corresponding CIE reference illuminant.
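The Pareto front at the heart of the NSGA-II comparison can be sketched for two objectives to be maximized, here standing in for luminous efficacy and the memory colour quality score (the candidate points are hypothetical, not the paper's data):

```python
def pareto_front(points):
    """Return the non-dominated subset of (objective1, objective2) pairs,
    both objectives to be maximized."""
    front = []
    for p in points:
        # p is dominated if some other point is at least as good in both objectives
        if not any(q != p and q[0] >= p[0] and q[1] >= p[1] for q in points):
            front.append(p)
    return front
```

NSGA-II maintains and refines such a front over generations; the quadratic scan above is only meant to show what "Pareto optimal" means for the trade-off discussed in the abstract.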

  5. Protein folding optimization based on 3D off-lattice model via an improved artificial bee colony algorithm.

    PubMed

    Li, Bai; Lin, Mu; Liu, Qiao; Li, Ya; Zhou, Changjun

    2015-10-01

Protein folding is a fundamental topic in molecular biology. Conventional experimental techniques for protein structure identification or protein folding recognition require strict laboratory conditions and heavy operating burdens, which have largely limited their applications. Alternatively, computer-aided techniques have been developed to optimize protein structures or to predict the protein folding process. In this paper, we utilize a 3D off-lattice model to describe the original protein folding scheme as a simplified energy-optimal numerical problem, where all types of amino acid residues are binarized into hydrophobic and hydrophilic ones. We apply a balance-evolution artificial bee colony (BE-ABC) algorithm as the minimization solver, which features adaptive adjustment of search intensity to cater for the varying needs during the entire optimization process. In this work, we establish a benchmark case set with 13 real protein sequences from the Protein Data Bank and evaluate the convergence performance of the BE-ABC algorithm through strict comparisons with several state-of-the-art ABC variants in short-term numerical experiments. In addition, our best-so-far protein structures are compared to those reported in the previous literature. This study also provides preliminary insights into how artificial intelligence techniques can be applied to reveal the dynamics of protein folding. Graphical Abstract: Protein folding optimization using a 3D off-lattice model and advanced optimization techniques.

  6. Combining the ASA Physical Classification System and Continuous Intraoperative Surgical Apgar Score Measurement in Predicting Postoperative Risk.

    PubMed

    Jering, Monika Zdenka; Marolen, Khensani N; Shotwell, Matthew S; Denton, Jason N; Sandberg, Warren S; Ehrenfeld, Jesse Menachem

    2015-11-01

    The surgical Apgar score predicts major 30-day postoperative complications using data assessed at the end of surgery. We hypothesized that evaluating the surgical Apgar score continuously during surgery may identify patients at high risk for postoperative complications. We retrospectively identified general, vascular, and general oncology patients at Vanderbilt University Medical Center. Logistic regression methods were used to construct a series of predictive models in order to continuously estimate the risk of major postoperative complications and to alert care providers during surgery should the risk exceed a given threshold. The area under the receiver operating characteristic curve (AUROC) was used to evaluate the discriminative ability of a model utilizing a continuously measured surgical Apgar score relative to models that use only preoperative clinical factors or continuously monitored individual constituents of the surgical Apgar score (i.e., heart rate, blood pressure, and blood loss). AUROC estimates were validated internally using a bootstrap method. 4,728 patients were included. Combining the ASA Physical Status (PS) classification with the continuously measured surgical Apgar score demonstrated improved discriminative ability (AUROC 0.80) in the pooled cohort compared with the ASA PS classification (0.73) and the surgical Apgar score alone (0.74). To optimize the tradeoff between inadequate and excessive alerting with future real-time notifications, we recommend a threshold probability of 0.24. Continuous assessment of the surgical Apgar score is predictive of major postoperative complications. In the future, real-time notifications might allow for detection and mitigation of changes in a patient's accumulating risk of complications during a surgical procedure.
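    The AUROC statistic used to compare these models reduces to the Mann-Whitney probability that a randomly chosen positive case outscores a randomly chosen negative one. A minimal sketch with illustrative labels and risk scores (not the study's data):

```python
# Mann-Whitney form of the AUROC; labels and scores are illustrative.

def auroc(labels, scores):
    """Probability that a random positive outscores a random negative
    (ties count one half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [0, 0, 1, 0, 1, 1]              # 1 = major complication
scores = [0.1, 0.4, 0.35, 0.2, 0.8, 0.7] # model-estimated risks
print(auroc(labels, scores))             # → 0.888...
```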

  7. Feasibility of predicting tumor motion using online data acquired during treatment and a generalized neural network optimized with offline patient tumor trajectories.

    PubMed

    Teo, Troy P; Ahmed, Syed Bilal; Kawalec, Philip; Alayoubi, Nadia; Bruce, Neil; Lyn, Ethan; Pistorius, Stephen

    2018-02-01

    The accurate prediction of intrafraction lung tumor motion is required to compensate for system latency in image-guided adaptive radiotherapy systems. The goal of this study was to identify an optimal prediction model that has a short learning period, so that prediction and adaptation can commence soon after treatment begins, and that requires minimal reoptimization for individual patients. Specifically, the feasibility of predicting tumor position using a combination of a generalized (i.e., averaged) neural network, optimized using historical patient data (i.e., tumor trajectories) obtained offline, coupled with the use of real-time online tumor positions (obtained during treatment delivery) was examined. A 3-layer perceptron neural network was implemented to predict tumor motion for a prediction horizon of 650 ms. A backpropagation algorithm and batch gradient descent approach were used to train the model. Twenty-seven 1-min lung tumor motion samples (selected from a CyberKnife patient dataset) were sampled at a rate of 7.5 Hz (0.133 s) to emulate the frame rate of an electronic portal imaging device (EPID). A sliding temporal window was used to sample the data for learning. The sliding window length was set to be equivalent to the first breathing cycle detected from each trajectory. A parametric sweep yielded an averaged error surface of mean square errors (MSE) from the prediction responses of the seven trajectories used to train the model (Group 1). An optimal input data size and number of hidden neurons were selected to represent the generalized model. To evaluate the prediction performance of the generalized model on unseen data, twenty tumor traces (Group 2) that were not involved in the training of the model were used for leave-one-out cross-validation. An input data size of 35 samples (4.6 s) and 20 hidden neurons were selected for the generalized neural network. 
An average sliding window length of 28 data samples was used. The average initial learning period prior to the availability of the first predicted tumor position was 8.53 ± 1.03 s. Average mean absolute errors (MAE) of 0.59 ± 0.13 mm and 0.56 ± 0.18 mm were obtained for Groups 1 and 2, respectively, giving an overall MAE of 0.57 ± 0.17 mm. The average root-mean-square error (RMSE) of 0.67 ± 0.36 mm for all the traces (0.76 ± 0.34 mm for Group 1 and 0.63 ± 0.36 mm for Group 2) is comparable to previously published results. Prediction errors are mainly due to the irregular periodicities between breathing cycles. Because the errors from Groups 1 and 2 are within the same range, the model can generalize and predict on unseen data. This is a first attempt to use an averaged MSE error surface (obtained from the prediction of different patients' tumor trajectories) to determine the parameters of a generalized neural network. This network could be deployed as a plug-and-play predictor for tumor trajectory during treatment delivery, eliminating the need to optimize individual networks with pretreatment patient data. © 2017 American Association of Physicists in Medicine.
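    The sliding-window idea can be sketched with a linear autoregressive predictor trained by gradient descent (LMS) standing in for the paper's 3-layer perceptron; the trace is a synthetic breathing-like sinusoid, and a horizon of 5 samples at 7.5 Hz approximates the 650 ms latency compensated for in the study.

```python
import math

# Sliding-window tumor-position prediction sketch (LMS, not the paper's MLP).
HORIZON, WINDOW = 5, 8
trace = [math.sin(2 * math.pi * t / 30.0) for t in range(150)]

w = [0.0] * WINDOW
for _ in range(200):                      # training epochs
    for t in range(WINDOW, len(trace) - HORIZON):
        x = trace[t - WINDOW:t]           # sliding window of past positions
        err = trace[t + HORIZON] - sum(wi * xi for wi, xi in zip(w, x))
        w = [wi + 0.02 * err * xi for wi, xi in zip(w, x)]  # LMS update

errs = [abs(trace[t + HORIZON] - sum(wi * xi for wi, xi in zip(w, trace[t - WINDOW:t])))
        for t in range(WINDOW, len(trace) - HORIZON)]
print(sum(errs) / len(errs))              # mean absolute prediction error
```

    Real breathing traces have irregular periodicities between cycles, which is exactly where such a predictor accumulates most of its error.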

  8. An Optimization Framework for Dynamic, Distributed Real-Time Systems

    NASA Technical Reports Server (NTRS)

    Eckert, Klaus; Juedes, David; Welch, Lonnie; Chelberg, David; Bruggerman, Carl; Drews, Frank; Fleeman, David; Parrott, David; Pfarr, Barbara

    2003-01-01

    This paper presents a model that is useful for developing resource allocation algorithms for distributed real-time systems that operate in dynamic environments. Interesting aspects of the model include dynamic environments and utility and service levels, which provide a means for graceful degradation in resource-constrained situations and support optimization of the allocation of resources. The paper also provides an allocation algorithm that illustrates how to use the model for producing feasible, optimal resource allocations.

  9. Aerodynamic Shape Optimization Using A Real-Number-Encoded Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2001-01-01

    A new method for aerodynamic shape optimization using a genetic algorithm with real-number encoding is presented. The algorithm is used to optimize three different problems: a simple hill-climbing problem, a quasi-one-dimensional nozzle problem using an Euler equation solver, and a three-dimensional transonic wing problem using a nonlinear potential solver. Results indicate that the genetic algorithm is easy to implement and extremely reliable, being relatively insensitive to design space noise.
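    A minimal real-number-encoded GA can be sketched on a one-dimensional analogue of the hill-climbing test problem; truncation selection, blend crossover, and Gaussian mutation below are illustrative operator choices, not the paper's exact ones.

```python
import random

# Real-coded GA sketch on a 1-D "hill climbing" analogue. Seeded.

def fitness(x):
    return -(x - 2.0) ** 2             # single smooth hill, peak at x = 2

def real_ga(pop_size=20, gens=60, seed=3):
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = rng.uniform(min(a, b), max(a, b))     # blend crossover
            children.append(child + rng.gauss(0.0, 0.1))  # real-valued mutation
        pop = parents + children
    return max(pop, key=fitness)

print(round(real_ga(), 1))             # the hill's peak is at x = 2.0
```

    Real-number encoding avoids the discretization and Hamming-cliff issues of binary chromosomes, which is part of why it suits continuous shape parameters.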

  10. Surgery scheduling optimization considering real life constraints and comprehensive operation cost of operating room.

    PubMed

    Xiang, Wei; Li, Chong

    2015-01-01

    The Operating Room (OR) is the core sector of hospital expenditure; its operation management involves a complete three-stage surgery flow, multiple resources, prioritization of the various surgeries, and several real-life OR constraints. As such, reasonable surgery scheduling is crucial to OR management. To optimize OR management and reduce operation cost, a short-term surgery scheduling problem is proposed and defined based on a survey of OR operation in a typical hospital in China. The comprehensive operation cost is clearly defined, considering both underutilization and overutilization. A nested Ant Colony Optimization (nested-ACO) algorithm incorporating several real-life OR constraints is proposed to solve this combinatorial optimization problem. Ten days of manual surgery schedules from a hospital in China are compared with the optimized schedules produced by the nested-ACO. The comparison shows the advantage of the nested-ACO on several measures: OR-related time, nurse-related time, variation in resources' working time, and the end time. The nested-ACO, which considers real-life operation constraints such as the difference between first and following cases, surgery priorities, and fixed nurses in the pre/post-operative stages, is proposed to solve the surgery scheduling optimization problem. The results clearly show the benefit of using the nested-ACO in enhancing OR management efficiency and minimizing the comprehensive operation cost.

  11. Effective learning strategies for real-time image-guided adaptive control of multiple-source hyperthermia applicators.

    PubMed

    Cheng, Kung-Shan; Dewhirst, Mark W; Stauffer, Paul R; Das, Shiva

    2010-03-01

    This paper investigates overall theoretical requirements for reducing the times required for the iterative learning of a real-time image-guided adaptive control routine for multiple-source heat applicators, as used in hyperthermia and thermal ablative therapy for cancer. Methods for partial reconstruction of the physical system with and without model reduction to find solutions within a clinically practical timeframe were analyzed. A mathematical analysis based on the Fredholm alternative theorem (FAT) was used to compactly analyze the existence and uniqueness of the optimal heating vector under two fundamental situations: (1) noiseless partial reconstruction and (2) noisy partial reconstruction. These results were coupled with a method for further acceleration of the solution using virtual source (VS) model reduction. The matrix approximation theorem (MAT) was used to choose the optimal vectors spanning the reduced-order subspace to reduce the time for system reconstruction and to determine the associated approximation error. Numerical simulations of the adaptive control of hyperthermia using VS were also performed to test the predictions derived from the theoretical analysis. A thigh sarcoma patient model surrounded by a ten-antenna phased-array applicator was retained for this purpose. The impacts of the convective cooling from blood flow and the presence of sudden increase of perfusion in muscle and tumor were also simulated. By FAT, partial system reconstruction directly conducted in the full space of the physical variables such as phases and magnitudes of the heat sources cannot guarantee reconstructing the optimal system to determine the global optimal setting of the heat sources. A remedy for this limitation is to conduct the partial reconstruction within a reduced-order subspace spanned by the first few maximum eigenvectors of the true system matrix. By MAT, this VS subspace is the optimal one when the goal is to maximize the average tumor temperature. 
When more than six sources are present, a nonlinear learning scheme theoretically requires fewer steps than a linear one; however, a finite number of iterative corrections is necessary within each learning step of a nonlinear algorithm. Thus, the actual computational workload of a nonlinear algorithm is not necessarily less than that required by a linear algorithm. Based on the analysis presented herein, obtaining a unique global optimal heating vector for a multiple-source applicator within the constraints of real-time clinical hyperthermia treatments and thermal ablative therapies appears attainable using partial reconstruction with the minimum-norm least-squares method and supplemental equations. One way to supplement equations is to include a method of model reduction.

  12. Real-time model learning using Incremental Sparse Spectrum Gaussian Process Regression.

    PubMed

    Gijsberts, Arjan; Metta, Giorgio

    2013-05-01

    Novel applications in unstructured and non-stationary human environments require robots that learn from experience and adapt autonomously to changing conditions. Predictive models therefore not only need to be accurate, but should also be updated incrementally in real-time and require minimal human intervention. Incremental Sparse Spectrum Gaussian Process Regression is an algorithm that is targeted specifically for use in this context. Rather than developing a novel algorithm from the ground up, the method is based on the thoroughly studied Gaussian Process Regression algorithm, therefore ensuring a solid theoretical foundation. Non-linearity and a bounded update complexity are achieved simultaneously by means of a finite dimensional random feature mapping that approximates a kernel function. As a result, the computational cost for each update remains constant over time. Finally, algorithmic simplicity and support for automated hyperparameter optimization ensures convenience when employed in practice. Empirical validation on a number of synthetic and real-life learning problems confirms that the performance of Incremental Sparse Spectrum Gaussian Process Regression is superior with respect to the popular Locally Weighted Projection Regression, while computational requirements are found to be significantly lower. The method is therefore particularly suited for learning with real-time constraints or when computational resources are limited. Copyright © 2012 Elsevier Ltd. All rights reserved.
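    The random-feature idea at the heart of the method can be sketched as follows: a finite random Fourier feature map approximates a stationary (RBF) kernel, and a linear model on those features is updated at constant cost per sample. Plain SGD stands in here for the algorithm's exact incremental updates, and the one-dimensional sine target and all parameters are illustrative.

```python
import math
import random

# Sparse-spectrum sketch: D random Fourier features + a linear model,
# so each update costs O(D) regardless of how much data has been seen.
rng = random.Random(0)
D = 50
omega = [rng.gauss(0.0, 1.0) for _ in range(D)]       # spectral frequencies
phase = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(D)]

def features(x):
    return [math.sqrt(2.0 / D) * math.cos(om * x + ph)
            for om, ph in zip(omega, phase)]

data = [(i / 10.0, math.sin(i / 10.0)) for i in range(60)]
feats = [features(x) for x, _ in data]

w = [0.0] * D
for _ in range(200):                                   # streaming passes
    for z, (_, y) in zip(feats, data):
        err = y - sum(wi * zi for wi, zi in zip(w, z))
        w = [wi + 0.1 * err * zi for wi, zi in zip(w, z)]  # constant-cost update

mae = sum(abs(y - sum(wi * zi for wi, zi in zip(w, z)))
          for z, (_, y) in zip(feats, data)) / len(data)
print(mae)      # small training error without ever forming a kernel matrix
```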

  13. Predictable and Adaptable Complex Real-Time Systems

    DTIC Science & Technology

    1993-09-30

    Predictable and Adaptable Complex Real-Time Systems. Grant or Contract Number: N00014-92-J-1048. Reporting Period: 1 Oct 91 - 30 Sep 93.

  14. One-degree-of-freedom spherical model for the passive motion of the human ankle joint.

    PubMed

    Sancisi, Nicola; Baldisserri, Benedetta; Parenti-Castelli, Vincenzo; Belvedere, Claudio; Leardini, Alberto

    2014-04-01

    Mathematical modelling of mobility at the human ankle joint is essential for prosthetic and orthotic design. The aim of this study is to show that ankle joint passive motion can be represented by a one-degree-of-freedom spherical motion. This motion is modelled by a one-degree-of-freedom spherical parallel mechanism, and the optimal pivot-point position is determined. Passive motion and anatomical data were taken from in vitro experiments on nine lower-limb specimens. For each specimen, a spherical mechanism, comprising the tibiofibular and talocalcaneal segments connected by a spherical pair and by the calcaneofibular and tibiocalcaneal ligament links, was defined from the corresponding experimental kinematics and geometry. An iterative procedure was used to optimize the geometry of the model so that it could replicate the original experimental motion. The simulations reproduced the original natural motion well, despite the numerous model assumptions and simplifications, with mean differences between experiments and predictions smaller than 1.3 mm (average 0.33 mm) for the three joint position components and smaller than 0.7° (average 0.32°) for the two out-of-sagittal-plane rotations, plotted over the full flexion arc. The pivot-point position after model optimization was found within the tibial mortise, though not exactly in a central location. This combined experimental and modelling analysis of passive motion at the human ankle joint shows that a one-degree-of-freedom spherical mechanism predicts well what is observed in real joints, although its computational complexity is comparable to that of the standard hinge joint model.

  15. Neural network river forecasting through baseflow separation and binary-coded swarm optimization

    NASA Astrophysics Data System (ADS)

    Taormina, Riccardo; Chau, Kwok-Wing; Sivakumar, Bellie

    2015-10-01

    The inclusion of expert knowledge in data-driven streamflow modeling is expected to yield more accurate estimates of river quantities. Modular models (MMs) designed to work on different parts of the hydrograph are a preferred way to implement such an approach. Previous studies have suggested that better predictions of total streamflow could be obtained via modular Artificial Neural Networks (ANNs) trained to perform an implicit baseflow separation. These MMs separately fit the baseflow and excess-flow components produced by a digital filter, and reconstruct the total flow by adding the two signals at the output. The optimization of the filter parameters and ANN architectures is carried out through global search techniques. Despite the favorable premises, the real effectiveness of such MMs has been tested on only a few case studies, and the quality of the baseflow separation they perform has never been thoroughly assessed. In this work, we compare the performance of MMs against global models (GMs) for nine different gaging stations in the northern United States. Binary-coded swarm optimization is employed for the identification of filter parameters and model structure, while Extreme Learning Machines, instead of ANNs, are used to drastically reduce the large computational times required to perform the experiments. The results show no evidence that MMs outperform GMs for predicting the total flow. In addition, the baseflow produced by the MMs largely underestimates the actual baseflow component expected for most of the considered gages. This occurs because the filter parameter values that maximize overall accuracy do not reflect the geological characteristics of the river basins. Indeed, setting the filter parameters according to expert knowledge results in accurate baseflow separation but lower accuracy of total flow predictions, suggesting that these two objectives are intrinsically conflicting rather than compatible.
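    The digital filter underlying such baseflow separation is typically the one-parameter recursive filter of Lyne and Hollick; in the study its parameter is identified by binary-coded swarm optimization, whereas the value below is merely a conventional illustrative choice, applied to a synthetic hydrograph.

```python
# One-parameter recursive digital filter (Lyne-Hollick form) for
# baseflow separation; alpha and the hydrograph are illustrative.

def baseflow_separation(q, alpha=0.925):
    quick = [0.0]
    for t in range(1, len(q)):
        f = alpha * quick[-1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
        quick.append(min(max(f, 0.0), q[t]))      # keep quickflow in [0, q]
    return [qt - ft for qt, ft in zip(q, quick)]  # baseflow = total - quick

flows = [5, 5, 20, 60, 35, 18, 10, 7, 6, 5]      # synthetic storm hydrograph
base = baseflow_separation(flows)
print([round(b, 1) for b in base])
```

    Raising alpha damps the quickflow response and thus lowers the separated baseflow, which is why a value tuned purely for prediction accuracy need not match basin geology.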

  16. Event Oriented Design and Adaptive Multiprocessing

    DTIC Science & Technology

    1991-08-31

    Table-of-contents and text fragments from the report: a classification of software systems (real-time systems, non-real-time systems, common characterizations of all software systems); a non-optimal guarantee test theorem, Chetto's optimal guarantee test theorem, and an extended guarantee test theorem for the multistate case. The classification subdivides all software systems according to the way in which they operate, such as interactive, non-interactive, and real-time.

  17. Online gaming for learning optimal team strategies in real time

    NASA Astrophysics Data System (ADS)

    Hudas, Gregory; Lewis, F. L.; Vamvoudakis, K. G.

    2010-04-01

    This paper first presents an overall view of dynamical decision-making in teams, both cooperative and competitive. Strategies for team decision problems, including optimal control and zero-sum two-player games (H-infinity control), are normally solved offline by solving associated matrix equations such as the Riccati equation. With that approach, however, players cannot change their objectives online in real time without calling for a completely new offline solution for the new strategies. Therefore, in this paper we give a method for learning optimal team strategies online in real time as team dynamical play unfolds. In the linear quadratic regulator case, for instance, the method learns the Riccati equation solution online without ever solving the Riccati equation. This allows for truly dynamical team decisions in which objective functions can change in real time and the system dynamics can be time-varying.
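    What the online learner converges to can be shown offline: in the scalar linear quadratic regulator case, the Riccati fixed point and the optimal gain follow from simple value iteration. The plant and cost parameters below are illustrative, and this offline iteration is exactly the computation the paper's online method avoids.

```python
# Scalar LQR sketch: offline discrete-time Riccati value iteration.

A, B, Q, R = 1.1, 0.5, 1.0, 1.0      # unstable scalar plant, quadratic costs

P = 0.0
for _ in range(200):                  # Riccati iteration to the fixed point
    P = Q + A * P * A - (A * P * B) ** 2 / (R + B * P * B)

K = (A * P * B) / (R + B * P * B)     # optimal feedback gain, u = -K x
print(round(P, 3), round(K, 3))       # closed loop A - B*K is stable
```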

  18. Retreatment Predictions in Odontology by means of CBR Systems.

    PubMed

    Campo, Livia; Aliaga, Ignacio J; De Paz, Juan F; García, Alvaro Enrique; Bajo, Javier; Villarubia, Gabriel; Corchado, Juan M

    2016-01-01

    The field of odontology requires an appropriate adjustment of treatments according to the circumstances of each patient. A follow-up treatment for a patient experiencing problems from a previous procedure such as endodontic therapy, for example, may not necessarily preclude the possibility of extraction. It is therefore necessary to investigate new solutions aimed at analyzing data and, with regard to the given values, determine whether dental retreatment is required. In this work, we present a decision support system which applies the case-based reasoning (CBR) paradigm, specifically designed to predict the practicality of performing or not performing a retreatment. Thus, the system uses previous experiences to provide new predictions, which is completely innovative in the field of odontology. The proposed prediction technique includes an innovative combination of methods that minimizes false negatives to the greatest possible extent. False negatives refer to a prediction favoring a retreatment when in fact it would be ineffective. The combination of methods is performed by applying an optimization problem to reduce incorrect classifications and takes into account different parameters, such as precision, recall, and statistical probabilities. The proposed system was tested in a real environment and the results obtained are promising.
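    The retrieve-and-reuse cycle of CBR, together with a decision threshold tuned to avoid one error type, can be sketched with hypothetical (non-clinical) cases; the paper's actual combination of methods and its precision/recall optimization are far richer.

```python
# Retrieve-and-reuse CBR sketch. Features, cases, and the threshold are
# all hypothetical; a threshold below 0.5 biases the vote toward one
# error type, echoing the paper's error-minimization goal.

def retrieve(case_base, query, k=3):
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return sorted(case_base, key=lambda c: dist(c[0], query))[:k]

def predict_effective(case_base, query, threshold=0.4):
    neighbors = retrieve(case_base, query)
    share = sum(label for _, label in neighbors) / len(neighbors)
    return share >= threshold        # reuse: vote of the nearest past cases

# (features, retreatment effective? 1/0) - hypothetical past cases
cases = [((1.0, 0.2), 1), ((0.9, 0.3), 1), ((0.2, 0.9), 0),
         ((0.1, 0.8), 0), ((0.8, 0.1), 1)]
print(predict_effective(cases, (0.85, 0.25)))   # → True
```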

  19. Retreatment Predictions in Odontology by means of CBR Systems

    PubMed Central

    Campo, Livia; Aliaga, Ignacio J.; García, Alvaro Enrique; Villarubia, Gabriel; Corchado, Juan M.

    2016-01-01

    The field of odontology requires an appropriate adjustment of treatments according to the circumstances of each patient. A follow-up treatment for a patient experiencing problems from a previous procedure such as endodontic therapy, for example, may not necessarily preclude the possibility of extraction. It is therefore necessary to investigate new solutions aimed at analyzing data and, with regard to the given values, determine whether dental retreatment is required. In this work, we present a decision support system which applies the case-based reasoning (CBR) paradigm, specifically designed to predict the practicality of performing or not performing a retreatment. Thus, the system uses previous experiences to provide new predictions, which is completely innovative in the field of odontology. The proposed prediction technique includes an innovative combination of methods that minimizes false negatives to the greatest possible extent. False negatives refer to a prediction favoring a retreatment when in fact it would be ineffective. The combination of methods is performed by applying an optimization problem to reduce incorrect classifications and takes into account different parameters, such as precision, recall, and statistical probabilities. The proposed system was tested in a real environment and the results obtained are promising. PMID:26884749

  20. Surflex-Dock: Docking benchmarks and real-world application

    NASA Astrophysics Data System (ADS)

    Spitzer, Russell; Jain, Ajay N.

    2012-06-01

    Benchmarks for molecular docking have historically focused on re-docking the cognate ligand of a well-determined protein-ligand complex to measure geometric pose prediction accuracy, while measurement of virtual screening performance has focused on increasingly large and diverse sets of target protein structures, cognate ligands, and various types of decoy sets. Here, pose prediction is reported on the Astex Diverse set of 85 protein-ligand complexes, and virtual screening performance is reported on the DUD set of 40 protein targets. In both cases, prepared structures of targets and ligands were provided by symposium organizers. The re-prepared data sets yielded results not significantly different from previous reports of Surflex-Dock on the two benchmarks. Minor changes to protein coordinates resulting from complex pre-optimization had large effects on observed performance, highlighting the limitations of cognate-ligand re-docking for pose prediction assessment. Docking protocols developed for cross-docking, which address protein flexibility and produce discrete families of predicted poses, produced substantially better pose prediction performance. Virtual screening performance was shown to benefit from employing and combining multiple screening methods: docking, 2D molecular similarity, and 3D molecular similarity. In addition, use of multiple protein conformations significantly improved screening enrichment.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Brennan T; Jager, Yetta; March, Patrick

    Reservoir releases are typically operated to maximize the efficiency of hydropower production and the value of hydropower produced. In practice, ecological considerations are limited to those required by law. We first describe reservoir optimization methods that include mandated constraints on environmental and other water uses. Next, we describe research to formulate and solve reservoir optimization problems involving both energy and environmental water needs as objectives. Evaluating ecological objectives is a challenge in these problems for several reasons. First, it is difficult to predict how biological populations will respond to flow release patterns. This problem can be circumvented by using ecological models. Second, most optimization methods require complex ecological responses to flow to be quantified by a single metric, preferably a currency that can also represent hydropower benefits. Ecological valuation of instream flows can make optimization methods that require a single currency for the effects of flow on energy and river ecology possible. Third, holistic reservoir optimization problems are unlikely to be structured such that simple solution methods can be used, necessitating the use of flexible numerical methods. One strong advantage of optimal control is the ability to plan for the effects of climate change. We present ideas for developing holistic methods to the point where they can be used for real-time operation of reservoirs. We suggest that developing ecologically sound optimization tools should be a priority for hydropower in light of the increasing value placed on sustaining both the ecological and energy benefits of riverine ecosystems long into the future.
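    A toy version of hydropower optimization with a mandated environmental constraint: choose releases for four periods to maximize an energy-value proxy subject to a minimum instream flow and a fixed water budget. All numbers are illustrative, and brute-force enumeration stands in for a real solver.

```python
from itertools import product

# Toy reservoir scheduling: maximize energy value subject to a minimum
# environmental flow and a daily water budget. Units are illustrative.

MIN_FLOW, BUDGET, HEAD = 2, 24, 10.0
PRICE = [1.0, 1.0, 3.0, 2.0]          # energy value per period

best, best_value = None, float("-inf")
for releases in product(range(MIN_FLOW, 13), repeat=4):
    if sum(releases) != BUDGET:
        continue                       # water balance must close
    value = sum(p * HEAD * r for p, r in zip(PRICE, releases))
    if value > best_value:
        best, best_value = releases, value

print(best, best_value)                # → (2, 2, 12, 8) 560.0
```

    The mandated minimum flow is what keeps water in the river during low-price periods; a richer formulation would replace that hard constraint with an ecological objective in its own currency.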

  2. Instrumentation for optimizing an underground coal-gasification process

    NASA Astrophysics Data System (ADS)

    Seabaugh, W.; Zielinski, R. E.

    1982-06-01

    While the United States has a coal resource base of 6.4 trillion tons, only seven percent is presently recoverable by mining. The process of in-situ gasification can recover another twenty-eight percent of this vast resource; however, viable technology must be developed for effective in-situ recovery. The key to this technology is a system that can optimize and control the process in real time. An instrumentation system is described that optimizes the composition of the injection gas, controls the in-situ process, and conditions the product gas for maximum utilization. The key elements of this system are Monsanto PRISM Systems, a real-time analytical system, and a real-time data acquisition and control system. This system provides for complete automation of the process but can easily be overridden by manual control. The use of this cost-effective system can provide process optimization and is an effective element in developing a viable in-situ technology.

  3. An Optimal Static Scheduling Algorithm for Hard Real-Time Systems Specified in a Prototyping Language

    DTIC Science & Technology

    1989-12-01

    Fragments from the report: Since all nonpreemptive schedules are contained in the set of all preemptive schedules, the optimal value of the objective in the preemptive case is at least a lower bound on the optimal value for the nonpreemptive schedules. In hard real-time systems, tasks are also distinguished as preemptable and nonpreemptable.

  4. Calibrating emergent phenomena in stock markets with agent based models

    PubMed Central

    Sornette, Didier

    2018-01-01

    Since the 2008 financial crisis, agent-based models (ABMs), which account for out-of-equilibrium dynamics, heterogeneous preferences, time horizons and strategies, have often been envisioned as the new frontier that could revolutionise and displace the more standard models and tools in economics. However, their adoption and generalisation is drastically hindered by the absence of general reliable operational calibration methods. Here, we start with a different calibration angle that qualifies an ABM for its ability to achieve abnormal trading performance with respect to the buy-and-hold strategy when fed with real financial data. Starting from the common definition of standard minority and majority agents with binary strategies, we prove their equivalence to optimal decision trees. This efficient representation allows us to exhaustively test all meaningful single-agent models for their potential anomalous investment performance, which we apply to the NASDAQ Composite index over the last 20 years. We uncover large significant predictive power, with anomalous Sharpe ratio and directional accuracy, in particular during the dotcom bubble and crash and the 2008 financial crisis. A principal component analysis reveals transient convergence between the anomalous minority and majority models. A novel combination of the optimal single-agent models of both classes into a two-agent model leads to remarkably superior investment performance, especially during periods of bubbles and crashes. Our design opens the field of ABMs to the construction of novel types of advanced warning systems for market crises, based on the emergent collective intelligence of ABMs built on carefully designed optimal decision trees that can be reverse engineered from real financial data. PMID:29499049

  5. Optimal Reference Gene Selection for Expression Studies in Human Reticulocytes.

    PubMed

    Aggarwal, Anu; Jamwal, Manu; Viswanathan, Ganesh K; Sharma, Prashant; Sachdeva, ManUpdesh S; Bansal, Deepak; Malhotra, Pankaj; Das, Reena

    2018-05-01

    Reference genes are indispensable for normalizing mRNA levels across samples in real-time quantitative PCR. Their expression levels vary under different experimental conditions and because of several inherent characteristics. Appropriate reference gene selection is thus critical for gene-expression studies. This study aimed at selecting optimal reference genes for gene-expression analysis of reticulocytes and at validating them in hereditary spherocytosis (HS) and β-thalassemia intermedia (βTI) patients. Seven reference genes (PGK1, MPP1, HPRT1, ACTB, GAPDH, RN18S1, and SDHA) were selected on the basis of published reports. Real-time quantitative PCR was performed on reticulocytes from 20 healthy volunteers, 15 HS patients, and 10 βTI patients. Threshold cycle values were compared using the fold-change method and RefFinder software. The stable reference genes recommended by RefFinder were validated against SLC4A1 and flow cytometric eosin-5'-maleimide binding assay values in HS patients, and against HBG2 and high-performance liquid chromatography-derived percentage of hemoglobin F in βTI. Comprehensive ranking predicted MPP1 and GAPDH as optimal reference genes for reticulocytes that were not affected in HS and βTI. This was further confirmed upon validation with eosin-5'-maleimide results and percentage of hemoglobin F in HS and βTI patients, respectively. Hence, MPP1 and GAPDH are good reference genes for reticulocyte expression studies compared with ACTB and RN18S1, the two most commonly used reference genes. Copyright © 2018 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
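    A simplified sketch of stability ranking: genes whose threshold-cycle (Ct) values vary least across samples make the most stable references. RefFinder aggregates several such measures (geNorm, NormFinder, BestKeeper, comparative delta-Ct); plain standard deviation stands in here, and all Ct values are hypothetical.

```python
import statistics

# Rank candidate reference genes by Ct variability (hypothetical data).
ct = {
    "MPP1":   [24.1, 24.3, 24.0, 24.2],
    "GAPDH":  [18.9, 19.1, 19.0, 19.0],
    "ACTB":   [20.1, 22.4, 19.5, 23.0],
    "RN18S1": [9.8, 12.0, 10.5, 13.1],
}

ranked = sorted(ct, key=lambda g: statistics.pstdev(ct[g]))
print(ranked)      # most to least stable: ['GAPDH', 'MPP1', 'RN18S1', 'ACTB']
```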

  6. Calibrating emergent phenomena in stock markets with agent based models.

    PubMed

    Fievet, Lucas; Sornette, Didier

    2018-01-01

    Since the 2008 financial crisis, agent-based models (ABMs), which account for out-of-equilibrium dynamics, heterogeneous preferences, time horizons and strategies, have often been envisioned as the new frontier that could revolutionise and displace the more standard models and tools in economics. However, their adoption and generalisation are drastically hindered by the absence of general, reliable operational calibration methods. Here, we start from a different calibration angle that qualifies an ABM by its ability to achieve abnormal trading performance, relative to the buy-and-hold strategy, when fed with real financial data. Starting from the common definition of standard minority and majority agents with binary strategies, we prove their equivalence to optimal decision trees. This efficient representation allows us to exhaustively test all meaningful single-agent models for potential anomalous investment performance, which we apply to the NASDAQ Composite index over the last 20 years. We uncover large, significant predictive power, with anomalous Sharpe ratios and directional accuracy, in particular during the dotcom bubble and crash and the 2008 financial crisis. A principal component analysis reveals transient convergence between the anomalous minority and majority models. A novel combination of the optimal single-agent models of both classes into a two-agent model leads to remarkably superior investment performance, especially during periods of bubbles and crashes. Our design opens the field of ABMs to the construction of novel types of advance warning systems for market crises, based on the emergent collective intelligence of ABMs built on carefully designed optimal decision trees that can be reverse engineered from real financial data.
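    To illustrate the agent-to-decision-tree equivalence described above (a hypothetical sketch, not the authors' implementation): a binary-strategy agent with memory m is a lookup table over the 2^m possible sign histories, i.e. a depth-m binary decision tree. A "majority" agent follows the sign most often observed after the current history; a "minority" agent would simply flip the prediction.

```python
from collections import defaultdict

def majority_agent(signs, m):
    """Hypothetical 'majority' agent: for each history of the last m return
    signs (+1/-1), predict the sign most often observed after that history
    so far. The lookup table is exactly a depth-m binary decision tree."""
    counts = defaultdict(lambda: [0, 0])  # history -> [n_down, n_up]
    preds = []
    for t in range(m, len(signs)):
        key = tuple(signs[t - m:t])
        n_down, n_up = counts[key]
        preds.append(1 if n_up >= n_down else -1)   # follow the crowd
        counts[key][(signs[t] + 1) // 2] += 1       # learn after observing
    return preds

def directional_accuracy(preds, signs, m):
    """Fraction of correctly predicted next-step signs."""
    return sum(p == s for p, s in zip(preds, signs[m:])) / len(preds)
```

On a perfectly alternating series, for example, a memory-1 majority agent learns the pattern after two observations and is nearly always right thereafter.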

  7. Small-time Scale Network Traffic Prediction Based on Complex-valued Neural Network

    NASA Astrophysics Data System (ADS)

    Yang, Bin

    2017-07-01

    Accurate models play an important role in capturing the significant characteristics of network traffic, analyzing network dynamics, and improving forecasting accuracy for system dynamics. In this study, a complex-valued neural network (CVNN) model is proposed to further improve the accuracy of small-time scale network traffic forecasting. An artificial bee colony (ABC) algorithm is used to optimize the complex-valued and real-valued parameters of the CVNN model. Small-time scale traffic measurement data, namely TCP traffic data, are used to test the performance of the CVNN model. Experimental results reveal that the CVNN model forecasts the small-time scale network traffic measurement data very accurately.
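    As an illustration of the optimizer only (not of the CVNN itself), a minimal artificial bee colony sketch for a generic real-valued objective might look like the following; all parameter names and defaults are illustrative, and the fitness transform assumes a non-negative objective.

```python
import random

def abc_minimize(f, dim, bounds, n_food=10, limit=20, iters=200, seed=0):
    """Minimal ABC sketch: food sources are candidate parameter vectors;
    employed/onlooker bees perturb one coordinate toward a random partner,
    and scouts reset sources that stagnate for more than `limit` trials."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fits = [f(x) for x in foods]
    trials = [0] * n_food

    def try_neighbour(i):
        j = rng.randrange(dim)
        k = rng.choice([p for p in range(n_food) if p != i])
        cand = list(foods[i])
        phi = rng.uniform(-1.0, 1.0)
        cand[j] = min(hi, max(lo, cand[j] + phi * (cand[j] - foods[k][j])))
        fc = f(cand)
        if fc < fits[i]:
            foods[i], fits[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):              # employed bees
            try_neighbour(i)
        total = sum(1.0 / (1.0 + v) for v in fits)
        for _ in range(n_food):              # onlookers: fitness-proportional
            r, acc = rng.uniform(0.0, total), 0.0
            for i in range(n_food):
                acc += 1.0 / (1.0 + fits[i])
                if acc >= r:
                    try_neighbour(i)
                    break
        for i in range(n_food):              # scouts replace stagnant sources
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fits[i], trials[i] = f(foods[i]), 0

    best = min(range(n_food), key=lambda i: fits[i])
    return foods[best], fits[best]
```

In the paper's setting, `f` would be the traffic-forecast error of a CVNN as a function of its (complex- and real-valued) parameters.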

  8. Connectivity-enhanced route selection and adaptive control for the Chevrolet Volt

    DOE PAGES

    Gonder, Jeffrey; Wood, Eric; Rajagopalan, Sai

    2016-01-01

    The National Renewable Energy Laboratory and General Motors evaluated connectivity-enabled efficiency enhancements for the Chevrolet Volt. A high-level model was developed to predict vehicle fuel and electricity consumption based on driving characteristics and vehicle state inputs. These techniques were leveraged to optimize energy efficiency via green routing and intelligent control mode scheduling, which were evaluated using prospective driving routes between tens of thousands of real-world origin/destination pairs. The overall energy savings potential of green routing and intelligent mode scheduling was estimated at 5% and 3%, respectively. Furthermore, these represent substantial opportunities considering that they only require software adjustments to implement.

  9. Criticality of Adaptive Control Dynamics

    NASA Astrophysics Data System (ADS)

    Patzelt, Felix; Pawelzik, Klaus

    2011-12-01

    We show that stabilization of a dynamical system can annihilate observable information about its structure. This mechanism induces critical points as attractors in locally adaptive control. It also reveals that previously reported criticality in simple controllers is caused by adaptation and not by other controller details. We apply these results to a real-system example: human balancing behavior. A model of predictive adaptive closed-loop control subject to some realistic constraints is introduced and shown to reproduce experimental observations in unprecedented detail. Our results suggest that observed error distributions between the Lévy and Gaussian regimes may reflect a nearly optimal compromise between the elimination of random local trends and rare large errors.

  10. Lessons Learned from AIRS: Improved Determination of Surface and Atmospheric Temperatures Using Only Shortwave AIRS Channels

    NASA Technical Reports Server (NTRS)

    Susskind, Joel

    2011-01-01

    This slide presentation reviews the use of shortwave channels available to the Atmospheric Infrared Sounder (AIRS) to improve the determination of surface and atmospheric temperatures. The AIRS instrument is compared with the Infrared Atmospheric Sounding Interferometer (IASI) on board the MetOp-A satellite. The objectives of AIRS/AMSU were to (1) provide real-time observations to improve numerical weather prediction via data assimilation, (2) provide observations to measure and explain interannual variability and trends, and (3) use AIRS product error estimates to allow QC optimized for each application. Successive versions of the AIRS retrieval methodology have shown significant improvement.

  11. An improved grey model for the prediction of real-time GPS satellite clock bias

    NASA Astrophysics Data System (ADS)

    Zheng, Z. Y.; Chen, Y. Q.; Lu, X. S.

    2008-07-01

    In real-time GPS precise point positioning (PPP), reliable real-time prediction of the satellite clock bias (SCB) is a key requirement. The behaviour of a space-borne GPS atomic clock is difficult to describe deterministically because of its high frequency, sensitivity, and susceptibility to perturbation; this matches the premise of grey model (GM) theory, i.e., the variation of the SCB can be treated as a grey system. Firstly, given the limitations of the quadratic polynomial (QP) model and the traditional GM for SCB prediction, a modified GM(1,1) is put forward in this paper. Then, taking GPS SCB data as an example, we analyse clock bias prediction with different sample intervals, the relationship between the GM exponent and prediction accuracy, and the precision of the GM compared with the QP model, and we derive general rules relating the SCB type to the GM exponent. Finally, to test the reliability and validity of the proposed modified GM, we analyse its prediction precision using the IGS clock bias ephemeris product as a reference. The results show that the modified GM is reliable and valid for predicting GPS SCB and can offer high-precision SCB prediction for real-time GPS PPP.
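    For background, the classic GM(1,1) predictor that the modified model builds on can be sketched as follows. This is the standard textbook form, not the paper's modification: accumulate the series, fit the whitened equation dx1/dt + a*x1 = b by least squares on the background values, then difference the fitted exponential back to forecasts.

```python
import math

def gm11_forecast(x0, n_ahead=1):
    """Classic GM(1,1) sketch: forecast `n_ahead` future values of x0."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]                # accumulation (AGO)
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]    # background values
    y = x0[1:]
    m = n - 1
    szz = sum(v * v for v in z)
    sz = sum(z)
    sy = sum(y)
    szy = sum(v * w for v, w in zip(z, y))
    det = szz * m - sz * sz
    a = (sz * sy - m * szy) / det        # development coefficient
    b = (szz * sy - sz * szy) / det      # grey input

    def x1_hat(k):                       # fitted accumulated curve
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    return [x1_hat(n + i) - x1_hat(n + i - 1) for i in range(n_ahead)]
```

On a near-exponentially decaying series (a rough analogue of a drifting clock correction), the one-step forecast is close to exact, since GM(1,1) fits exponential trends by construction.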

  12. WE-AB-303-06: Combining DAO with MV + KV Optimization to Improve Skin Dose Sparing with Real-Time Fluoroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grelewicz, Z; Wiersma, R

    Purpose: Real-time fluoroscopy may allow for improved patient positioning and tumor tracking, particularly in the treatment of lung tumors. To mitigate the effects of the imaging dose, previous studies have demonstrated the effect of including both imaging dose and imaging constraints in the inverse treatment planning objective function. That method of combined MV+kV optimization may result in plans with treatment beams chosen to allow for more gentle imaging beam-on times. Direct-aperture optimization (DAO) is also known to produce treatment plans with fluence maps more conducive to lower beam-on times. Therefore, in this work we demonstrate the feasibility of a combination of DAO and MV+kV optimization for further optimized real-time kV imaging. Methods: Therapeutic and imaging beams were modeled in the EGSnrc Monte Carlo environment and applied to a patient model for a previously treated lung patient to provide dose influence matrices from DOSXYZnrc. An MV+kV IMRT DAO treatment planning system was developed to compare DAO treatment plans with and without MV+kV optimization. The objective function was optimized using simulated annealing. To allow for comparisons between different cases of the stochastically optimized plans, the optimization was repeated twenty times. Results: Across twenty optimizations, combined MV+kV IMRT resulted in an average 12.8% reduction in peak skin dose. Both non-optimized and MV+kV-optimized imaging beams delivered, on average, a mean dose of approximately 1 cGy per fraction to the target, with peak doses to the target of approximately 6 cGy per fraction. Conclusion: When using DAO, MV+kV optimization is shown to result in improvements to plan quality in terms of skin dose, compared with MV optimization with non-optimized kV imaging. The combination of DAO and MV+kV optimization may allow for real-time imaging without excessive imaging dose.
Financial support for the work has been provided in part by NIH Grant T32 EB002103, ACS RSG-13-313-01-CCE, and NIH S10 RR021039 and P30 CA14599 grants. The contents of this submission do not necessarily represent the official views of any of the supporting organizations.

  13. Optimized Delivery System Achieves Enhanced Endomyocardial Stem Cell Retention

    PubMed Central

    Behfar, Atta; Latere, Jean-Pierre; Bartunek, Jozef; Homsy, Christian; Daro, Dorothee; Crespo-Diaz, Ruben J.; Stalboerger, Paul G.; Steenwinckel, Valerie; Seron, Aymeric; Redfield, Margaret M.; Terzic, Andre

    2014-01-01

    Background Regenerative cell-based therapies are associated with limited myocardial retention of delivered stem cells. The objective of this study is to develop an endocardial delivery system for enhanced cell retention. Methods and Results Stem cell retention was simulated in silico using one and three-dimensional models of tissue distortion and compliance associated with delivery. Needle designs, predicted to be optimal, were accordingly engineered using nitinol – a nickel and titanium alloy displaying shape memory and super-elasticity. Biocompatibility was tested with human mesenchymal stem cells. Experimental validation was performed with species-matched cells directly delivered into Langendorff-perfused porcine hearts or administered percutaneously into the endocardium of infarcted pigs. Cell retention was quantified by flow cytometry and real time quantitative polymerase chain reaction methodology. Models, computing optimal distribution of distortion calibrated to favor tissue compliance, predicted that a 75°-curved needle featuring small-to-large graded side holes would ensure the highest cell retention profile. In isolated hearts, the nitinol curved needle catheter (C-Cath) design ensured 3-fold superior stem cell retention compared to a standard needle. In the setting of chronic infarction, percutaneous delivery of stem cells with C-Cath yielded a 37.7±7.1% versus 10.0±2.8% retention achieved with a traditional needle, without impact on biocompatibility or safety. Conclusions Modeling guided development of a nitinol-based curved needle delivery system with incremental side holes achieved enhanced myocardial stem cell retention. PMID:24326777

  14. Real-time Tsunami Inundation Prediction Using High Performance Computers

    NASA Astrophysics Data System (ADS)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2014-12-01

    Recently, off-shore tsunami observation stations based on cabled ocean-bottom pressure gauges are actively being deployed, especially in Japan. These cabled systems are designed to provide real-time tsunami data before tsunamis reach coastlines for disaster mitigation purposes. To realize the full benefit of these observations, real-time analysis techniques that make effective use of these data are necessary. A representative study by Tsushima et al. (2009) proposed a method that provides instant tsunami source prediction based on the acquired tsunami waveform data. As time passes, the prediction is improved by using updated waveform data. After a tsunami source is predicted, tsunami waveforms are synthesized from pre-computed tsunami Green's functions of the linear long-wave equations. Tsushima et al. (2014) updated the method by combining the tsunami waveform inversion with an instant inversion of coseismic crustal deformation and improved the prediction accuracy and speed in the early stages. For disaster mitigation purposes, real-time predictions of tsunami inundation are also important. In this study, we discuss the possibility of real-time tsunami inundation predictions, which require faster-than-real-time tsunami inundation simulation in addition to instant tsunami source analysis. Although the computational load of solving the non-linear shallow water equations for inundation prediction is large, it has become tractable through recent developments in high performance computing technologies. We conducted parallel computations of tsunami inundation and achieved 6.0 TFLOPS using 19,000 CPU cores. We employed a leap-frog finite difference method with nested staggered grids whose resolutions range from 405 m to 5 m. The resolution ratio of each nested domain was 1/3. The total number of grid points was 13 million, and the time step was 0.1 seconds. Tsunami sources of the 2011 Tohoku-oki earthquake were tested.
The inundation prediction up to 2 hours after the earthquake took about 2 minutes, which would be fast enough for practical tsunami inundation prediction. In the presentation, the computational performance of our faster-than-real-time tsunami inundation model will be shown, and the tsunami wave source analysis preferable for an accurate inundation prediction will also be discussed.
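    The core update of a staggered-grid long-wave solver of the kind described above can be sketched in one dimension. This is a toy, linear analogue only (the real model is two-dimensional, non-linear, and nested); depth, grid spacing and time step are assumed constant, and the boundaries are closed walls.

```python
import math

def shallow_water_1d(eta0, depth, dx, dt, g=9.81, steps=100):
    """Toy 1-D linear long-wave solver on a staggered grid: surface
    elevation eta at cell centres, velocity u at cell faces. u is updated
    from the elevation gradient, then eta from the velocity divergence."""
    n = len(eta0)
    eta = list(eta0)
    u = [0.0] * (n + 1)            # face velocities; u[0] = u[n] = 0 (walls)
    for _ in range(steps):
        for i in range(1, n):      # momentum: du/dt = -g * d(eta)/dx
            u[i] -= g * dt * (eta[i] - eta[i - 1]) / dx
        for i in range(n):         # continuity: d(eta)/dt = -H * du/dx
            eta[i] -= depth * dt * (u[i + 1] - u[i]) / dx
    return eta
```

With closed boundaries the scheme conserves total water volume exactly (the flux terms telescope), which makes a convenient sanity check; an initial hump splits into two waves travelling at sqrt(g*H).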

  15. Ideal versus real: simulated annealing of experimentally derived and geometric platinum nanoparticles

    NASA Astrophysics Data System (ADS)

    Ellaby, Tom; Aarons, Jolyon; Varambhia, Aakash; Jones, Lewys; Nellist, Peter; Ozkaya, Dogan; Sarwar, Misbah; Thompsett, David; Skylaris, Chris-Kriton

    2018-04-01

    Platinum nanoparticles find significant use as catalysts in industrial applications such as fuel cells. Research into their design has focussed heavily on nanoparticle size and shape as they greatly influence activity. Using high throughput, high precision electron microscopy, the structures of commercially available Pt catalysts have been determined, and we have used classical and quantum atomistic simulations to examine and compare them with geometric cuboctahedral and truncated octahedral structures. A simulated annealing procedure was used both to explore the potential energy surface at different temperatures, and also to assess the effect on catalytic activity that annealing would have on nanoparticles with different geometries and sizes. The differences in response to annealing between the real and geometric nanoparticles are discussed in terms of thermal stability, coordination number and the proportion of optimal binding sites on the surface of the nanoparticles. We find that annealing both experimental and geometric nanoparticles results in structures that appear similar in shape and predicted activity, using oxygen adsorption as a measure. Annealing is predicted to increase the catalytic activity in all cases except the truncated octahedra, where it has the opposite effect. As our simulations have been performed with a classical force field, we also assess its suitability to describe the potential energy of such nanoparticles by comparing with large scale density functional theory calculations.
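    The annealing procedure itself can be illustrated with a generic Metropolis sketch. This is a minimal, hypothetical example on a toy one-dimensional energy function, not the classical force-field simulation over full atomic configurations used in the study.

```python
import math
import random

def simulated_anneal(energy, x0, step, t_start=1.0, t_end=1e-3,
                     n_steps=5000, seed=0):
    """Generic simulated annealing: Metropolis acceptance with a geometric
    cooling schedule from t_start down to t_end."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    cool = (t_end / t_start) ** (1.0 / n_steps)
    t = t_start
    for _ in range(n_steps):
        cand = x + rng.uniform(-step, step)
        ec = energy(cand)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if ec <= e or rng.random() < math.exp((e - ec) / t):
            x, e = cand, ec
            if e < best_e:
                best_x, best_e = x, e
        t *= cool
    return best_x, best_e

def rough(x):
    """Toy 1-D energy landscape: global minimum near x = -0.30,
    surrounded by several higher local minima."""
    return x * x + 2.0 * math.sin(5.0 * x) + 2.0
```

The early high-temperature phase lets the walker escape local minima (analogous to exploring the potential energy surface at different temperatures), while the cooling phase settles it into a deep basin.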

  16. The Case For Prediction-based Best-effort Real-time Systems.

    DTIC Science & Technology

    1999-01-01

    Dinda, Peter A.; Kallivokas, Loukas. Carnegie Mellon University, Pittsburgh, PA 15213. Approved for public release; distribution unlimited. A version of this paper appeared in the Seventh Workshop on Parallel and Distributed Real-Time Systems.

  17. Safer passenger car front shapes for pedestrians: A computational approach to reduce overall pedestrian injury risk in realistic impact scenarios.

    PubMed

    Li, Guibing; Yang, Jikuang; Simms, Ciaran

    2017-03-01

    Vehicle front shape has a significant influence on pedestrian injuries, and the optimal design for overall pedestrian protection remains an elusive goal, especially considering the variability of vehicle-to-pedestrian accident scenarios. Therefore, this study aims to develop and evaluate an efficient framework for vehicle front shape optimization for pedestrian protection accounting for the broad range of real-world impact scenarios and their distributions in recent accident data. Firstly, a framework for vehicle front shape optimization for pedestrian protection was developed based on coupling multi-body simulations with a genetic algorithm. This framework was then applied to optimizing passenger car front shape for pedestrian protection, and its predictions were evaluated using accident data and kinematic analyses. The results indicate that the optimization shows good convergence, its predictions are corroborated by the available accident data, and the framework can distinguish 'good' and 'poor' vehicle front shapes for pedestrian safety. Thus, it is feasible and reliable to use the optimization framework for vehicle front shape optimization for reducing overall pedestrian injury risk. The results also show the importance of considering the broad range of impact scenarios in vehicle front shape optimization. A safe passenger car for overall pedestrian protection should have a wide and flat bumper (covering pedestrians' legs from the lower leg up to the shaft of the upper leg with generally even contacts), a bonnet leading edge height around 750 mm, a short bonnet (<800 mm) with a shallow or steep angle (either >17° or <12°) and a shallow windscreen (≤30°). Sensitivity studies based on simulations at the population level indicate that the demands for a safe passenger car front shape for head and leg protection are generally consistent, but partially conflict with pelvis protection.
In particular, both head and leg injury risk increase with increasing bumper lower height and depth, and decrease with increasing bonnet leading edge height, while pelvis injury risk increases with increasing bonnet leading edge height. However, the effects of bonnet leading edge height and windscreen design on head injury risk are complex and require further analysis. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Individual Sawtooth Pacing by Synchronized ECCD in TCV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodman, T. P.; Felici, F.; Canal, G.

    2011-12-23

    Previous real-time sawtooth control scenarios using EC actuators have attempted to shorten or lengthen the sawtooth period by optimally positioning the EC absorption near the q = 1 surface. In new experiments we demonstrate for the first time that individual sawtooth crashes can be repetitively induced at predictable times by reducing the stabilizing ECCD power after a predetermined time from the preceding crash. Other stabilizing actuators (e.g. ICRF, NBI) are expected to produce similar effects. Armed with these results, we present a new sawtooth / NTM control paradigm for improved performance in burning plasmas. The potential appearance of neo-classical tearing modes, triggered by long-period sawtooth crashes even at low beta, becomes predictable and therefore amenable to preemptive ECCD. The ITER Electron Cyclotron Upper Launcher (EC-UL) design incorporates the functionalities needed for this method to be applied. The methodology and associated TCV experiments will be presented.

  19. Bayesian Ensemble Trees (BET) for Clustering and Prediction in Heterogeneous Data

    PubMed Central

    Duan, Leo L.; Clancy, John P.; Szczesniak, Rhonda D.

    2016-01-01

    We propose a novel “tree-averaging” model that utilizes an ensemble of classification and regression trees (CART). Each constituent tree is estimated with a subset of similar data. We treat this grouping of subsets as Bayesian Ensemble Trees (BET) and model them as a Dirichlet process. We show that BET determines the optimal number of trees by adapting to the data heterogeneity. Compared with other ensemble methods, BET requires far fewer trees and shows equivalent prediction accuracy using weighted averaging. Moreover, each tree in BET provides a variable selection criterion and an interpretation for each subset. We developed an efficient estimation procedure with improved estimation strategies in both CART and mixture models. We demonstrate these advantages of BET with simulations and illustrate the approach with a real-world data example involving regression of lung function measurements obtained from patients with cystic fibrosis. Supplemental materials are available online. PMID:27524872
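    The tree-averaging idea can be illustrated with a much cruder stand-in: bootstrap an ensemble of depth-1 regression trees and weight each tree by its in-sample fit. This sketch is hypothetical and deliberately omits BET's Dirichlet-process clustering and Bayesian posterior weighting.

```python
import random

def fit_stump(xs, ys):
    """Depth-1 regression tree: the split threshold minimising squared error,
    with a constant prediction (the mean) on each side."""
    best = None
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    for k in range(1, len(xs)):
        left = [ys[order[i]] for i in range(k)]
        right = [ys[order[i]] for i in range(k, len(xs))]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((v - ml) ** 2 for v in left)
               + sum((v - mr) ** 2 for v in right))
        if best is None or sse < best[0]:
            best = (sse, xs[order[k - 1]], ml, mr)
    _, thr, ml, mr = best
    return lambda x: ml if x <= thr else mr

def weighted_tree_average(xs, ys, n_trees=20, seed=1):
    """Bootstrap an ensemble of stumps; weight each by inverse in-sample
    error (a crude stand-in for posterior model weights)."""
    rng = random.Random(seed)
    trees, weights = [], []
    for _ in range(n_trees):
        idx = [rng.randrange(len(xs)) for _ in range(len(xs))]
        t = fit_stump([xs[i] for i in idx], [ys[i] for i in idx])
        err = sum((t(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
        trees.append(t)
        weights.append(1.0 / (err + 1e-9))
    wsum = sum(weights)
    return lambda x: sum(w * t(x) for w, t in zip(weights, trees)) / wsum
```

On step-shaped data, each bootstrapped stump finds a threshold near the true jump, and the weighted average recovers both plateau levels.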

  20. A real-time control framework for urban water reservoirs operation

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Goedbloed, A.; Schwanenberg, D.

    2012-04-01

    Drinking water demand in urban areas is growing in parallel with the worldwide urban population, and it accounts for an increasing share of total water consumption. Since the delivery of sufficient water volumes to urban areas represents a difficult logistical and economic problem, several metropolitan areas are evaluating the opportunity of constructing relatively small reservoirs within urban areas. Singapore, for example, is developing the so-called 'Four National Taps' strategy, which identifies the maximization of water yields from local, urban catchments as one of the most important water sources. However, while the peculiar location of these reservoirs can provide a certain logistical advantage, it can pose serious difficulties for their daily management. Urban catchments are indeed characterized by large impervious areas: this results in a change of the hydrological cycle, with decreased infiltration and groundwater recharge, and increased surface and river discharges, with higher peak flows, volumes and concentration times. Moreover, the high concentrations of nutrients and sediments characterizing urban discharges can cause further water quality problems. In this critical hydrological context, the effective operation of urban water reservoirs must rely on real-time control techniques, which can exploit hydro-meteorological information available in real time from hydrological and nowcasting models. This work proposes a novel framework for the real-time control of combined water quality and quantity objectives in urban reservoirs. The core of this framework is a non-linear Model Predictive Control (MPC) scheme, which employs the current state of the system, the future discharges furnished by a predictive model, and a further model describing the internal dynamics of the controlled sub-system to determine an optimal control sequence over a finite prediction horizon.
The main advantage of this scheme lies in its reduced computational requirements and its capability of exploiting real-time hydro-meteorological information, which is crucial for the effective operation of these fast-varying hydrological systems. The framework is demonstrated here on the operation of Marina Reservoir (Singapore), whose recent construction in late 2008 increased the effective catchment area to about 50% of the total available. Its operation, which accounts for drinking water supply, flash flood control and water quality standards, is designed by combining the MPC scheme with the process-based hydrological model SOBEK. Extensive simulation experiments show the validity of the proposed framework.
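    The receding-horizon idea can be illustrated with a toy example (not the paper's SOBEK-coupled, non-linear scheme): a single reservoir with a linear mass balance, a small discrete set of release options, and exhaustive enumeration of control sequences over a short horizon. Only the first release of the best sequence is applied before re-optimising.

```python
from itertools import product

def mpc_release(storage, inflow_forecast, target, u_options, horizon=3):
    """Toy MPC step: enumerate all release sequences over the horizon,
    score them by squared deviation from a target storage, and return
    the first release of the best sequence."""
    def cost(seq):
        s, c = storage, 0.0
        for u, q in zip(seq, inflow_forecast):
            s = s + q - u            # reservoir mass balance
            c += (s - target) ** 2   # penalise deviation from target
        return c
    best = min(product(u_options, repeat=horizon), key=cost)
    return best[0]

def simulate(storage, inflows, target, u_options, horizon=3):
    """Closed-loop MPC with a perfect inflow forecast."""
    trace = []
    for t in range(len(inflows) - horizon + 1):
        u = mpc_release(storage, inflows[t:t + horizon], target,
                        u_options, horizon)
        storage += inflows[t] - u
        trace.append(storage)
    return trace
```

With a constant inflow that matches one of the release options, the controller holds storage exactly at the target; with a forecast inflow spike, the same machinery pre-releases water, which is the behaviour that makes MPC attractive for flash-flood control.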

  1. The Linear Quadratic Gaussian Multistage Game with Nonclassical Information Pattern Using a Direct Solution Method

    NASA Astrophysics Data System (ADS)

    Clemens, Joshua William

    Game theory has application across multiple fields, spanning from economic strategy to optimal control of an aircraft and missile on an intercept trajectory. The idea of game theory is fascinating in that we can actually mathematically model real-world scenarios and determine optimal decision making. It may not always be easy to mathematically model certain real-world scenarios, nonetheless, game theory gives us an appreciation for the complexity involved in decision making. This complexity is especially apparent when the players involved have access to different information upon which to base their decision making (a nonclassical information pattern). Here we will focus on the class of adversarial two-player games (sometimes referred to as pursuit-evasion games) with nonclassical information pattern. We present a two-sided (simultaneous) optimization solution method for the two-player linear quadratic Gaussian (LQG) multistage game. This direct solution method allows for further interpretation of each player's decision making (strategy) as compared to previously used formal solution methods. In addition to the optimal control strategies, we present a saddle point proof and we derive an expression for the optimal performance index value. We provide some numerical results in order to further interpret the optimal control strategies and to highlight real-world application of this game-theoretic optimal solution.

  2. "Real-time" disintegration analysis and D-optimal experimental design for the optimization of diclofenac sodium fast-dissolving films.

    PubMed

    El-Malah, Yasser; Nazzal, Sami

    2013-01-01

    The objective of this work was to study the dissolution and mechanical properties of fast-dissolving films prepared from a tertiary mixture of pullulan, polyvinylpyrrolidone and hypromellose. Disintegration studies were performed in real time by probe spectroscopy to detect the onset of film disintegration. Tensile strength and elastic modulus of the films were measured by texture analysis. Disintegration time of the films ranged from 21 to 105 seconds, whereas their mechanical properties ranged from approximately 2 to 49 MPa for tensile strength and 1 to 21 MPa% for Young's modulus. After generating polynomial models correlating the variables using a D-optimal mixture design, an optimal formulation with the desired responses was proposed by the statistical package. For validation, a new film formulation loaded with diclofenac sodium based on the optimized composition was prepared and tested for dissolution and tensile strength. Dissolution of the optimized film was found to commence almost immediately, with 50% of the drug released within one minute. Tensile strength and Young's modulus of the film were 11.21 MPa and 6.78 MPa%, respectively. Real-time spectroscopy in conjunction with statistical design was shown to be very efficient for the optimization and development of non-conventional intraoral delivery systems such as fast-dissolving films.
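    The polynomial-modelling step can be illustrated for a hypothetical binary blend (the study itself used a tertiary mixture and a D-optimal design, and the numbers below are made up). Assuming a quadratic Scheffé mixture model, three measured blends determine the model exactly, and a scan locates the fastest-disintegrating composition.

```python
def fit_scheffe_binary(y_at_0, y_at_half, y_at_1):
    """Fit the quadratic Scheffé mixture model y = b1*x1 + b2*x2 + b12*x1*x2
    (with x2 = 1 - x1) through responses measured at x1 = 0, 0.5 and 1."""
    b2 = y_at_0                                   # x1 = 0:   y = b2
    b1 = y_at_1                                   # x1 = 1:   y = b1
    b12 = 4.0 * (y_at_half - 0.5 * b1 - 0.5 * b2) # x1 = 0.5: y = (b1+b2)/2 + b12/4
    return b1, b2, b12

def best_blend(b1, b2, b12, n=1000):
    """Scan compositions x1 in [0, 1] for the minimum predicted response."""
    pts = [i / n for i in range(n + 1)]
    return min(pts, key=lambda x1: b1 * x1 + b2 * (1 - x1) + b12 * x1 * (1 - x1))
```

For example, made-up disintegration times of 105 s (pure component 2), 40 s (50:50 blend) and 60 s (pure component 1) give a negative interaction term b12, i.e. a synergistic blend that disintegrates faster than either pure film.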

  3. Multivariate Predictors of Music Perception and Appraisal by Adult Cochlear Implant Users

    PubMed Central

    Gfeller, Kate; Oleson, Jacob; Knutson, John F.; Breheny, Patrick; Driscoll, Virginia; Olszewski, Carol

    2009-01-01

    The research examined whether performance by adult cochlear implant recipients on a variety of recognition and appraisal tests derived from real-world music could be predicted from technological, demographic, and life experience variables, as well as speech recognition scores. A representative sample of 209 adults implanted between 1985 and 2006 participated. Using multiple linear regression models and generalized linear mixed models, sets of optimal predictor variables were selected that effectively predicted performance on a test battery that assessed different aspects of music listening. These analyses established the importance of distinguishing between the accuracy of music perception and the appraisal of musical stimuli when using music listening as an index of implant success. Importantly, neither device type nor processing strategy predicted music perception or music appraisal. Speech recognition performance was not a strong predictor of music perception, and primarily predicted music perception when the test stimuli included lyrics. Additionally, limitations in the utility of speech perception in predicting musical perception and appraisal underscore the utility of music perception as an alternative outcome measure for evaluating implant outcomes. Music listening background, residual hearing (i.e., hearing aid use), cognitive factors, and some demographic factors predicted several indices of perceptual accuracy or appraisal of music. PMID:18669126

  4. Real-time CT-video registration for continuous endoscopic guidance

    NASA Astrophysics Data System (ADS)

    Merritt, Scott A.; Rai, Lav; Higgins, William E.

    2006-03-01

    Previous research has shown that CT-image-based guidance could be useful for the bronchoscopic assessment of lung cancer. This research drew upon the registration of bronchoscopic video images to CT-based endoluminal renderings of the airway tree. The proposed methods either were restricted to discrete single-frame registration, which took several seconds to complete, or required non-real-time buffering and processing of video sequences. We have devised a fast 2D/3D image registration method that performs single-frame CT-video registration in under 1/15th of a second. This allows the method to be used for real-time registration at full video frame rates without significantly altering the physician's behavior. The method achieves its speed through a gradient-based optimization method that allows most of the computation to be performed off-line. During live registration, the optimization iteratively steps toward the locally optimal viewpoint at which a CT-based endoluminal view is most similar to the current bronchoscopic video frame. After an initial registration to begin the process (generally done in the trachea for bronchoscopy), subsequent registrations are performed in real time on each incoming video frame. As each new bronchoscopic video frame becomes available, the current optimization is initialized using the previous frame's optimization result, allowing continuous guidance to proceed without manual re-initialization. Tests were performed using both synthetic and pre-recorded bronchoscopic video. The results show that the method is robust to initialization errors, that registration accuracy is high, and that continuous registration can proceed on real-time video at >15 frames per second with minimal user intervention.

  5. An image based method for crop yield prediction using remotely sensed and crop canopy data: the case of Paphos district, western Cyprus

    NASA Astrophysics Data System (ADS)

    Papadavid, G.; Hadjimitsis, D.

    2014-08-01

    The development of remote sensing techniques has provided the opportunity to optimize yields in agricultural practice and, moreover, to predict the forthcoming yield. Yield prediction plays a vital role in agricultural policy and provides useful data to policy makers. In this context, crop and soil parameters, along with the NDVI index, which are valuable sources of information, were analysed statistically to test (a) whether Durum wheat yield can be predicted and (b) what the actual time window is for predicting the yield in the district of Paphos, where Durum wheat is the basic cultivation and supports the rural economy of the area. Fifteen plots cultivated with Durum wheat by the Agricultural Research Institute of Cyprus for research purposes, in the area of interest, were under observation for three years to derive the necessary data. Statistical and remote sensing techniques were then applied to derive and map a model that can predict the yield of Durum wheat in this area. Indeed, the semi-empirical model developed for this purpose, with a very high correlation coefficient (R2 = 0.886), has shown in practice that it can predict yields very well. Student's t-test revealed no statistically significant difference between predicted and actual yield values. The developed model can and will be further elaborated with more parameters and applied to other crops in the near future.
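    The statistical core of such a semi-empirical model is an ordinary least-squares fit of yield against NDVI with a coefficient of determination. A minimal sketch with made-up plot data (not the Paphos measurements):

```python
def linfit(x, y):
    """Ordinary least squares y = a + b*x; returns (a, b, R^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((v - mx) ** 2 for v in x)
    sxy = sum((v - mx) * (w - my) for v, w in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((w - (a + b * v)) ** 2 for v, w in zip(x, y))
    ss_tot = sum((w - my) ** 2 for w in y)
    return a, b, 1.0 - ss_res / ss_tot
```

With hypothetical per-plot NDVI values and yields (t/ha), the fit returns the intercept, the slope (yield gained per unit NDVI) and an R^2 directly comparable to the paper's reported 0.886.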

  6. Integrated Aeroservoelastic Optimization: Status and Direction

    NASA Technical Reports Server (NTRS)

    Livne, Eli

    1999-01-01

    The interactions of lightweight flexible airframe structures, steady and unsteady aerodynamics, and wide-bandwidth active controls on modern airplanes lead to considerable multidisciplinary design challenges. More than 25 years of development of mathematical and numerical methods, numerous basic research studies, simulations and wind-tunnel tests of simple models, wind-tunnel tests of complex models of real airplanes, as well as flight tests of actively controlled airplanes, have all contributed to the accumulation of a substantial body of knowledge in the area of aeroservoelasticity. A number of analysis codes, with the capability to model real airplane systems under the assumption of linearity, have been developed. Many tests have been conducted, and results were correlated with analytical predictions. A selective sample of references covering aeroservoelastic testing programs from the 1960s to the early 1980s, as well as more recent wind-tunnel test programs of real or realistic configurations, is included in the References section of this paper. An examination of references 20-29 will reveal that in the course of development (or later modification) of almost every modern airplane with a high-authority active control system, there arose a need to face aeroservoelastic problems and aeroservoelastic design challenges.

  7. Challenges in Real-Time Prediction of Infectious Disease: A Case Study of Dengue in Thailand

    PubMed Central

    Lauer, Stephen A.; Sakrejda, Krzysztof; Iamsirithaworn, Sopon; Hinjoy, Soawapak; Suangtho, Paphanij; Suthachana, Suthanun; Clapham, Hannah E.; Salje, Henrik; Cummings, Derek A. T.; Lessler, Justin

    2016-01-01

    Epidemics of communicable diseases place a huge burden on public health infrastructures across the world. Producing accurate and actionable forecasts of infectious disease incidence at short and long time scales will improve public health response to outbreaks. However, scientists and public health officials face many obstacles in trying to create such real-time forecasts of infectious disease incidence. Dengue is a mosquito-borne virus that annually infects over 400 million people worldwide. We developed a real-time forecasting model for dengue hemorrhagic fever in the 77 provinces of Thailand. We created a practical computational infrastructure that generated multi-step predictions of dengue incidence in Thai provinces every two weeks throughout 2014. These predictions show mixed performance across provinces, out-performing seasonal baseline models in over half of provinces at a 1.5 month horizon. Additionally, to assess the degree to which delays in case reporting make long-range prediction a challenging task, we compared the performance of our real-time predictions with predictions made with fully reported data. This paper provides valuable lessons for the implementation of real-time predictions in the context of public health decision making. PMID:27304062

  8. Challenges in Real-Time Prediction of Infectious Disease: A Case Study of Dengue in Thailand.

    PubMed

    Reich, Nicholas G; Lauer, Stephen A; Sakrejda, Krzysztof; Iamsirithaworn, Sopon; Hinjoy, Soawapak; Suangtho, Paphanij; Suthachana, Suthanun; Clapham, Hannah E; Salje, Henrik; Cummings, Derek A T; Lessler, Justin

    2016-06-01

    Epidemics of communicable diseases place a huge burden on public health infrastructures across the world. Producing accurate and actionable forecasts of infectious disease incidence at short and long time scales will improve public health response to outbreaks. However, scientists and public health officials face many obstacles in trying to create such real-time forecasts of infectious disease incidence. Dengue is a mosquito-borne virus that annually infects over 400 million people worldwide. We developed a real-time forecasting model for dengue hemorrhagic fever in the 77 provinces of Thailand. We created a practical computational infrastructure that generated multi-step predictions of dengue incidence in Thai provinces every two weeks throughout 2014. These predictions show mixed performance across provinces, out-performing seasonal baseline models in over half of provinces at a 1.5 month horizon. Additionally, to assess the degree to which delays in case reporting make long-range prediction a challenging task, we compared the performance of our real-time predictions with predictions made with fully reported data. This paper provides valuable lessons for the implementation of real-time predictions in the context of public health decision making.

  9. Multiobjective optimization of combinatorial libraries.

    PubMed

    Agrafiotis, D K

    2002-01-01

    Combinatorial chemistry and high-throughput screening have caused a fundamental shift in the way chemists contemplate experiments. Designing a combinatorial library is a controversial art that involves a heterogeneous mix of chemistry, mathematics, economics, experience, and intuition. Although there seems to be little agreement as to what constitutes an ideal library, one thing is certain: the quality of a design is seldom defined by a single property or measure. In most real-world applications, a good experiment requires the simultaneous optimization of several, often conflicting, design objectives, some of which may be vague and uncertain. In this paper, we discuss a class of algorithms for subset selection rooted in the principles of multiobjective optimization. Our approach is to employ an objective function that encodes all of the desired selection criteria, and then use a simulated annealing or evolutionary approach to identify the optimal (or a nearly optimal) subset from among the vast number of possibilities. Many design criteria can be accommodated, including diversity, similarity to known actives, predicted activity and/or selectivity determined by quantitative structure-activity relationship (QSAR) models or receptor binding models, enforcement of certain property distributions, reagent cost and availability, and many others. The method is robust, convergent, and extensible, offers the user full control over the relative significance of the various objectives in the final design, and permits the simultaneous selection of compounds from multiple libraries in full- or sparse-array format.

  10. Detection of MDR1 mRNA expression with optimized gold nanoparticle beacon

    NASA Astrophysics Data System (ADS)

    Zhou, Qiumei; Qian, Zhiyu; Gu, Yueqing

    2016-03-01

    MDR1 (multidrug resistance gene) mRNA expression is a promising biomarker for the prediction of doxorubicin resistance in the clinic. However, the traditional clinical workflow is complicated and cannot perform real-time detection of mRNA in living single cells. In this study, the expression of MDR1 mRNA in tumor cells was analyzed with an optimized gold nanoparticle beacon. First, gold nanoparticles (AuNPs) were modified with thiol-PEG, and the MDR1 beacon sequence was screened and optimized using a BLAST bioinformatics strategy. The optimized MDR1 molecular beacons were then characterized by transmission electron microscopy and by UV-vis and fluorescence spectroscopies. The cytotoxicity of the MDR1 molecular beacon toward L-02, K562, and K562/Adr cells was investigated by MTT assay, which suggested that the beacon has low inherent cytotoxicity. Dark-field microscopy was used to investigate the ultrasound-assisted cellular uptake of the hDAuNP beacon. Finally, laser scanning confocal microscopy showed a significant difference in MDR1 mRNA expression between K562 and K562/Adr cells, consistent with the results of q-PCR measurement. In summary, the optimized MDR1 molecular beacon designed in this study is a reliable strategy for detecting MDR1 mRNA expression in living tumor cells and a promising tool for guiding patient treatment and management in individualized medication.

  11. Selecting the minimum prediction base of historical data to perform 5-year predictions of the cancer burden: The GoF-optimal method.

    PubMed

    Valls, Joan; Castellà, Gerard; Dyba, Tadeusz; Clèries, Ramon

    2015-06-01

    Predicting the future burden of cancer is a key issue for health services planning, where selecting the predictive model and the prediction base is a challenge. A method, named here Goodness-of-Fit optimal (GoF-optimal), is presented to determine the minimum prediction base of historical data needed to perform 5-year predictions of the number of new cancer cases or deaths. An empirical ex-post evaluation exercise was performed for cancer mortality data in Spain and cancer incidence data in Finland using simple linear and log-linear Poisson models. Prediction bases were considered within the periods 1951-2006 in Spain and 1975-2007 in Finland, and predictions were then made for 37 and 33 single years in these periods, respectively. The performance of three fixed prediction bases (the last 5, 10, and 20 years of historical data) was compared to that of the prediction base determined by the GoF-optimal method. The coverage (COV) of the 95% prediction interval and the discrepancy ratio (DR) were calculated to assess the success of the predictions. The results showed that (i) models using the prediction base selected by the GoF-optimal method reached the highest COV and the lowest DR, and (ii) the best alternative to the GoF-optimal strategy was the one using the 5-year prediction base. The GoF-optimal approach can be used as a selection criterion to find an adequate prediction base. Copyright © 2015 Elsevier Ltd. All rights reserved.
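    The base-selection idea can be sketched as a goodness-of-fit comparison of log-linear fits over candidate base lengths; the series and the selection rule below are simplified stand-ins for the paper's GoF-optimal procedure, not its exact implementation:

```python
import numpy as np

def gof_optimal_base(years, counts, candidates=(5, 10, 15, 20)):
    """Pick the prediction-base length whose log-linear fit has the smallest
    mean squared residual -- a simplified stand-in for the GoF-optimal idea."""
    best_k, best_mse = None, np.inf
    for k in candidates:
        y, c = years[-k:], np.log(counts[-k:])
        slope, intercept = np.polyfit(y, c, 1)
        mse = float(np.mean((c - (slope * y + intercept)) ** 2))
        if mse < best_mse:
            best_k, best_mse = k, mse
    return best_k

def predict_5y(years, counts, k):
    """Extrapolate a log-linear trend fitted on the last k years of data."""
    y, c = years[-k:], np.log(counts[-k:])
    slope, intercept = np.polyfit(y, c, 1)
    future = np.arange(years[-1] + 1, years[-1] + 6)
    return np.exp(slope * future + intercept)

# Synthetic incidence series: flat until 2000, then growing ~3% per year,
# so a short recent base wins the goodness-of-fit comparison.
years = np.arange(1980, 2008)
counts = np.where(years < 2000, 1000.0, 1000.0 * 1.03 ** (years - 2000))
k = gof_optimal_base(years, counts)
forecast = predict_5y(years, counts, k)
```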

  12. Research on Optimization of GLCM Parameter in Cell Classification

    NASA Astrophysics Data System (ADS)

    Zhang, Xi-Kun; Hou, Jie; Hu, Xin-Hua

    2016-05-01

    Real-time classification of biological cells according to their 3D morphology is highly desired in a flow cytometer setting. The gray-level co-occurrence matrix (GLCM) algorithm has been developed to extract feature parameters from measured diffraction images, but its computational load is too heavy for a real-time system. An optimization of the GLCM algorithm based on correlation analysis of the GLCM parameters is presented. The results of GLCM analysis and subsequent classification demonstrate that the optimized method lowers the time complexity significantly without loss of classification accuracy.
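    A minimal numpy sketch of GLCM feature extraction followed by correlation-based pruning of redundant parameters; the feature set and threshold are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset, normalized to sum 1."""
    q = np.minimum((img * levels).astype(int), levels - 1)  # quantize a [0,1) image
    m = np.zeros((levels, levels))
    h, w = q.shape
    for yy in range(h - dy):
        for xx in range(w - dx):
            m[q[yy, xx], q[yy + dy, xx + dx]] += 1
    return m / m.sum()

def glcm_features(p):
    """Four classic GLCM texture parameters."""
    i, j = np.indices(p.shape)
    nz = p[p > 0]
    return {
        "contrast": float(np.sum(p * (i - j) ** 2)),
        "energy": float(np.sum(p ** 2)),
        "homogeneity": float(np.sum(p / (1.0 + np.abs(i - j)))),
        "entropy": float(-np.sum(nz * np.log(nz))),
    }

def prune_correlated(feature_matrix, names, threshold=0.9):
    """Keep a feature only if it is not strongly correlated with one already
    kept -- the kind of correlation analysis used to shrink the parameter set."""
    corr = np.corrcoef(feature_matrix.T)
    keep = []
    for idx in range(len(names)):
        if all(abs(corr[idx, k]) < threshold for k in keep):
            keep.append(idx)
    return [names[k] for k in keep]

# Demo on random textures (stand-ins for measured diffraction images).
rng = np.random.default_rng(0)
names = ["contrast", "energy", "homogeneity", "entropy"]
batch = np.array([[glcm_features(glcm(rng.random((16, 16))))[n] for n in names]
                  for _ in range(20)])
kept = prune_correlated(batch, names)
```

    Dropping strongly correlated parameters shrinks the feature vector, which is one way the time complexity of the downstream classification can fall without losing discriminative information.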

  13. Rational positive real approximations for LQG optimal compensators arising in active stabilization of flexible structures

    NASA Technical Reports Server (NTRS)

    Desantis, A.

    1994-01-01

    In this paper the approximation problem for a class of optimal compensators for flexible structures is considered. The particular case of a simply supported truss with an offset antenna is dealt with. The nonrational positive real optimal compensator transfer function is determined, and an approximation scheme based on a continued fraction expansion method is proposed. Comparison with the more popular modal expansion technique is performed in terms of the stability margin and parameter sensitivity of the respective approximated closed-loop transfer functions.

  14. Real-time stylistic prediction for whole-body human motions.

    PubMed

    Matsubara, Takamitsu; Hyon, Sang-Ho; Morimoto, Jun

    2012-01-01

    The ability to predict human motion is crucial in several contexts such as human tracking by computer vision and the synthesis of human-like computer graphics. Previous work has focused on off-line processes with well-segmented data; however, many applications such as robotics require real-time control with efficient computation. In this paper, we propose a novel approach called real-time stylistic prediction for whole-body human motions to satisfy these requirements. This approach uses a novel generative model to represent a whole-body human motion including rhythmic motion (e.g., walking) and discrete motion (e.g., jumping). The generative model is composed of a low-dimensional state (phase) dynamics and a two-factor observation model, allowing it to capture the diversity of motion styles in humans. A real-time adaptation algorithm was derived to estimate both state variables and style parameter of the model from non-stationary unlabeled sequential observations. Moreover, with a simple modification, the algorithm allows real-time adaptation even from incomplete (partial) observations. Based on the estimated state and style, a future motion sequence can be accurately predicted. In our implementation, it takes less than 15 ms for both adaptation and prediction at each observation. Our real-time stylistic prediction was evaluated for human walking, running, and jumping behaviors. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. Real-time validation of receiver state information in optical space-time block code systems.

    PubMed

    Alamia, John; Kurzweg, Timothy

    2014-06-15

    Free space optical interconnect (FSOI) systems are a promising solution to interconnect bottlenecks in high-speed systems. To overcome some sources of diminished FSOI performance caused by close proximity of multiple optical channels, multiple-input multiple-output (MIMO) systems implementing encoding schemes such as space-time block coding (STBC) have been developed. These schemes utilize information pertaining to the optical channel to reconstruct transmitted data. The STBC system is dependent on accurate channel state information (CSI) for optimal system performance. As a result of dynamic changes in optical channels, a system in operation will need to have updated CSI. Therefore, validation of the CSI during operation is a necessary tool to ensure FSOI systems operate efficiently. In this Letter, we demonstrate a method of validating CSI, in real time, through the use of moving averages of the maximum likelihood decoder data, and its capacity to predict the bit error rate (BER) of the system.
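    A minimal sketch of the moving-average idea (with synthetic decoder metrics; the actual Letter works with maximum-likelihood decoder data from an optical MIMO testbed): smooth a noisy per-frame reliability metric and flag the point where the smoothed value indicates the stored CSI has gone stale.

```python
import numpy as np

def moving_average(x, window):
    """Trailing moving average computed via cumulative sums."""
    c = np.cumsum(np.insert(np.asarray(x, dtype=float), 0, 0.0))
    return (c[window:] - c[:-window]) / window

# Simulated per-frame decoder metric: near 1.0 while the stored CSI matches
# the channel, jumping toward 2.0 once the optical channel has drifted.
rng = np.random.default_rng(1)
metric = np.concatenate([rng.normal(1.0, 0.1, 500),
                         rng.normal(2.0, 0.1, 500)])
smooth = moving_average(metric, 50)
stale_at = int(np.argmax(smooth > 1.5))  # first smoothed sample flagging drift
```

    The window length trades detection latency against false alarms from single-frame noise, mirroring the real-time validation trade-off described in the abstract.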

  16. Manual control models of industrial management

    NASA Technical Reports Server (NTRS)

    Crossman, E. R. F. W.

    1972-01-01

    The industrial engineer is often required to design and implement control systems and organization for manufacturing and service facilities, to optimize quality, delivery, and yield, and minimize cost. Despite progress in computer science most such systems still employ human operators and managers as real-time control elements. Manual control theory should therefore be applicable to at least some aspects of industrial system design and operations. Formulation of adequate model structures is an essential prerequisite to progress in this area; since real-world production systems invariably include multilevel and multiloop control, and are implemented by timeshared human effort. A modular structure incorporating certain new types of functional element, has been developed. This forms the basis for analysis of an industrial process operation. In this case it appears that managerial controllers operate in a discrete predictive mode based on fast time modelling, with sampling interval related to plant dynamics. Successive aggregation causes reduced response bandwidth and hence increased sampling interval as a function of level.

  17. Real time monitoring of water distribution in an operando fuel cell during transient states

    NASA Astrophysics Data System (ADS)

    Martinez, N.; Peng, Z.; Morin, A.; Porcar, L.; Gebel, G.; Lyonnard, S.

    2017-10-01

    The water distribution of an operating proton exchange membrane fuel cell (PEMFC) was monitored in real time by using small-angle neutron scattering (SANS). The formation of liquid water was obtained simultaneously with the evolution of the water content inside the membrane. Measurements were performed while changing the current, with a time resolution of 10 s, providing insights into the kinetics of water management prior to the stationary phase. We confirmed that the water distribution is strongly heterogeneous at the scale of the whole membrane electrode assembly. As already reported, at the local scale there is no straightforward link between the amounts of water present inside and outside the membrane. However, we show that the temporal evolutions of these two parameters are strongly correlated. In particular, the local membrane water content is nearly instantaneously correlated with the total liquid water content, whether it is located at the anode or the cathode side. These results can help in optimizing 3D stationary diphasic models used to predict PEMFC water distribution.

  18. Robotic Billiards: Understanding Humans in Order to Counter Them.

    PubMed

    Nierhoff, Thomas; Leibrandt, Konrad; Lorenz, Tamara; Hirche, Sandra

    2016-08-01

    Ongoing technological advances in the areas of computation, sensing, and mechatronics enable robotic-based systems to interact with humans in the real world. To succeed against a human in a competitive scenario, a robot must anticipate the human behavior and include it in its own planning framework. Then it can predict the next human move and counter it accordingly, thus not only achieving overall better performance but also systematically exploiting the opponent's weak spots. Pool is used as a representative scenario to derive a model-based planning and control framework where not only the physics of the environment but also a model of the opponent is considered. By representing the game of pool as a Markov decision process and incorporating a model of the human decision-making based on studies, an optimized policy is derived. This enables the robot to include the opponent's typical game style into its tactical considerations when planning a stroke. The results are validated in simulations and real-life experiments with an anthropomorphic robot playing pool against a human.
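    The planning idea, solving a Markov decision process whose transition model folds in the opponent's behavior, can be sketched at toy scale with value iteration. All numbers below (state count, rewards, transitions) are invented for illustration:

```python
import numpy as np

# Tiny illustrative MDP standing in for the pool planning problem: states are
# abstract table situations, actions are stroke choices, and the opponent
# model is assumed to be folded into the transition probabilities.
n_states, n_actions, gamma = 4, 2, 0.9
rng = np.random.default_rng(7)
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)      # P[a, s, s'] is row-stochastic
R = rng.random((n_actions, n_states))  # expected reward for action a in state s

V = np.zeros(n_states)
for _ in range(300):                   # value iteration
    Q = R + gamma * (P @ V)            # Q[a, s] = R[a, s] + gamma * E[V(s')]
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-12:
        break
    V = V_new
policy = Q.argmax(axis=0)              # greedy stroke choice per state
```

    Modeling the opponent shifts the transition probabilities, so the same machinery that plans strokes also exploits the opponent's predicted weaknesses.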

  19. Spectral analysis method and sample generation for real time visualization of speech

    NASA Astrophysics Data System (ADS)

    Hobohm, Klaus

    A method for translating speech signals into optical models, characterized by high sound discriminability and learnability and designed to give deaf persons feedback for controlling their way of speaking, is presented. Important properties of the speech production and perception processes, and of the organs involved in these mechanisms, are recalled in order to define requirements for speech visualization. It is established that the spectral representation must do justice to the time, frequency, and amplitude resolution of hearing, and that continuous variations of the acoustic parameters of the speech signal must be depicted by continuous variations of the images. A color table was developed for dynamic illustration, and sonograms were generated with five spectral analysis methods, including Fourier transformation and linear predictive coding. To evaluate sonogram quality, test persons had to recognize consonant-vowel-consonant words; an optimized analysis method was achieved with a fast Fourier transformation and a postprocessor. A hardware concept for a real-time speech visualization system, based on multiprocessor technology in a personal computer, is presented.

  20. Springback effects during single point incremental forming: Optimization of the tool path

    NASA Astrophysics Data System (ADS)

    Giraud-Moreau, Laurence; Belchior, Jérémy; Lafon, Pascal; Lotoing, Lionel; Cherouat, Abel; Courtielle, Eric; Guines, Dominique; Maurine, Patrick

    2018-05-01

    Incremental sheet forming is an emerging process for manufacturing sheet metal parts. This process is more flexible than conventional ones and is well suited for small-batch production or prototyping. During the process, the sheet metal blank is clamped by a blank-holder and a small smooth-ended hemispherical tool moves along a user-specified path to deform the sheet incrementally. Classical three-axis CNC milling machines, dedicated structures, or serial robots can be used to perform the forming operation. Whatever the machine considered, large deviations between the theoretical shape and the real shape can be observed after unclamping the part. These deviations are due both to the lack of stiffness of the machine and to residual stresses in the part at the end of the forming stage. In this paper, an optimization strategy for the tool path is proposed in order to minimize the elastic springback induced by residual stresses after unclamping. A finite element model of the SPIF process, allowing shape prediction of the formed part with good accuracy, is defined. This model, based on appropriate assumptions, leads to calculation times that remain compatible with an optimization procedure. The proposed optimization method is based on an iterative correction of the tool path. The efficiency of the method is shown by the improvement of the final shape.
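    The iterative tool-path correction can be sketched as a fixed-point loop; the forming simulation here is a hypothetical linear springback model standing in for the paper's finite element model:

```python
import numpy as np

def correct_tool_path(target, forming_simulation, iters=8):
    """Iteratively over-bend the commanded tool path: after each simulated
    forming step, the springback deviation is mirrored back onto the command
    (a simple fixed-point version of the iterative correction idea)."""
    command = target.copy()
    for _ in range(iters):
        formed = forming_simulation(command)
        command = command + (target - formed)  # push opposite to the deviation
    return command

# Hypothetical springback model: the part relaxes to 90% of the commanded depth.
springback = lambda cmd: 0.9 * cmd
target = np.linspace(0.0, 10.0, 11)  # desired profile depths (mm)
command = correct_tool_path(target, springback)
final = springback(command)          # shape after springback, near the target
```

    Each pass over-bends the command by the residual deviation, so the loop converges as long as the springback response is reasonably stable between iterations.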

  1. Influence of model errors in optimal sensor placement

    NASA Astrophysics Data System (ADS)

    Vincenzi, Loris; Simonini, Laura

    2017-02-01

    The paper investigates the role of model errors and parametric uncertainties in optimal or near-optimal sensor placement for structural health monitoring (SHM) and modal testing. The near-optimal set of measurement locations is obtained by Information Entropy theory; the results of the placement process depend considerably on the so-called covariance matrix of prediction error as well as on the definition of the correlation function. A constant and an exponential correlation function depending on the distance between sensors are first assumed; then a proposal depending on both distance and modal vectors is presented. With reference to a simple case study, the effect of model uncertainties on the results is described, and the reliability and robustness of the proposed correlation function in the presence of model errors are tested on 2D and 3D benchmark case studies. A measure of the quality of the obtained sensor configuration is considered through the use of independent assessment criteria. Finally, the results obtained by applying the proposed procedure to a real 5-span steel footbridge are described. The proposed method also allows higher modes to be estimated more accurately when the number of sensors is greater than the number of modes of interest. In addition, the results show a smaller variation in the sensor positions when uncertainties occur.
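    A common computational shape for such placement problems is a greedy search over candidate locations, scoring each candidate set by its Fisher information determinant. The sketch below uses that surrogate with invented beam mode shapes; it is not the paper's exact information-entropy formulation:

```python
import numpy as np

def greedy_sensor_placement(mode_shapes, n_sensors):
    """Greedily add the sensor location that maximizes det(Phi^T Phi), the
    Fisher information determinant -- a common surrogate for the information
    entropy criterion in optimal sensor placement."""
    n_locs, n_modes = mode_shapes.shape
    chosen = []
    for _ in range(n_sensors):
        best_loc, best_det = None, -np.inf
        for loc in range(n_locs):
            if loc in chosen:
                continue
            phi = mode_shapes[chosen + [loc], :]
            # Small regularizer keeps the determinant nonzero while the
            # candidate set is still smaller than the number of modes.
            d = np.linalg.det(phi.T @ phi + 1e-12 * np.eye(n_modes))
            if d > best_det:
                best_loc, best_det = loc, d
        chosen.append(best_loc)
    return chosen

# Hypothetical structure: first three mode shapes of a simply supported beam
# sampled at 20 candidate locations.
xs = np.linspace(0.05, 0.95, 20)
modes = np.column_stack([np.sin((m + 1) * np.pi * xs) for m in range(3)])
sensors = greedy_sensor_placement(modes, 3)
```

    Model errors enter this picture through the prediction-error covariance: perturbing `modes` and re-running the selection shows how sensitive the chosen positions are, which is the question the paper studies systematically.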

  2. A heterogeneous artificial stock market model can benefit people against another financial crisis

    PubMed Central

    2018-01-01

    This paper presents results from an artificial stock market and tries to make it more consistent with the statistical features of real stock data. Based on the SFI-ASM, a novel model is proposed to bring agents closer to the real world. Agents are divided into four kinds in terms of different learning speeds, strategy sizes, utility functions, and levels of intelligence, and a crucial parameter has been found to ensure system stability. Some parameters are added so that the model, which contains zero-intelligence and less-intelligent agents, runs stably. Moreover, because real stock markets changed violently during the financial crisis, the real stock data are divided into two segments, before and after the crisis. The optimal modified model before the financial crisis fails to replicate the statistical features of the real market after the crisis; the optimal model after the financial crisis is then shown. The experiments indicate that the optimal model after the financial crisis is able to replicate several real-market phenomena, including the first-order autocorrelation, kurtosis, and standard deviation of the yield series and the first-order autocorrelation of the yield square. We point out that there is a structural change in stock markets after the financial crisis, which can help people forecast financial crises. PMID:29912893

  3. A heterogeneous artificial stock market model can benefit people against another financial crisis.

    PubMed

    Yang, Haijun; Chen, Shuheng

    2018-01-01

    This paper presents results from an artificial stock market and tries to make it more consistent with the statistical features of real stock data. Based on the SFI-ASM, a novel model is proposed to bring agents closer to the real world. Agents are divided into four kinds in terms of different learning speeds, strategy sizes, utility functions, and levels of intelligence, and a crucial parameter has been found to ensure system stability. Some parameters are added so that the model, which contains zero-intelligence and less-intelligent agents, runs stably. Moreover, because real stock markets changed violently during the financial crisis, the real stock data are divided into two segments, before and after the crisis. The optimal modified model before the financial crisis fails to replicate the statistical features of the real market after the crisis; the optimal model after the financial crisis is then shown. The experiments indicate that the optimal model after the financial crisis is able to replicate several real-market phenomena, including the first-order autocorrelation, kurtosis, and standard deviation of the yield series and the first-order autocorrelation of the yield square. We point out that there is a structural change in stock markets after the financial crisis, which can help people forecast financial crises.

  4. Optimal design of a lagrangian observing system for hydrodynamic surveys in coastal areas

    NASA Astrophysics Data System (ADS)

    Cucco, Andrea; Quattrocchi, Giovanni; Antognarelli, Fabio; Satta, Andrea; Maicu, Francesco; Ferrarin, Christian; Umgiesser, Georg

    2014-05-01

    The optimization of ocean observing systems is a pressing need for scientific research. In particular, the improvement of ocean short-term observing networks is achievable by reducing the cost-benefit ratio of field campaigns and by increasing the quality of measurements. Numerical modeling is a powerful tool for determining the appropriateness of a specific observing system and for optimizing the sampling design. This is particularly true when observations are carried out in coastal areas and lagoons where, due to the shallowness of the water, the use of satellites is prohibitive. For such areas, numerical models are the most efficient tool both for a preliminary assessment of the local physical environment and for short-term predictions of its change. In this context, a test-case experiment was carried out within an enclosed shallow-water area, the Cabras Lagoon (Sardinia, Italy). The aim of the experiment was to explore the optimal design for a field survey based on the use of coastal lagrangian buoys. A three-dimensional hydrodynamic model based on the finite element method (SHYFEM3D, Umgiesser et al., 2004) was implemented to simulate the lagoon water circulation. The model domain extends over the whole Cabras lagoon and the whole Oristano Gulf, including the surrounding coastal area. Lateral open boundary conditions were provided by the operational ocean model system WMED, and only wind forcing, provided by the SKIRON atmospheric model (Kallos et al., 1997), was considered as a surface boundary condition. The model was applied to provide a number of ad hoc scenarios and to explore the efficiency of the short-term hydrodynamic survey. A first field campaign was carried out to investigate the lagrangian circulation inside the lagoon under the main wind forcing condition (Mistral wind from the north-west).
    The trajectories followed by the lagrangian buoys and the estimated lagrangian velocities were used to calibrate the model parameters and to validate the simulation results. A set of calibration runs was performed and the model accuracy in reproducing the surface circulation was defined. A numerical simulation was then conducted to predict the wind-induced lagoon water circulation and the paths followed by numerical particles inside the lagoon domain. The simulated particle paths were analyzed and the optimal configuration for the buoy deployment was designed in real time. The selected deployment geometry was then tested during a further field campaign. The obtained dataset revealed that the chosen measurement strategy provided a near-synoptic survey with the longest records for the considered observing experiment. This work emphasizes the mutual usefulness of observations and numerical simulations in coastal ocean applications, and it proposes an efficient approach to harmonizing different expertise toward the investigation of a given research issue. References: Cucco, A., Sinerchia, M., Ribotti, A., Olita, A., Fazioli, L., Perilli, A., Sorgente, B., Borghini, M., Schroeder, K., Sorgente, R., 2012. A high-resolution real-time forecasting system for predicting the fate of oil spills in the Strait of Bonifacio (western Mediterranean Sea). Marine Pollution Bulletin 64(6), 1186-1200. Kallos, G., Nickovic, S., Papadopoulos, A., Jovic, D., Kakaliagou, O., Misirlis, N., Boukas, L., Mimikou, N., G., S., J., P., Anadranistakis, E., Manousakis, M., 1997. The regional weather forecasting system Skiron: an overview. In: Proceedings of the Symposium on Regional Weather Prediction on Parallel Computer Environments, Athens, Greece, 109-122. Umgiesser, G., Melaku Canu, D., Cucco, A., Solidoro, C., 2004. A finite element model for the Venice Lagoon: development, set up, calibration and validation. Journal of Marine Systems 51, 123-145.

  5. Optimized positioning of autonomous surgical lamps

    NASA Astrophysics Data System (ADS)

    Teuber, Jörn; Weller, Rene; Kikinis, Ron; Oldhafer, Karl-Jürgen; Lipp, Michael J.; Zachmann, Gabriel

    2017-03-01

    We consider the problem of automatically finding optimal positions for surgical lamps throughout an entire surgical procedure, under the assumption that future lamps could be robotized. We propose a two-tiered optimization technique for the real-time autonomous positioning of such robotized surgical lamps. Typically, finding optimal positions for surgical lamps is a multi-dimensional problem with several, partly conflicting, objectives, such as optimal lighting conditions at every point in time while minimizing the movement of the lamps in order to avoid distracting the surgeon. Consequently, we use multi-objective optimization (MOO) to find optimal positions in real time during the entire surgery. Due to the conflicting objectives, there is usually not a single optimal solution for such problems, but a set of solutions that realizes a Pareto front. When our algorithm selects a solution from this set, it additionally has to consider the individual preferences of the surgeon. This is a highly non-trivial task because the relationship between the solutions and the parameters is not obvious. We have developed a novel meta-optimization that addresses exactly this challenge. It delivers an easy-to-understand set of presets for the parameters and allows a balance between lamp movement and lamp obstruction. This meta-optimization can be pre-computed for different kinds of operations and is then used by our online optimization for the selection of the appropriate Pareto solution. Both optimization approaches use data obtained by a depth camera that captures the surgical site as well as the environment around the operating table. We have evaluated our algorithms with data recorded during a real open abdominal surgery; the data are available for scientific purposes. The results show that our meta-optimization produces viable parameter sets for different parts of an intervention even when trained on a small portion of it.

  6. Affordable and personalized lighting using inverse modeling and virtual sensors

    NASA Astrophysics Data System (ADS)

    Basu, Chandrayee; Chen, Benjamin; Richards, Jacob; Dhinakaran, Aparna; Agogino, Alice; Martin, Rodney

    2014-03-01

    Wireless sensor networks (WSN) have great potential to enable personalized intelligent lighting systems while reducing building energy use by 50%-70%. As a result, WSN systems are increasingly being integrated into state-of-the-art intelligent lighting systems. In the future, these systems will enable the participation of lighting loads as ancillary services. However, such systems can be expensive to install and lack the plug-and-play quality necessary for user-friendly commissioning. In this paper we present an integrated system of wireless sensor platforms and modeling software to enable affordable and user-friendly intelligent lighting. It requires ~60% fewer sensor deployments than current commercial systems. The reduction in sensor deployments is achieved by optimally replacing the actual photo-sensors with real-time discrete predictive inverse models. Spatially sparse and clustered sub-hourly photo-sensor data captured by the WSN platforms are used to develop and validate a piece-wise linear regression of indoor light distribution. This deterministic data-driven model accounts for sky conditions and solar position. The optimal placement of photo-sensors is performed iteratively to achieve the best predictability of the light field needed for indoor lighting control. Using two weeks of daylight and artificial-light training data acquired at the Sustainability Base at NASA Ames, the model was able to predict the light level at seven monitored workstations with 80%-95% accuracy. We estimate that 10% adoption of this intelligent wireless sensor system in commercial buildings could save 0.2-0.25 quads of energy nationwide.
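A piece-wise linear "virtual sensor" of the kind described can be sketched as separate linear fits per solar regime. Everything below is an invented toy version: the regime split on solar elevation, the 30-degree threshold, and the synthetic data are assumptions for illustration, not the authors' model.

```python
import numpy as np

# Hypothetical sketch: predict light at an unsensed workstation from one
# remaining photo-sensor, with separate linear fits for low/high solar
# elevation (a crude stand-in for "sky conditions and solar position").

def fit_piecewise(sensor, target, elevation, threshold=30.0):
    """Fit y = a*x + b separately for elevation below/above threshold."""
    models = {}
    for name, mask in (("low", elevation < threshold), ("high", elevation >= threshold)):
        X = np.column_stack([sensor[mask], np.ones(mask.sum())])
        coef, *_ = np.linalg.lstsq(X, target[mask], rcond=None)
        models[name] = coef
    return models

def predict(models, x, elev, threshold=30.0):
    """Apply the fit for the regime the sun is currently in."""
    a, b = models["low"] if elev < threshold else models["high"]
    return a * x + b

# Synthetic, noise-free training data with regime-dependent gain.
rng = np.random.default_rng(0)
sensor = rng.uniform(100, 800, 200)
elevation = rng.uniform(0, 90, 200)
target = np.where(elevation < 30, 0.5 * sensor + 20, 0.8 * sensor + 5)

models = fit_piecewise(sensor, target, elevation)
```

On this noise-free data the fit recovers the generating coefficients, so `predict(models, 400.0, 10.0)` returns the low-regime value 0.5·400 + 20 = 220.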

  7. High-resolution Modeling Assisted Design of Customized and Individualized Transcranial Direct Current Stimulation Protocols

    PubMed Central

    Bikson, Marom; Rahman, Asif; Datta, Abhishek; Fregni, Felipe; Merabet, Lotfi

    2012-01-01

    Objectives Transcranial direct current stimulation (tDCS) is a neuromodulatory technique that delivers low-intensity currents to facilitate or inhibit spontaneous neuronal activity. tDCS is attractive because dose is readily adjustable simply by changing electrode number, position, size, shape, and current. Recently, computational models of increasing precision have been developed with the goal of helping to customize tDCS dose. The aim of this review is to discuss the incorporation of high-resolution patient-specific computational modeling to guide and optimize tDCS. Methods In this review, we discuss the following topics: (i) the clinical motivation and rationale for models of transcranial stimulation, considered pivotal for leveraging the flexibility of neuromodulation; (ii) the protocols and workflow for developing high-resolution models; (iii) the technical challenges and limitations of interpreting modeling predictions; and (iv) real cases merging modeling and clinical data that illustrate the impact of computational models on the rational design of rehabilitative electrotherapy. Conclusions Though modeling for non-invasive brain stimulation is still in its development phase, it is predicted that with increased validation, dissemination, simplification, and democratization of modeling tools, computational forward models of neuromodulation will become useful tools to guide the optimization of clinical electrotherapy. PMID:22780230

  8. Integrated modeling approach for optimal management of water, energy and food security nexus

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaodong; Vesselinov, Velimir V.

    2017-03-01

    Water, energy and food (WEF) are inextricably interrelated. Effective planning and management of limited WEF resources to meet current and future socioeconomic demands for sustainable development is challenging. WEF production and delivery may also produce environmental impacts; as a result, greenhouse-gas emission control will affect WEF nexus management as well. Nexus management for WEF security necessitates integrated tools for predictive analysis that are capable of identifying the tradeoffs among the various sectors and generating cost-effective planning and management strategies and policies. To address these needs, we have developed an integrated model analysis framework and tool called WEFO. WEFO provides a multi-period socioeconomic model for predicting how to satisfy WEF demands based on model inputs representing production costs, socioeconomic demands, and environmental controls. WEFO is applied to quantitatively analyze the interrelationships and trade-offs among system components, including energy supply, electricity generation, water supply and demand, and food production, as well as mitigation of environmental impacts. WEFO is demonstrated on a hypothetical nexus management problem consistent with real-world management scenarios. Model parameters are analyzed using global sensitivity analysis, and their effects on total system cost are quantified. The results demonstrate how such analyses can help decision-makers and stakeholders make cost-effective decisions for optimal WEF management.

  9. Real Estate Site Selection: An Application of Artificial Intelligence for Military Retail Facilities

    DTIC Science & Technology

    2006-09-01

    Information and Spatial Analysis (SCGISA), University of Sheffield. Kotler, P. (1984). Marketing Management: Analysis, Planning, and Control... Spatial Distribution of Retail Sales. Journal of Real Estate Finance and Economics, Vol. 31, Iss. 1, 53. Lilien, G., & Kotler, P. (1983). Marketing... commissaries). The current business model for military retail facilities may not be optimized based upon current market trend data. Optimizing

  10. Tabu Search enhances network robustness under targeted attacks

    NASA Astrophysics Data System (ADS)

    Sun, Shi-wen; Ma, Yi-lin; Li, Rui-qi; Wang, Li; Xia, Cheng-yi

    2016-03-01

    We focus on the optimization of network robustness with respect to intentional attacks on high-degree nodes. Given an existing network, this problem can be considered a typical single-objective combinatorial optimization problem. Based on the heuristic Tabu Search optimization algorithm, a link-rewiring method is applied to reconstruct the network while keeping the degree of every node unchanged. Through numerical simulations, a BA scale-free network and two real-world networks are investigated to verify the effectiveness of the proposed optimization method. We also analyze how the optimization affects other topological properties of the networks, including natural connectivity, clustering coefficient, and degree-degree correlation. The results can help improve the robustness of existing complex real-world systems and provide insights into the design of robust networks.
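The degree-preserving rewiring move at the heart of this approach is a double-edge swap: replace edges (a,b) and (c,d) with (a,d) and (c,b), which leaves every node's degree unchanged. The sketch below shows only that move on an invented toy graph; the Tabu acceptance list and the robustness objective from the paper are omitted.

```python
import random

# Illustrative degree-preserving double-edge swap (the inner move of the
# Tabu Search); self-loops and multi-edges are rejected.

def degree_sequence(edges):
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return deg

def rewire_once(edges, rng):
    """Try one swap (a,b),(c,d) -> (a,d),(c,b); return the new edge set."""
    edge_list = list(edges)
    (a, b), (c, d) = rng.sample(edge_list, 2)
    current = {frozenset(e) for e in edge_list}
    if a == d or c == b:                              # would create self-loops
        return current
    new1, new2 = frozenset((a, d)), frozenset((c, b))
    if new1 in current or new2 in current:            # would create multi-edges
        return current
    return (current - {frozenset((a, b)), frozenset((c, d))}) | {new1, new2}

rng = random.Random(42)
edges = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)}
before = degree_sequence(edges)
after_edges = rewire_once(edges, rng)
after = degree_sequence(tuple(sorted(e)) for e in after_edges)
```

Whether or not a particular proposal is accepted, the edge count and the full degree sequence are invariant, which is exactly what lets the optimization explore robustness without altering the degree distribution.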

  11. A real-time ocean reanalyses intercomparison project in the context of tropical pacific observing system and ENSO monitoring

    NASA Astrophysics Data System (ADS)

    Xue, Yan; Wen, C.; Kumar, A.; Balmaseda, M.; Fujii, Y.; Alves, O.; Martin, M.; Yang, X.; Vernieres, G.; Desportes, C.; Lee, T.; Ascione, I.; Gudgel, R.; Ishikawa, I.

    2017-12-01

    An ensemble of nine operational ocean reanalyses (ORAs) is now routinely collected and used to monitor the consistency across tropical Pacific temperature analyses in real time in support of ENSO monitoring, diagnostics, and prediction. The ensemble approach allows a more reliable estimate of the signal as well as an estimate of the noise among analyses. The real-time estimation of the signal-to-noise ratio assists the prediction of ENSO. The ensemble approach also enables us to estimate the impact of the Tropical Pacific Observing System (TPOS) on the estimation of ENSO-related oceanic indicators. The ensemble mean is shown to be more accurate than the individual ORAs, suggesting that the ensemble approach is an effective tool for reducing uncertainties in temperature analyses for ENSO. The ensemble spread, as a measure of uncertainty in the ORAs, is shown to be partially linked to the counts of in situ observations. Despite the constraints provided by TPOS data, uncertainties in the ORAs remain large in the northwestern tropical Pacific, in the SPCZ region, and in the central and northeastern tropical Pacific. The uncertainties in total temperature decreased significantly in 2015, approaching their pre-2012 values, owing to the recovery of the TAO/TRITON array after the TAO crisis of 2012. However, the uncertainties in anomalous temperature remained much higher than their pre-2012 values, probably because of uncertainties in the reference climatology. This highlights the importance of the long-term stability of the observing system for anomaly monitoring. Current data assimilation systems tend to constrain the solution very locally near the buoy sites, potentially damaging larger-scale dynamical consistency. There is therefore an urgent need to improve data assimilation systems so that they can optimize the observational information from TPOS and contribute to improved ENSO prediction.

  12. Multi-objective optimal design of magnetorheological engine mount based on an improved non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Ling; Duan, Xuwei; Deng, Zhaoxue; Li, Yinong

    2014-03-01

    A novel flow-mode magnetorheological (MR) engine mount integrating a diaphragm de-coupler and a spoiler plate is designed and developed to isolate the engine and transmission from the chassis over a wide frequency range and to overcome stiffening at high frequencies. A lumped-parameter model of the MR engine mount in a single-degree-of-freedom system is further developed, based on the bond graph method, to accurately predict the performance of the MR engine mount. An optimization model is established to minimize the total force transmissibility over several frequency ranges of interest. In this model, the lumped parameters are taken as design variables. The maximum force transmissibility and its corresponding frequency in the low-frequency range, as well as the individual lumped parameters, are imposed as constraints. A multiple-interval sensitivity analysis method is developed to select the optimization variables and improve the efficiency of the optimization process. An improved non-dominated sorting genetic algorithm (NSGA-II) is used to solve the multi-objective optimization problem. A synthesized distance between individuals in the Pareto set and individuals in the engineering-feasible set is defined and calculated. A set of real design parameters is thus obtained through the relationship between the optimal lumped parameters and the practical design parameters of the MR engine mount. A program flowchart for the improved NSGA-II is given. The results demonstrate the effectiveness of the proposed optimization approach in minimizing the total force transmissibility over the frequency ranges of interest.
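The sorting step that gives NSGA-II its name partitions a population into successive Pareto fronts. The sketch below shows that step alone, on invented two-objective points (e.g. transmissibility over two frequency bands); the crowding distance, genetic operators, and the paper's improvements are not reproduced.

```python
# Illustrative non-dominated sorting (the core ranking step of NSGA-II),
# for minimization objectives; point values are invented.

def dominates(p, q):
    """p dominates q: no worse in every objective, better in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def non_dominated_sort(points):
    """Partition point indices into Pareto fronts F1, F2, ..."""
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

pts = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
fronts = non_dominated_sort(pts)   # front 1: indices 0, 1, 2
```

This naive repeated filtering is O(n^3) in the worst case; the "fast non-dominated sort" used in practice achieves O(n^2) per generation with the same result.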

  13. Exploring the Combination of Dempster-Shafer Theory and Neural Network for Predicting Trust and Distrust

    PubMed Central

    Wang, Xin; Wang, Ying; Sun, Hongbin

    2016-01-01

    In social media, trust and distrust among users are important factors that help users make decisions, dissect information, and receive recommendations. However, the sparsity and imbalance of social relations pose great difficulties and challenges for predicting trust and distrust. Meanwhile, numerous inducing factors determine trust and distrust relations, and the relationships among these factors may be dependent, independent, or conflicting. Dempster-Shafer theory and neural networks are effective and efficient strategies for dealing with these difficulties and challenges. In this paper, we study trust and distrust prediction based on the combination of Dempster-Shafer theory and a neural network. We first analyze the factors inducing trust and distrust, namely homophily, status theory, and emotion tendency. Then, we quantify these inducing factors, take the resulting features as evidence, and construct evidence prototypes as the input nodes of a multilayer neural network. Finally, we propose a framework for predicting trust and distrust that uses a multilayer neural network to model the implementation of Dempster-Shafer theory in different hidden layers, aiming to overcome Dempster-Shafer theory's lack of an optimization method. Experimental results on a real-world dataset demonstrate the effectiveness of the proposed framework. PMID:27034651
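For readers unfamiliar with Dempster-Shafer theory, the basic operation is Dempster's rule of combination, which fuses two bodies of evidence over a frame of discernment. The sketch below applies it to the frame {trust, distrust}; the mass values attributed to "homophily" and "status" evidence are invented for illustration and are not the paper's learned quantities.

```python
from itertools import product

# Dempster's rule of combination over the frame {trust, distrust};
# THETA is the whole frame, representing uncertainty.

THETA = frozenset({"trust", "distrust"})

def combine(m1, m2):
    """m(A) is proportional to sum of m1(B)*m2(C) over B & C == A (A nonempty),
    renormalized by 1 - K, where K is the total conflicting mass."""
    raw, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            raw[inter] = raw.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc
    k = 1.0 - conflict
    return {a: v / k for a, v in raw.items()}

T, D = frozenset({"trust"}), frozenset({"distrust"})
homophily = {T: 0.6, D: 0.1, THETA: 0.3}   # invented evidence masses
status    = {T: 0.5, D: 0.2, THETA: 0.3}   # invented evidence masses
fused = combine(homophily, status)
```

After combination the fused masses again sum to one, and agreement between the two sources sharpens the belief in "trust" relative to either source alone.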

  14. Co-optimal distribution of leaf nitrogen and hydraulic conductance in plant canopies.

    PubMed

    Peltoniemi, Mikko S; Duursma, Remko A; Medlyn, Belinda E

    2012-05-01

    Leaf properties vary significantly within plant canopies, due to the strong gradient in light availability through the canopy and the need for plants to use resources efficiently. At high light, photosynthesis is maximized when leaves have a high nitrogen content and water supply, whereas at low light leaves require less of both nitrogen and water. Studies of the distribution of leaf nitrogen (N) within canopies have shown that, if water supply is ignored, the optimal distribution is that in which N is proportional to light, but that the N gradient in real canopies is shallower than this optimum. We extend this work by considering the optimal co-allocation of nitrogen and water supply within plant canopies. We developed a simple 'toy' two-leaf canopy model and optimized the distribution of N and hydraulic conductance (K) between the two leaves. We asked whether hydraulic constraints on water supply can explain shallow N gradients in canopies. We found that the optimal N distribution within plant canopies is proportional to the light distribution only if hydraulic conductance, K, is also optimally distributed. The optimal distribution of K is that in which K and N are both proportional to incident light, such that optimal K is highest in the upper canopy. If the plant is constrained in its ability to construct higher K to sun-exposed leaves, the optimal N distribution does not follow the light gradient within the canopy but instead follows a shallower gradient. We therefore hypothesize that measured deviations from the predicted optimal N distribution could be explained by constraints on the distribution of K within canopies. Further empirical research is required on the extent to which plants can construct optimal K distributions, and on whether shallow within-canopy N distributions can be explained by sub-optimal K distributions.

  15. Bus-stop Based Real Time Passenger Information System - Case Study Maribor

    NASA Astrophysics Data System (ADS)

    Čelan, Marko; Klemenčič, Mitja; Mrgole, Anamarija L.; Lep, Marjan

    2017-10-01

    A real-time passenger information system is one of the key elements in promoting public transport. For the successful implementation of real-time passenger information systems, various components should be considered, such as passenger needs and requirements, stakeholder involvement, technological solutions for tracking, data transfer, etc. This article describes the design and evaluation of real-time passenger information (RTPI) in the city of Maribor. The design phase included the development of a methodology for selecting appropriate macro and micro locations for the real-time panels, the development of a real-time passenger algorithm, the definition of technical specifications, financial issues, and a time frame. The evaluation shows that different people have different requirements; therefore, the system should be adaptable to various types of users according to age, journey purpose, experience with public transport, etc. Passengers' perceived waiting time for a bus is on average 35% higher than the actual waiting time, and the difference grows as the headway increases. Experience from Maribor has shown that the reliability of a real-time passenger system (from a technical point of view) must be close to 100%; otherwise, the system may have a negative impact on passengers and may discourage the use of public transport. Among the arrival events considered during the test period, 92% of all predictions were accurate. The cost-benefit analysis focused only on the potential benefits from reduced perceived user waiting time and the foreseen costs of the real-time information system in Maribor over a 10-year period. The analysis shows that the optimal deployment of the real-time passenger information system in Maribor covers the 83 bus stops (approx. 20%) with the highest numbers of passengers. Considering all boardings at the chosen bus stops, total perceived waiting time could be decreased by about 60,000 hours per year.

  16. An auxiliary optimization method for complex public transit route network based on link prediction

    NASA Astrophysics Data System (ADS)

    Zhang, Lin; Lu, Jian; Yue, Xianfei; Zhou, Jialin; Li, Yunxuan; Wan, Qian

    2018-02-01

    Inspired by the prediction of missing (new) links and the identification of spurious existing links in link prediction theory, this paper establishes an auxiliary optimization method for public transit route networks (PTRN) based on link prediction. First, the application of link prediction to PTRN is described and, based on a review of previous studies, a set of summary indices and their algorithms is collected for the link prediction experiment. Second, by analyzing the topological properties of Jinan's PTRN established by the Space R method, we find that it is a typical small-world network with a relatively large average clustering coefficient, which indicates that structural-similarity-based link prediction will perform well on this network. Then, based on the link prediction experiment over the collected indices, the three indices with the highest accuracy are selected for auxiliary optimization of Jinan's PTRN. The link prediction results show that the overall layout of Jinan's PTRN is stable and orderly, except for a partial area that requires optimization and reconstruction; this pattern conforms to the general pattern of the optimal development stage of PTRNs in China. Finally, based on missing (new) link prediction and spurious existing link identification, we propose optimization schemes that can be used not only to optimize the current PTRN but also to evaluate PTRN planning.
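Structural-similarity link prediction of the kind invoked here scores candidate node pairs by local neighborhood overlap. The sketch below shows two widely used indices (Common Neighbors and Resource Allocation) on an invented toy adjacency list; the specific indices selected for Jinan's PTRN are not reproduced.

```python
# Illustrative similarity indices for link prediction on an undirected
# graph stored as an adjacency dict; graph values are invented.

def neighbors(adj, u):
    return adj.get(u, set())

def common_neighbors(adj, u, v):
    """CN index: number of shared neighbors of u and v."""
    return len(neighbors(adj, u) & neighbors(adj, v))

def resource_allocation(adj, u, v):
    """RA index: sum of 1/degree(w) over common neighbors w; it
    down-weights shared hubs relative to plain neighbor counting."""
    return sum(1.0 / len(neighbors(adj, w))
               for w in neighbors(adj, u) & neighbors(adj, v))

adj = {
    0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2, 4}, 4: {3},
}
cn_03 = common_neighbors(adj, 0, 3)       # shared neighbors: nodes 1 and 2
ra_03 = resource_allocation(adj, 0, 3)    # 1/3 + 1/3
```

Ranking all unconnected pairs by such a score yields the candidate "missing" links; conversely, existing links with anomalously low scores are candidates for spurious-link review.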

  17. Flight Test of an Adaptive Configuration Optimization System for Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Gilyard, Glenn B.; Georgie, Jennifer; Barnicki, Joseph S.

    1999-01-01

    A NASA Dryden Flight Research Center program explores the practical application of real-time adaptive configuration optimization for enhanced transport performance on an L-1011 aircraft. The approach is based on calculating incremental drag from forced-response, symmetric, outboard aileron maneuvers. In real-time operation, the symmetric outboard aileron deflection is directly optimized, and the horizontal stabilator and angle of attack are indirectly optimized. A flight experiment has been conducted from an onboard research engineering test station, and flight research results are presented herein. The optimization system has demonstrated the capability of determining the minimum-drag configuration of the aircraft in real time. The drag-minimization algorithm can identify drag to approximately the one-drag-count level. Optimizing the symmetric outboard aileron position yields a drag reduction of 2-3 drag counts (approximately 1 percent). Analysis of the maneuvers indicates that two-sided raised-cosine maneuvers improve the definition of the symmetric outboard aileron drag effect, thereby improving the consistency of the analysis results. Ramp maneuvers provide a more even distribution of data collection as a function of excitation deflection than raised-cosine maneuvers. A commercial operational system would require air-data calculations and the normal output of current inertial navigation systems; engine pressure ratio measurements would be optional.

  18. Toward a science of tumor forecasting for clinical oncology

    DOE PAGES

    Yankeelov, Thomas E.; Quaranta, Vito; Evans, Katherine J.; ...

    2015-03-15

    We propose that the quantitative cancer biology community make a concerted effort to apply lessons from weather forecasting to develop an analogous methodology for predicting and evaluating tumor growth and treatment response. Currently, the time course of tumor response is not predicted; instead, response is only assessed post hoc by physical examination or imaging methods. This fundamental practice within clinical oncology limits optimization of a treatment regimen for an individual patient, as well as the ability to determine in real time whether the choice was in fact appropriate. This is especially frustrating at a time when a panoply of molecularly targeted therapies is available, and precision genetic or proteomic analyses of tumors are an established reality. By learning from the methods of weather and climate modeling, we submit that the forecasting power of biophysical and biomathematical modeling can be harnessed to hasten the arrival of a field of predictive oncology. Furthermore, with a successful methodology toward tumor forecasting, it should be possible to integrate large tumor-specific datasets of varied types and effectively defeat cancer one patient at a time.

  19. Towards a Science of Tumor Forecasting for Clinical Oncology

    PubMed Central

    Yankeelov, Thomas E.; Quaranta, Vito; Evans, Katherine J.; Rericha, Erin C.

    2015-01-01

    We propose that the quantitative cancer biology community make a concerted effort to apply lessons from weather forecasting to develop an analogous methodology for predicting and evaluating tumor growth and treatment response. Currently, the time course of tumor response is not predicted; instead, response is only assessed post hoc by physical exam or imaging methods. This fundamental practice within clinical oncology limits optimization of a treatment regimen for an individual patient, as well as the ability to determine in real time whether the choice was in fact appropriate. This is especially frustrating at a time when a panoply of molecularly targeted therapies is available, and precision genetic or proteomic analyses of tumors are an established reality. By learning from the methods of weather and climate modeling, we submit that the forecasting power of biophysical and biomathematical modeling can be harnessed to hasten the arrival of a field of predictive oncology. With a successful methodology towards tumor forecasting, it should be possible to integrate large tumor-specific datasets of varied types, and effectively defeat cancer one patient at a time. PMID:25592148

  20. Toward a science of tumor forecasting for clinical oncology.

    PubMed

    Yankeelov, Thomas E; Quaranta, Vito; Evans, Katherine J; Rericha, Erin C

    2015-03-15

    We propose that the quantitative cancer biology community make a concerted effort to apply lessons from weather forecasting to develop an analogous methodology for predicting and evaluating tumor growth and treatment response. Currently, the time course of tumor response is not predicted; instead, response is only assessed post hoc by physical examination or imaging methods. This fundamental practice within clinical oncology limits optimization of a treatment regimen for an individual patient, as well as the ability to determine in real time whether the choice was in fact appropriate. This is especially frustrating at a time when a panoply of molecularly targeted therapies is available, and precision genetic or proteomic analyses of tumors are an established reality. By learning from the methods of weather and climate modeling, we submit that the forecasting power of biophysical and biomathematical modeling can be harnessed to hasten the arrival of a field of predictive oncology. With a successful methodology toward tumor forecasting, it should be possible to integrate large tumor-specific datasets of varied types and effectively defeat cancer one patient at a time. ©2015 American Association for Cancer Research.

  1. Physicochemical approach to freshwater microalgae harvesting with magnetic particles.

    PubMed

    Prochazkova, Gita; Podolova, Nikola; Safarik, Ivo; Zachleder, Vilem; Branyik, Tomas

    2013-12-01

    Magnetic harvesting of microalgal biomass provides an attractive alternative to conventional methods. The approach to this issue has so far been pragmatic, focused mainly on finding cheap magnetic agents that work with harvestable microalgae species. The aim of this work was to study, experimentally and theoretically, the mechanisms leading to cell-magnetic agent attachment/detachment, using real experiments and predictions made by a colloidal adhesion (XDLVO) model. Two types of well-defined magnetic beads (MBs) carrying ion-exchange functional groups (DEAE - diethylaminoethyl and PEI - polyethylenimine) were studied in combination with the microalga Chlorella vulgaris. Optimal harvesting efficiencies (>90%) were found for both DEAE and PEI MBs, while efficient detachment (>90%) was achieved only for DEAE MBs. These findings were in accordance with the predictions of the XDLVO model, except for a discrepancy between the XDLVO prediction and the poor detachment of PEI MBs from the microalgal surface. The latter can be ascribed to an additional interaction (probably covalent bonding) between PEI and the algal surface, which the XDLVO model is unable to capture given its non-covalent nature. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. Model predictive controller design for boost DC-DC converter using T-S fuzzy cost function

    NASA Astrophysics Data System (ADS)

    Seo, Sang-Wha; Kim, Yong; Choi, Han Ho

    2017-11-01

    This paper proposes a Takagi-Sugeno (T-S) fuzzy method for selecting the cost-function weights of finite-control-set model predictive DC-DC converter control algorithms. The proposed method updates the cost-function weights at every sampling instant using T-S fuzzy rules derived from the common optimal-control engineering knowledge that a state or input variable with an excessively large magnitude can be penalised by increasing its corresponding weight. The best control input is determined via online optimisation of the T-S fuzzy cost function over all possible control input sequences. The proposed model predictive control algorithm is implemented in real time on a Texas Instruments TMS320F28335 floating-point Digital Signal Processor (DSP). Experimental results are given to illustrate the practicality and effectiveness of the proposed control system under several operating conditions. The results verify that the method yields not only good transient and steady-state responses (fast recovery time, small overshoot, zero steady-state error, etc.) but also insensitivity to abrupt load or input-voltage parameter variations.
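One finite-control-set MPC step of this kind can be sketched as: enumerate the admissible switch states, predict one step ahead, and pick the state with the lowest weighted cost. Everything below is an invented illustration: the boost-converter parameters, the fixed weights, and the one-step Euler model are assumptions, and the paper's T-S fuzzy weight adaptation is omitted.

```python
# Hypothetical FCS-MPC step for a boost converter. State: inductor
# current iL and capacitor voltage vC; s is the switch state (1 = on).

def predict(iL, vC, s, Vin=12.0, L=1e-3, C=100e-6, R=10.0, Ts=1e-5):
    """One-step forward-Euler prediction of the averaged boost model."""
    diL = (Vin - (1 - s) * vC) / L       # inductor dynamics
    dvC = ((1 - s) * iL - vC / R) / C    # capacitor dynamics
    return iL + Ts * diL, vC + Ts * dvC

def best_input(iL, vC, vref, iref, w_v=1.0, w_i=0.1):
    """Evaluate the weighted cost for each switch state; return the best.
    In the paper, w_v and w_i would be updated by T-S fuzzy rules."""
    costs = {}
    for s in (0, 1):
        iL1, vC1 = predict(iL, vC, s)
        costs[s] = w_v * (vref - vC1) ** 2 + w_i * (iref - iL1) ** 2
    return min(costs, key=costs.get)

s = best_input(iL=2.0, vC=20.0, vref=24.0, iref=4.0)
```

With only two switch states the "optimisation over all possible control input sequences" reduces to evaluating the cost twice per sampling instant, which is what makes the method feasible on a fixed-point-budget DSP.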

  3. Real Time Monitoring and Prediction of the Monsoon Intraseasonal Oscillations: An index based on Nonlinear Laplacian Spectral Analysis Technique

    NASA Astrophysics Data System (ADS)

    Cherumadanakadan Thelliyil, S.; Ravindran, A. M.; Giannakis, D.; Majda, A.

    2016-12-01

    An improved index for real-time monitoring and forecast verification of monsoon intraseasonal oscillations (MISO) is introduced using the recently developed Nonlinear Laplacian Spectral Analysis (NLSA) algorithm. Previous studies have demonstrated the proficiency of NLSA in capturing the low-frequency variability and intermittency of a time series. Using NLSA, a hierarchy of Laplace-Beltrami (LB) eigenfunctions is extracted from unfiltered daily GPCP rainfall data over the south Asian monsoon region. Two modes representing the full life cycle of the complex northeastward-propagating boreal summer MISO are identified from this hierarchy. These two MISO modes have a number of advantages over the conventionally used Extended Empirical Orthogonal Function (EEOF) MISO modes, including longer memory and better predictability, higher fractional variance over the western Pacific, Western Ghats, and adjoining Arabian Sea regions, and a more realistic representation of the regional heat sources associated with the MISO. The skill of the NLSA-based MISO indices in real-time prediction of MISO is demonstrated using hindcasts of CFSv2 extended-range prediction runs. These indices yield a higher prediction skill than other conventional indices, supporting the use of NLSA in real-time prediction of MISO. Real-time monitoring and prediction of MISO finds application in the agriculture, construction, and hydroelectric power sectors and is hence an important component of monsoon prediction.

  4. Imaging multicellular specimens with real-time optimized tiling light-sheet selective plane illumination microscopy

    PubMed Central

    Fu, Qinyi; Martin, Benjamin L.; Matus, David Q.; Gao, Liang

    2016-01-01

    Despite the progress made in selective plane illumination microscopy (SPIM), high-resolution 3D live imaging of multicellular specimens remains challenging. Tiling light-sheet selective plane illumination microscopy (TLS-SPIM) with real-time light-sheet optimization was developed to address this challenge. It improves the 3D imaging ability of SPIM in resolving complex structures and optimizes SPIM live-imaging performance by using a real-time adjustable tiling light sheet and creating a flexible compromise between spatial and temporal resolution. We demonstrate the 3D live-imaging ability of TLS-SPIM by imaging cellular and subcellular behaviours in live C. elegans and zebrafish embryos, and show how TLS-SPIM can facilitate cell biology research in multicellular specimens by studying the left-right symmetry-breaking behaviour of C. elegans embryos. PMID:27004937

  5. Optimizing Tsunami Forecast Model Accuracy

    NASA Astrophysics Data System (ADS)

    Whitmore, P.; Nyland, D. L.; Huang, P. Y.

    2015-12-01

    Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy using two different tsunami forecast models is compared for seven events since 2006, based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy obtained during an event with modified applications of the models after the fact provide improved methods for real-time forecasting of future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea-level data into the forecast model. Results show that assimilating sea-level data into the models increases accuracy by approximately 15% for the events examined.

  6. Adaptive Network Dynamics - Modeling and Control of Time-Dependent Social Contacts

    PubMed Central

    Schwartz, Ira B.; Shaw, Leah B.; Shkarayev, Maxim S.

    2013-01-01

    Real networks of social contacts do not possess static connections. That is, social connections may be time dependent due to a variety of individual behavioral decisions based on current network connections. Examples of adaptive networks occur in epidemics, where information about infectious individuals may change how healthy people rewire their contacts, or in the recruitment of individuals to a cause or fad, where rewiring may optimize the recruitment of susceptible individuals. In this paper, we review some of the dynamical properties of adaptive networks and show how they predict novel phenomena as well as yield insight into new controls. The applications are the control of epidemic outbreaks and the modeling of terrorist recruitment. PMID:25414913

  7. Nonsequential modeling of laser diode stacks using Zemax: simulation, optimization, and experimental validation.

    PubMed

    Coluccelli, Nicola

    2010-08-01

    The modeling of a real laser diode stack using the Zemax ray-tracing software operating in nonsequential mode is reported. The implementation of the model is presented, together with the geometric and optical parameters to be adjusted to calibrate the model and to match the simulated irradiance profiles with the experimental profiles. The calibration of the model is based on a near-field and a far-field measurement. The validation of the model has been accomplished by comparing the simulated and experimental transverse irradiance profiles at different positions along the caustic formed by a lens. Spot sizes and waist location are predicted with a maximum error below 6%.

  8. Revolutionize Situational Awareness in Emergencies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hehlen, Markus Peter

    This report describes an integrated system that provides real-time actionable information to first responders. LANL will integrate three technologies to form an advanced predictive real-time sensor network, including compact chemical and wind sensors in a low-cost rugged package for outdoor installation; a flexible, robust communication architecture linking sensors in near-real time to globally accessible servers; and the QUIC code, which predicts contaminant transport and dispersal in urban environments in near real time.

  9. The Temporal Dimension of Linguistic Prediction

    ERIC Educational Resources Information Center

    Chow, Wing Yee

    2013-01-01

    This thesis explores how predictions about upcoming language inputs are computed during real-time language comprehension. Previous research has demonstrated humans' ability to use rich contextual information to compute linguistic prediction during real-time language comprehension, and it has been widely assumed that contextual information can…

  10. Fast and reliable method to estimate losses of single-mode waveguides with an arbitrary 2D trajectory.

    PubMed

    Negredo, F; Blaicher, M; Nesic, A; Kraft, P; Ott, J; Dörfler, W; Koos, C; Rockstuhl, C

    2018-06-01

    Photonic wire bonds, i.e., freeform waveguides written by 3D direct laser writing, are emerging as a technology to connect different optical chips in fully integrated photonic devices. With the long-term vision of scaling up this technology to a large-scale fabrication process, the in situ optimization of the trajectory of photonic wire bonds becomes essential. A prerequisite for this real-time optimization is the availability of a fast loss estimator for single-mode waveguides of arbitrary trajectory. Losses occur because of the bending of the waveguides and at transitions between sections of the waveguide with different curvatures. Here, we present an approach that relies on the fundamental mode approximation, i.e., the assumption that photonic wire bonds predominantly carry their energy in a single mode. It allows us to predict the pertinent losses quickly and reliably from pre-computed modal properties of the waveguide, enabling fast design of optimum paths.
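
Under the fundamental-mode approximation, the loss estimate reduces to integrating a pre-computed curvature-dependent loss coefficient along the discretized trajectory. A minimal sketch of that reduction (illustrative only; `alpha_of_curvature` is a hypothetical stand-in for the pre-computed modal loss data, and this is not the authors' code):

```python
import numpy as np

def path_loss_dB(points, alpha_of_curvature):
    """Estimate total bend loss along a discretized 2D trajectory.

    points : (n, 2) array of waypoints along the waveguide
    alpha_of_curvature : callable kappa -> loss coefficient (dB per unit
        length); in practice this would come from pre-computed modal data.
    """
    p = np.asarray(points, dtype=float)
    d1 = np.gradient(p, axis=0)        # first derivative wrt sample index
    d2 = np.gradient(d1, axis=0)       # second derivative
    speed = np.hypot(d1[:, 0], d1[:, 1])
    # curvature of a parametric curve: (x'y'' - y'x'') / |r'|^3
    kappa = (d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]) / speed**3
    ds = speed                          # arc-length element per index step
    return float(np.sum(alpha_of_curvature(np.abs(kappa)) * ds))
```

A convenient sanity check: a circle of radius R has constant curvature 1/R, so the total loss is alpha(1/R) times the circumference.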

  11. The lawful imprecision of human surface tilt estimation in natural scenes

    PubMed Central

    2018-01-01

    Estimating local surface orientation (slant and tilt) is fundamental to recovering the three-dimensional structure of the environment. It is unknown how well humans perform this task in natural scenes. Here, with a database of natural stereo-images having groundtruth surface orientation at each pixel, we find dramatic differences in human tilt estimation with natural and artificial stimuli. Estimates are precise and unbiased with artificial stimuli and imprecise and strongly biased with natural stimuli. An image-computable Bayes optimal model grounded in natural scene statistics predicts human bias, precision, and trial-by-trial errors without fitting parameters to the human data. The similarities between human and model performance suggest that the complex human performance patterns with natural stimuli are lawful, and that human visual systems have internalized local image and scene statistics to optimally infer the three-dimensional structure of the environment. These results generalize our understanding of vision from the lab to the real world. PMID:29384477

  12. The lawful imprecision of human surface tilt estimation in natural scenes.

    PubMed

    Kim, Seha; Burge, Johannes

    2018-01-31

    Estimating local surface orientation (slant and tilt) is fundamental to recovering the three-dimensional structure of the environment. It is unknown how well humans perform this task in natural scenes. Here, with a database of natural stereo-images having groundtruth surface orientation at each pixel, we find dramatic differences in human tilt estimation with natural and artificial stimuli. Estimates are precise and unbiased with artificial stimuli and imprecise and strongly biased with natural stimuli. An image-computable Bayes optimal model grounded in natural scene statistics predicts human bias, precision, and trial-by-trial errors without fitting parameters to the human data. The similarities between human and model performance suggest that the complex human performance patterns with natural stimuli are lawful, and that human visual systems have internalized local image and scene statistics to optimally infer the three-dimensional structure of the environment. These results generalize our understanding of vision from the lab to the real world. © 2018, Kim et al.

  13. Real-time estimation of BDS/GPS high-rate satellite clock offsets using sequential least squares

    NASA Astrophysics Data System (ADS)

    Fu, Wenju; Yang, Yuanxi; Zhang, Qin; Huang, Guanwen

    2018-07-01

    The real-time precise satellite clock product is one of the key prerequisites for real-time Precise Point Positioning (PPP). The accuracy of the 24-hour predicted satellite clock product with a 15 min sampling interval and a 6 h update rate provided by the International GNSS Service (IGS) is only 3 ns, which cannot meet the needs of all real-time PPP applications. The real-time estimation of high-rate satellite clock offsets is an efficient method for improving this accuracy. In this paper, a sequential least squares method for estimating real-time satellite clock offsets at a high sample rate is proposed; it improves computational speed by applying an optimized sparse matrix operation to compute the normal equation and by using special measures to take full advantage of modern computing power. The method is first applied to the BeiDou Navigation Satellite System (BDS) and provides real-time estimation at a 1 s sample rate. The results show that the time taken to process a single epoch is about 0.12 s using 28 stations. The Standard Deviation (STD) and Root Mean Square (RMS) of the real-time estimated BDS satellite clock offsets are 0.17 ns and 0.44 ns, respectively, when compared to German Research Centre for Geosciences (GFZ) final clock products. The positioning performance of the real-time estimated satellite clock offsets is evaluated. The RMSs of the real-time BDS kinematic PPP in the east, north, and vertical components are 7.6 cm, 6.4 cm, and 19.6 cm, respectively. The method is also applied to the Global Positioning System (GPS) at a 10 s sample rate, and the computational time of most epochs is less than 1.5 s with 75 stations. The STD and RMS of the real-time estimated GPS satellite clocks are 0.11 ns and 0.27 ns, respectively. Accuracies of 5.6 cm, 2.6 cm, and 7.9 cm in the east, north, and vertical components are achieved for the real-time GPS kinematic PPP.
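
The sequential accumulation of normal equations behind such an estimator can be sketched in a few lines (an illustrative toy with dense matrices and hypothetical names, not the authors' sparse-matrix implementation):

```python
import numpy as np

def sequential_ls_update(N, b, A_new, y_new):
    """Fold one epoch of observations into the running normal equations.

    N : (n, n) accumulated normal matrix (sum of A^T A over past epochs)
    b : (n,)   accumulated right-hand side (sum of A^T y over past epochs)
    A_new, y_new : design matrix and observation vector of the new epoch

    Returns the updated (N, b) and the current least-squares estimate.
    """
    N = N + A_new.T @ A_new
    b = b + A_new.T @ y_new
    x_hat = np.linalg.solve(N, b)  # dense solve; the paper exploits sparsity here
    return N, b, x_hat
```

Because only N and b are carried between epochs, each update touches a fixed-size system rather than the full observation history, which is what makes sub-second per-epoch processing plausible.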

  14. Statistical and engineering methods for model enhancement

    NASA Astrophysics Data System (ADS)

    Chang, Chia-Jung

    Models that describe the performance of physical processes are essential for quality prediction, experimental planning, process control, and optimization. Engineering models developed from the underlying physics/mechanics of the process, such as analytic models or finite element models, are widely used to capture the deterministic trend of the process. However, there usually exists stochastic randomness in the system, which may introduce discrepancies between physics-based model predictions and observations in reality. Alternatively, statistical models can be developed to obtain predictions purely from the data generated by the process. However, such models tend to perform poorly when predictions are made away from the observed data points. This dissertation contributes to model enhancement research by integrating physics-based and statistical models to mitigate their individual drawbacks and provide models with better accuracy by combining the strengths of both. The proposed model enhancement methodologies comprise two streams: (1) a data-driven enhancement approach and (2) an engineering-driven enhancement approach. Through these efforts, more adequate models are obtained, which leads to better performance in system forecasting, process monitoring, and decision optimization. Among data-driven enhancement approaches, the Gaussian Process (GP) model provides a powerful methodology for calibrating a physical model in the presence of model uncertainties. However, if the data contain systematic experimental errors, the GP model can lead to an unnecessarily complex adjustment of the physical model. In Chapter 2, we propose a novel enhancement procedure, named "Minimal Adjustment", which brings the physical model closer to the data by making minimal changes to it.
This is achieved by approximating the GP model with a linear regression model and then applying simultaneous variable selection over the model and experimental bias terms. Two real examples and simulations are presented to demonstrate the advantages of the proposed approach. In contrast to enhancing the model from a data-driven perspective, an alternative approach is to adjust the model by incorporating additional domain or engineering knowledge when available. This often leads to models that are very simple and easy to interpret. The concepts of engineering-driven enhancement are carried out through two applications to demonstrate the proposed methodologies. In the first application, which concerns polymer composite quality, nanoparticle dispersion has been identified as a crucial factor affecting the mechanical properties. Transmission Electron Microscopy (TEM) images are commonly used to represent nanoparticle dispersion without further quantification of its characteristics. In Chapter 3, we develop an engineering-driven nonhomogeneous Poisson random field modeling strategy to characterize the nanoparticle dispersion status of nanocomposite polymers, quantitatively representing the nanomaterial quality presented in image data. The model parameters are estimated through a Bayesian MCMC technique to overcome the challenge of the limited amount of accessible data due to time-consuming sampling schemes. The second application statistically calibrates the engineering-driven force models of the laser-assisted micro milling (LAMM) process, which facilitates a systematic understanding and optimization of the targeted processes. In Chapter 4, the force prediction interval is derived by incorporating the variability in the runout parameters as well as the variability in the measured cutting forces. The experimental results indicate that the model predicts the cutting force profile with good accuracy using a 95% confidence interval.
To conclude, this dissertation draws attention to model enhancement, which has considerable impact on the modeling, design, and optimization of various processes and systems. The fundamental methodologies of model enhancement are developed and applied to various applications. These research activities produced engineering-compliant models for adequate system prediction based on observational data with complex variable relationships and uncertainty, facilitating process planning, monitoring, and real-time control.

  15. Liver Stiffness Measured by Two-Dimensional Shear-Wave Elastography: Prognostic Value after Radiofrequency Ablation for Hepatocellular Carcinoma.

    PubMed

    Lee, Dong Ho; Lee, Jeong Min; Yoon, Jung-Hwan; Kim, Yoon Jun; Lee, Jeong-Hoon; Yu, Su Jong; Han, Joon Koo

    2018-03-01

    To evaluate the prognostic value of liver stiffness (LS) measured using two-dimensional (2D) shear-wave elastography (SWE) in patients with hepatocellular carcinoma (HCC) treated by radiofrequency ablation (RFA). The Institutional Review Board approved this retrospective study, and informed consent was obtained from all patients. A total of 134 patients with up to 3 HCCs ≤5 cm who had undergone pre-procedural 2D-SWE prior to RFA treatment between January 2012 and December 2013 were enrolled. LS values were measured using real-time 2D-SWE before RFA on the procedural day. After a mean follow-up of 33.8 ± 9.9 months, we analyzed the overall survival after RFA using the Kaplan-Meier method and a Cox proportional hazard regression model. The optimal cutoff LS value to predict overall survival was determined using the minimal p value approach. During the follow-up period, 22 patients died, and the estimated 1- and 3-year overall survival rates were 96.4 and 85.8%, respectively. LS measured by 2D-SWE was found to be a significant predictive factor for overall survival after RFA of HCCs, as was the presence of extrahepatic metastases. The optimal cutoff LS value for the prediction of overall survival was determined to be 13.3 kPa. In our study, 71 patients had LS values ≥13.3 kPa, and their estimated 3-year overall survival was 76.8%, compared to 96.3% in the 63 patients with LS values <13.3 kPa. This difference was statistically significant (hazard ratio = 4.30 [1.26-14.7]; p = 0.020). LS values measured by 2D-SWE were a significant predictive factor for overall survival after RFA for HCC.

  16. Road map to adaptive optimal control. [jet engine control

    NASA Technical Reports Server (NTRS)

    Boyer, R.

    1980-01-01

    A building-block control structure leading toward adaptive, optimal control for jet engines is developed. This approach simplifies the addition of new features and allows easier checkout of the control by providing a baseline system for comparison. It is also possible to eliminate features that do not pay off by being selective in adding new building blocks to the baseline system. The minimum-risk approach specifically addresses the need for active identification of the plant to be controlled in real time and for real-time optimization of the control for the identified plant.

  17. Optical realization of optimal symmetric real state quantum cloning machine

    NASA Astrophysics Data System (ADS)

    Hu, Gui-Yu; Zhang, Wen-Hai; Ye, Liu

    2010-01-01

    We present an experimentally uniform linear optical scheme to implement the optimal 1→2 symmetric and optimal 1→3 symmetric economical real-state quantum cloning machines for the polarization state of a single photon. The scheme requires single-photon sources and a two-photon polarization entangled state as input states. It also involves linear optical elements and three-photon coincidence detection. We then consider the realistic realization of the scheme using parametric down-conversion as the photon resource. It is shown that, under certain conditions, the scheme is feasible with current experimental technology.

  18. Real-Time Optimization in Complex Stochastic Environment

    DTIC Science & Technology

    2015-06-24

  19. Controlling herding in minority game systems

    NASA Astrophysics Data System (ADS)

    Zhang, Ji-Qiang; Huang, Zi-Gang; Wu, Zhi-Xi; Su, Riqi; Lai, Ying-Cheng

    2016-02-01

    Resource allocation takes place in various types of real-world complex systems, such as urban traffic, social service institutions, economies, and ecosystems. Mathematically, the dynamical process of resource allocation can be modeled as a minority game. Spontaneous evolution of the resource allocation dynamics, however, often leads to a harmful herding behavior accompanied by strong fluctuations, in which a large majority of agents crowd temporarily around a few resources, leaving many others unused. Developing effective control methods to suppress and eliminate herding is an important but open problem. Here we develop a pinning control method; the observation that the fluctuations of the system consist of intrinsic and systematic components allows us to design a control scheme with separated control variables. A striking finding is the universal existence of an optimal pinning fraction that minimizes the variance of the system, regardless of the pinning patterns and the network topology. We develop a generally applicable theory to explain the emergence of optimal pinning and to predict the dependence of the optimal pinning fraction on the network topology. Our work represents a general framework for the broader problem of controlling collective dynamics in complex systems, with potential applications in social, economic, and political systems.

  20. Enhancing Nursing Staffing Forecasting With Safety Stock Over Lead Time Modeling.

    PubMed

    McNair, Douglas S

    2015-01-01

    In balancing competing priorities, it is essential that nurse staffing provide enough nurses to safely and effectively care for the patients. Mathematical models to predict optimal "safety stocks" have been routine in supply chain management for many years but until now have not been applied in nursing workforce management. The 2 disciplines exhibit various similarities, such as an evolving demand forecast according to acuity and the fact that provisioning "stock" to meet demand in a future period has nonzero, variable lead time. Under assumptions about the forecasts (eg, the demand process is well fit by an autoregressive process) and about the labor supply process (≥1 shifts' lead time), we show that safety stock over lead time for such systems is effectively equivalent to the corresponding well-studied problem for systems with stationary demand bounds and base stock policies. Hence, we can apply existing models from supply chain analytics to find the optimal safety levels of nurse staffing. We use a case study with real data to demonstrate that there are significant benefits from including the forecast process when determining the optimal safety stocks.
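
The supply-chain machinery referenced here can be made concrete with the textbook safety-stock-over-lead-time formula (a generic sketch, not the model fitted in the paper; the AR(1) adjustment is an assumed illustration of an autoregressive demand process):

```python
from math import sqrt
from statistics import NormalDist

def safety_stock(service_level, sigma_demand, lead_time, phi=0.0):
    """Safety stock covering demand-forecast error over a lead time of L shifts.

    service_level : target probability of covering demand over the lead time
    sigma_demand  : per-shift standard deviation of the demand forecast error
    lead_time     : lead time L in whole shifts
    phi           : AR(1) autocorrelation of the demand process;
                    phi = 0 recovers the classic z * sigma * sqrt(L) rule
    """
    z = NormalDist().inv_cdf(service_level)
    L = lead_time
    # Variance of the sum of L AR(1)-correlated shift errors:
    #   sigma^2 * (L + 2 * sum_{k=1}^{L-1} (L - k) * phi^k)
    var = sigma_demand**2 * (L + 2 * sum((L - k) * phi**k for k in range(1, L)))
    return z * sqrt(var)
```

With phi = 0 this collapses to the familiar z·σ·√L rule; positive autocorrelation in demand inflates the required safety stock.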

  1. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology.

    PubMed

    Faltermeier, Rupert; Proescholdt, Martin A; Bele, Sylvia; Brawanski, Alexander

    2015-01-01

    Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time-resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method in an ICU, it is essential to tune this mathematical tool for high sensitivity and reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients as represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and for a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters improves the sensitivity of the method by a factor greater than four in comparison to our first analyses.

  2. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology

    PubMed Central

    Faltermeier, Rupert; Proescholdt, Martin A.; Bele, Sylvia; Brawanski, Alexander

    2015-01-01

    Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time-resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method in an ICU, it is essential to tune this mathematical tool for high sensitivity and reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients as represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and for a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters improves the sensitivity of the method by a factor greater than four in comparison to our first analyses. PMID:26693250

  3. A Simple Label Switching Algorithm for Semisupervised Structural SVMs.

    PubMed

    Balamurugan, P; Shevade, Shirish; Sundararajan, S

    2015-10-01

    In structured output learning, obtaining labeled data for real-world applications is usually costly, while unlabeled examples are available in abundance. Semisupervised structured classification deals with a small number of labeled examples and a large number of unlabeled structured data. In this work, we consider semisupervised structural support vector machines with domain constraints. The optimization problem, which in general is not convex, contains the loss terms associated with the labeled and unlabeled examples, along with the domain constraints. We propose a simple optimization approach that alternates between solving a supervised learning problem and a constraint matching problem. Solving the constraint matching problem is difficult for structured prediction, and we propose an efficient and effective label switching method to solve it. The alternating optimization is carried out within a deterministic annealing framework, which helps in effective constraint matching and in avoiding poor local minima. The algorithm is simple and easy to implement, and it is suitable for any structured output learning problem where exact inference is available. Experiments on benchmark sequence labeling data sets and a natural language parsing data set show that the proposed approach, though simple, achieves comparable generalization performance.

  4. A new real-time guidance strategy for aerodynamic ascent flight

    NASA Astrophysics Data System (ADS)

    Yamamoto, Takayuki; Kawaguchi, Jun'ichiro

    2007-12-01

    Reusable launch vehicles are conceived to constitute the future space transportation system. If these vehicles use air-breathing propulsion and lift, taking off horizontally, the optimal steering for them exhibits completely different behavior from that in conventional rocket flight. In this paper, a new guidance strategy is proposed. The method derives from the optimality condition for steering, and an analysis concludes that the steering function takes a form comprising linear and logarithmic terms, which include only four parameters. Parameter optimization of this method shows that the acquired terminal horizontal velocity is almost the same as that obtained by direct numerical optimization, which supports the parameterized linear-logarithmic steering law. It is also shown that there exists a simple linear relation between the terminal states and the parameters to be corrected. This relation allows the parameters to be determined in real time so as to satisfy the terminal boundary conditions. The paper presents guidance results for practical application cases. The results show that the guidance performs well and satisfies the specified terminal boundary conditions. The strategy built and presented here guarantees a robust real-time solution without any optimization process and is found to be quite practical.

  5. Evaluating the effects of real power losses in optimal power flow based storage integration

    DOE PAGES

    Castillo, Anya; Gayme, Dennice

    2017-03-27

    This study proposes a DC optimal power flow (DCOPF) with losses formulation (the ℓ-DCOPF+S problem) and uses it to investigate the role of real power losses in OPF-based grid-scale storage integration. We derive the ℓ-DCOPF+S problem by augmenting a standard DCOPF with storage (DCOPF+S) problem to include quadratic real power loss approximations. This procedure leads to a multi-period nonconvex quadratically constrained quadratic program, which we prove can be solved to optimality using either a semidefinite or second order cone relaxation. Our approach has some important benefits over existing models. It is more computationally tractable than ACOPF with storage (ACOPF+S) formulations, and the provably exact convex relaxations guarantee that an optimal solution can be attained for a feasible problem. Adding loss approximations to a DCOPF+S model leads to a more accurate representation of locational marginal prices, which have been shown to be critical to determining optimal storage dispatch and siting in prior ACOPF+S based studies. Case studies demonstrate the improved accuracy of the ℓ-DCOPF+S model over a DCOPF+S model and the computational advantages over an ACOPF+S formulation.
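
The quadratic loss approximation that the DCOPF-with-losses formulation adds to a lossless DC model can be illustrated for a single line (a textbook small-angle sketch under a flat-voltage assumption, not the authors' relaxation or solver):

```python
def dc_line_flow_and_loss(theta_i, theta_j, b_ij, g_ij):
    """DC line flow plus a quadratic real-power loss approximation.

    theta_i, theta_j : bus voltage angles (rad)
    b_ij, g_ij       : line susceptance and conductance (p.u.)

    flow ~ b_ij * (theta_i - theta_j)       standard lossless DC model
    loss ~ g_ij * (theta_i - theta_j)**2    second-order loss term; summed
                                            over lines, these quadratic terms
                                            are what makes the problem a QCQP
    """
    d = theta_i - theta_j
    return b_ij * d, g_ij * d * d
```

Embedded in the multi-period storage problem, these quadratic terms are the nonconvex part that the paper's semidefinite and second-order cone relaxations handle.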

  6. Implementation and testing of the travel time prediction system (TIPS) : final report, May 2001.

    DOT National Transportation Integrated Search

    2001-05-01

    The Travel Time Prediction System (TIPS) is a portable automated system for predicting and displaying travel time for motorists in advance of and through freeway construction work zones, on a real-time basis. It collects real-time traffic flow data u...

  7. Implementation and testing of the travel time prediction system (TIPS) : executive summary, May 2001.

    DOT National Transportation Integrated Search

    2001-05-01

    The Travel Time Prediction System (TIPS) is a portable automated system for predicting and displaying travel time for motorists in advance of and through freeway construction work zones, on a real-time basis. It collects real-time traffic flow data u...

  8. The Real World Significance of Performance Prediction

    ERIC Educational Resources Information Center

    Pardos, Zachary A.; Wang, Qing Yang; Trivedi, Shubhendu

    2012-01-01

    In recent years, the educational data mining and user modeling communities have been aggressively introducing models for predicting student performance on external measures such as standardized tests as well as within-tutor performance. While these models have brought statistically reliable improvement to performance prediction, the real world…

  9. Advanced modelling, monitoring, and process control of bioconversion systems

    NASA Astrophysics Data System (ADS)

    Schmitt, Elliott C.

    Production of fuels and chemicals from lignocellulosic biomass is an increasingly important area of research and industrialization throughout the world. In order to be competitive with fossil-based fuels and chemicals, maintaining cost-effectiveness is critical. Advanced process control (APC) and optimization methods could significantly reduce operating costs in the biorefining industry. Two reasons APC has previously proven challenging to implement for bioprocesses are the lack of suitable online sensor technology for key system components and the strongly nonlinear first-principles models required to predict bioconversion behavior. To overcome these challenges, batch fermentations with the acetogen Moorella thermoacetica were monitored with Raman spectroscopy for the conversion of real lignocellulosic hydrolysates, and a kinetic model for the conversion of synthetic sugars was developed. Raman spectroscopy was shown to be effective in monitoring the fermentation of sugarcane bagasse and sugarcane straw hydrolysate, where univariate models predicted acetate concentrations with a root mean square error of prediction (RMSEP) of 1.9 and 1.0 g L-1 for bagasse and straw, respectively. Multivariate partial least squares (PLS) models were employed to predict acetate, xylose, glucose, and total sugar concentrations for both hydrolysate fermentations. The PLS models were more robust than the univariate models and yielded a percent error of approximately 5% for both sugarcane bagasse and sugarcane straw. In addition, a screening technique is discussed for improving Raman spectra of hydrolysate samples prior to collecting fermentation data. Furthermore, a mechanistic model was developed to predict batch fermentation of synthetic glucose, xylose, and a mixture of the two sugars to acetate.
The models accurately described the bioconversion process, with an RMSEP of approximately 1 g L-1 for each model, and provided insights into how kinetic parameters changed during dual-substrate fermentation with diauxic growth. Model predictive control (MPC), an advanced process control strategy, is capable of utilizing nonlinear models and sensor feedback to provide optimal input while ensuring critical process constraints are met. Using Saccharomyces cerevisiae, a microorganism commonly used for biofuel production, and building on the work performed with M. thermoacetica, a nonlinear MPC was implemented on a continuous membrane cell-recycle bioreactor (MCRB) for the conversion of glucose to ethanol. The dilution rate was used to control the ethanol productivity of the system while maintaining total substrate conversion above the constraint of 98%. PLS multivariate models for glucose (RMSEP 1.5 g L-1) and ethanol (RMSEP 0.4 g L-1) were robust in predicting concentrations, and the mechanistic kinetic model accurately predicted continuous fermentation behavior. A setpoint trajectory for productivity, ranging from 2 - 4.5 g L-1 h-1, was closely tracked by the fermentation system using Raman measurements and an extended Kalman filter to estimate biomass concentrations. Overall, this work demonstrates an effective approach for real-time monitoring and control of a complex fermentation system.
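
The receding-horizon idea behind the dilution-rate controller can be caricatured as a one-step constrained search over candidate inputs (a deliberately simplified stand-in for nonlinear MPC; `predict` is a hypothetical model interface, and the real controller also uses Raman feedback and an extended Kalman filter):

```python
def mpc_choose_dilution(predict, setpoint, d_grid, min_conversion=0.98):
    """One receding-horizon step: pick the dilution rate whose predicted
    productivity best tracks the setpoint while meeting the conversion
    constraint.

    predict : callable D -> (productivity, conversion); a stand-in for
              the fermentation model
    setpoint : target productivity (g L-1 h-1)
    d_grid : candidate dilution rates to evaluate
    """
    best_d, best_err = None, float("inf")
    for d in d_grid:
        prod, conv = predict(d)
        if conv < min_conversion:
            continue  # constraint violated; candidate is infeasible
        err = abs(prod - setpoint)
        if err < best_err:
            best_d, best_err = d, err
    return best_d
```

A real MPC would optimize a full input trajectory over a prediction horizon and re-solve at every sample; the grid search above only conveys the "predict, constrain, pick the best input" structure.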

  10. Prediction of Land use changes using CA in GIS Environment

    NASA Astrophysics Data System (ADS)

    Kiavarz Moghaddam, H.; Samadzadegan, F.

    2009-04-01

    Urban growth is a typical self-organized system that results from the interaction between three defined systems: the developed urban system, the natural non-urban system, and the planned urban system. Urban growth simulation for an artificial city is carried out first. It evaluates a number of urban sprawl parameters, including the size and shape of the neighborhood, besides testing different types of constraints on urban growth simulation. The results indicate that a circular-type neighborhood shows smoother but faster urban growth compared to a nine-cell Moore neighborhood. Cellular Automata (CA) prove to be very efficient in simulating urban growth over time. The strength of this technology comes from the ability of the urban modeler to implement the growth simulation model, evaluate the results, and present the output simulation results in a visually interpretable environment. The artificial city simulation model provides an excellent environment to test a number of simulation parameters, such as the influence of the neighborhood on growth results and the role of constraints in driving urban growth. Also, CA rule definition is a critical stage in simulating the urban growth pattern in a manner close to reality. CA urban growth simulation and prediction for Tehran over the last four decades succeeds in simulating the specified test growth years at a high accuracy level. Some real data layers, such as the 1995 layer, were used in the CA simulation training phase, while others, such as the 2002 layer, were used for testing the prediction results. Tuning the CA growth rules by comparing the simulated images with the real data to obtain feedback is important. Note that the CA rules also need to be modified over time to adapt to the urban growth pattern. The region-based evaluation method has the advantage of covering the spatial-distribution component of the urban growth process.
The next step is running the developed CA simulation over classified raster data for three years in a developed ArcGIS extension. A set of crisp rules is defined and calibrated based on the real urban growth pattern. Uncertainty analysis is performed to evaluate the accuracy of the simulated results compared with the historical real data. The evaluation shows promising results, represented by the high average accuracies achieved: the average accuracy for the predicted growth images of 1964 and 2002 is over 80%. Modifying the CA growth rules over time to match changes in the growth pattern is important for obtaining an accurate simulation. This modification is based on the urban growth relationship for Tehran over time, as can be seen in the historical raster data. The feedback obtained from comparing the simulated and real data is crucial in identifying the optimal set of CA rules for reliable simulation and calibrating the growth steps.
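The Moore-neighborhood transition rule described in this abstract can be sketched as a minimal cellular automaton. The binary grid, urbanization threshold, and constraint mask below are illustrative assumptions, not the calibrated crisp rules used for Tehran:

```python
import numpy as np

def grow_step(grid, constraint, threshold=3):
    """One CA iteration: a non-urban cell (0) urbanizes (1) if at least
    `threshold` of its 8 Moore neighbors are urban and the constraint
    mask (e.g. a slope/water exclusion layer) permits development."""
    padded = np.pad(grid, 1)
    # Count urban neighbors for every cell (sum of the 3x3 window minus center).
    neighbors = sum(
        padded[1 + dy : padded.shape[0] - 1 + dy,
               1 + dx : padded.shape[1] - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    newly_urban = (grid == 0) & (neighbors >= threshold) & (constraint == 1)
    return grid | newly_urban.astype(grid.dtype)

grid = np.zeros((7, 7), dtype=int)
grid[3, 3] = 1                      # seed city center
allowed = np.ones_like(grid)        # unconstrained for this toy run
for _ in range(3):
    grid = grow_step(grid, allowed, threshold=1)
```

With threshold 1 the seed expands by one Moore ring per step; calibration against historical rasters would adjust the threshold, constraint layers, and rule changes over time.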

  11. OPTIMIZED REAL-TIME CONTROL OF COMBINED SEWERAGE SYSTEMS: TWO CASE STUDIES

    EPA Science Inventory

    The paper presents results of two case studies of Real-Time Control (RTC) alternatives evaluations that were conducted on portions of sewerage systems near Paris, France and in Quebec City, Canada, respectively. The studies were performed at real-scale demonstration sites. RTC ...

  12. Improving the twilight model for polar cap absorption nowcasts

    NASA Astrophysics Data System (ADS)

    Rogers, N. C.; Kero, A.; Honary, F.; Verronen, P. T.; Warrington, E. M.; Danskin, D. W.

    2016-11-01

    During solar proton events (SPE), energetic protons ionize the polar mesosphere causing HF radio wave attenuation, more strongly on the dayside where the effective recombination coefficient, αeff, is low. Polar cap absorption models predict the 30 MHz cosmic noise absorption, A, measured by riometers, based on real-time measurements of the integrated proton flux-energy spectrum, J. However, empirical models in common use cannot account for regional and day-to-day variations in the daytime and nighttime profiles of αeff(z) or the related sensitivity parameter, m = A/√J. Large prediction errors occur during twilight when m changes rapidly, and due to errors locating the rigidity cutoff latitude. Modeling the twilight change in m as a linear or Gauss error-function transition over a range of solar-zenith angles (χl < χ < χu) provides a better fit to measurements than selecting day or night αeff profiles based on the Earth-shadow height. Optimal model parameters were determined for several polar cap riometers for large SPEs in 1998-2005. The optimal χl parameter was found to be most variable, with smaller values (as low as 60°) post-sunrise compared with pre-sunset, and with positive correlation between riometers over a wide area. Day and night values of m exhibited higher correlation for closely spaced riometers. A nowcast simulation is presented in which the rigidity boundary latitude and twilight model parameters are optimized by assimilating age-weighted measurements from 25 riometers. The technique reduces model bias, and root-mean-square errors are reduced by up to 30% compared with a model employing no riometer data assimilation.
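The error-function twilight transition described above can be sketched as follows. The day/night sensitivities m_day and m_night and the transition bounds chi_l and chi_u are illustrative placeholders, not the fitted parameters from the riometer study:

```python
import math

def sensitivity(chi, m_day, m_night, chi_l, chi_u):
    """Blend the sensitivity m between its day value (chi <= chi_l) and
    night value (chi >= chi_u) with a Gauss error-function transition."""
    if chi <= chi_l:
        return m_day
    if chi >= chi_u:
        return m_night
    # Map chi into [-2, 2] so erf spans ~[-0.995, 0.995] across the window.
    t = 4.0 * (chi - chi_l) / (chi_u - chi_l) - 2.0
    w = 0.5 * (1.0 + math.erf(t))   # 0 at chi_l, ~1 at chi_u
    return m_day + (m_night - m_day) * w

def absorption(J, chi, m_day=0.2, m_night=0.04, chi_l=80.0, chi_u=100.0):
    """Predicted 30 MHz cosmic noise absorption A = m * sqrt(J)."""
    return sensitivity(chi, m_day, m_night, chi_l, chi_u) * math.sqrt(J)
```

In the nowcast scheme, these parameters would be re-fitted by assimilating age-weighted riometer measurements rather than held fixed.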

  13. A toxicity cost function approach to optimal CPA equilibration in tissues.

    PubMed

    Benson, James D; Higgins, Adam Z; Desai, Kunjan; Eroglu, Ali

    2018-02-01

    There is growing need for cryopreserved tissue samples that can be used in transplantation and regenerative medicine. While a number of specific tissue types have been successfully cryopreserved, this success is not general, and there is not a uniform approach to cryopreservation of arbitrary tissues. Additionally, while there are a number of long-established approaches towards optimizing cryoprotocols in single cell suspensions, and even plated cell monolayers, computational approaches in tissue cryopreservation have classically been limited to explanatory models. Here we develop a numerical approach to adapt cell-based CPA equilibration damage models for use in a classical tissue mass transport model. To implement this with real-world parameters, we measured CPA diffusivity in three human-sourced tissue types, skin, fibroid and myometrium, yielding propylene glycol diffusivities of 0.6 × 10⁻⁶ cm²/s, 1.2 × 10⁻⁶ cm²/s and 1.3 × 10⁻⁶ cm²/s, respectively. Based on these results, we numerically predict and compare optimal multistep equilibration protocols that minimize the cell-based cumulative toxicity cost function and the damage due to excessive osmotic gradients at the tissue boundary. Our numerical results show that there are fundamental differences between protocols designed to minimize total CPA exposure time in tissues and protocols designed to minimize accumulated CPA toxicity, and that "one size fits all" stepwise approaches are predicted to be more toxic and take considerably longer than needed. Copyright © 2017 Elsevier Inc. All rights reserved.
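A stepwise CPA equilibration of the kind compared here can be sketched as a 1-D explicit finite-difference diffusion model with an accumulated toxicity cost. Only the diffusivity value is taken from the abstract; the slab thickness, toxicity exponent, and two-step bath schedule are assumptions for illustration, not the paper's optimized protocol:

```python
import numpy as np

D = 1.2e-6               # cm^2/s, fibroid propylene glycol diffusivity (above)
L = 0.1                  # cm, slab half-thickness (assumed)
nx = 51
dx = L / (nx - 1)
dt = 0.4 * dx * dx / D   # stable explicit step (requires dt <= dx^2 / (2 D))

def equilibrate(boundary_steps, alpha=1.6):
    """Run a stepwise boundary-concentration protocol; return the final
    concentration profile and a toxicity cost accumulated as the time
    integral of c^alpha over the slab (alpha is an assumed exponent)."""
    c = np.zeros(nx)
    toxicity = 0.0
    for c_boundary, duration in boundary_steps:
        for _ in range(int(duration / dt)):
            c[0] = c_boundary               # CPA bath at the tissue surface
            c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
            c[-1] = c[-2]                   # zero-flux at the slab midplane
            toxicity += np.sum(c ** alpha) * dt
    return c, toxicity

# Two-step protocol: half-strength bath, then full strength (normalized units).
profile, cost = equilibrate([(0.5, 600.0), (1.0, 600.0)])
```

Comparing the accumulated cost across candidate step schedules is the essence of the toxicity-cost-function approach, as opposed to simply minimizing total exposure time.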

  14. Regularization Parameter Selection for Nonlinear Iterative Image Restoration and MRI Reconstruction Using GCV and SURE-Based Methods

    PubMed Central

    Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2012-01-01

    Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate, SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures, Predicted-SURE and Projected-SURE (which require knowledge of the noise variance σ²), and GCV (which does not need σ²) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently leads to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764
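The SURE principle underlying these parameter-selection measures can be illustrated with a Monte-Carlo divergence estimate, which avoids forming the Jacobian explicitly. Here soft-thresholding stands in for the iterative reconstructions discussed above; the signal, noise level, and threshold grid are assumptions, not the paper's method:

```python
import numpy as np

def soft_threshold(y, lam):
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def mc_sure(y, f, sigma, eps=1e-4, rng=None):
    """SURE(y) = ||f(y) - y||^2 / n - sigma^2 + 2 sigma^2 div(f) / n,
    where div(f) = trace of the Jacobian of f is estimated with one
    random probe b ~ N(0, I):  div ≈ b . (f(y + eps b) - f(y)) / eps."""
    rng = np.random.default_rng(rng)
    n = y.size
    b = rng.standard_normal(n)
    div = b @ (f(y + eps * b) - f(y)) / eps
    return np.sum((f(y) - y) ** 2) / n - sigma**2 + 2 * sigma**2 * div / n

# Pick the threshold minimizing the SURE estimate of MSE on noisy data.
rng = np.random.default_rng(0)
x = np.zeros(4096); x[:100] = 5.0          # sparse ground truth (assumed)
sigma = 1.0
y = x + sigma * rng.standard_normal(x.size)
lams = np.linspace(0.1, 3.0, 30)
best = min(lams, key=lambda l: mc_sure(y, lambda v: soft_threshold(v, l),
                                       sigma, rng=1))
```

Because SURE is an unbiased MSE estimate, minimizing it over the parameter grid tracks the MSE-optimal choice without access to the ground truth, which is the property exploited by Predicted-SURE and Projected-SURE.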

  15. Design of teleoperation system with a force-reflecting real-time simulator

    NASA Technical Reports Server (NTRS)

    Hirata, Mitsunori; Sato, Yuichi; Nagashima, Fumio; Maruyama, Tsugito

    1994-01-01

    We developed a force-reflecting teleoperation system that uses a real-time graphic simulator. This system eliminates the effects of communication time delays in remote robot manipulation. The simulator provides the operator with a predictive display and feedback of computed contact forces through a six-degree-of-freedom (6-DOF) master arm on a real-time basis. With this system, peg-in-hole tasks involving round-trip communication time delays of up to a few seconds were performed at three support levels: a real image alone, a predictive display with a real image, and a real-time graphic simulator with computed-contact-force reflection and a predictive display. The experimental results indicate that the best teleoperation efficiency was achieved by using the force-reflecting simulator with the two images: the shortest work time, the lowest maximum sensor reading, and a 100 percent success rate were obtained. These results demonstrate the effectiveness of simulated force reflection for teleoperation efficiency.

  16. Enhancing the 'real world' prediction of cardiovascular events and major bleeding with the CHA2DS2-VASc and HAS-BLED scores using multiple biomarkers.

    PubMed

    Roldán, Vanessa; Rivera-Caravaca, José Miguel; Shantsila, Alena; García-Fernández, Amaya; Esteve-Pastor, María Asunción; Vilchez, Juan Antonio; Romera, Marta; Valdés, Mariano; Vicente, Vicente; Marín, Francisco; Lip, Gregory Y H

    2018-02-01

    European guidelines on atrial fibrillation (AF) suggest the use of biomarkers to stratify patients for stroke and bleeding risks. We investigated whether a multibiomarker strategy improved the predictive performance of CHA2DS2-VASc and HAS-BLED in anticoagulated AF patients. We included consecutive patients stabilized for six months on vitamin K antagonists (INRs 2.0-3.0). High-sensitivity troponin T, NT-proBNP, interleukin-6 and von Willebrand factor concentrations and the glomerular filtration rate (eGFR; using the MDRD-4 formula) were quantified at baseline. Time in therapeutic range (TTR) was recorded at six months after inclusion. Patients were followed up for a median of 2375 (IQR 1564-2887) days and all adverse events were recorded. In 1361 patients, adding the four blood biomarkers, TTR and MDRD-eGFR significantly increased the predictive value of CHA2DS2-VASc by c-index (0.63 vs. 0.65; p = .030) and IDI (0.85%; p < .001), but not by NRI (-2.82%; p < .001). The predictive value of HAS-BLED increased by up to 1.34% by IDI (p < .001). Nevertheless, the overall predictive value remains modest (c-indexes approximately 0.65), and decision curve analyses found lower net benefit compared with the original scores. Addition of biomarkers enhanced the predictive value of CHA2DS2-VASc and HAS-BLED, although the overall improvement was modest and the added predictive advantage over the original scores was marginal. Key Messages Recent European guidelines on atrial fibrillation (AF) suggest for the first time the use of biomarkers to stratify patients for stroke and bleeding risks, but their usefulness for real-world risk stratification is still questionable.
In this cohort study involving 1361 AF patients optimally anticoagulated with vitamin K antagonists, adding high-sensitivity troponin T, N-terminal pro-B-type natriuretic peptide, interleukin-6, von Willebrand factor, glomerular filtration rate (by the MDRD-4 formula) and time in therapeutic range increased the predictive value of CHA2DS2-VASc for cardiovascular events, but not the predictive value of HAS-BLED for major bleeding. Reclassification analyses did not show improvement when adding multiple biomarkers. Despite the improvement observed, the added predictive advantage is marginal, and the clinical usefulness and net benefit compared with the current clinical scores are lower.

  17. Evaluation of traffic signal timing optimization methods using a stochastic and microscopic simulation program.

    DOT National Transportation Integrated Search

    2003-01-01

    This study evaluated existing traffic signal optimization programs including Synchro,TRANSYT-7F, and genetic algorithm optimization using real-world data collected in Virginia. As a first step, a microscopic simulation model, VISSIM, was extensively ...

  18. RTDS implementation of an improved sliding mode based inverter controller for PV system.

    PubMed

    Islam, Gazi; Muyeen, S M; Al-Durra, Ahmed; Hasanien, Hany M

    2016-05-01

    This paper proposes a novel approach for testing the dynamics and control aspects of a large-scale photovoltaic (PV) system in real time, along with resolving design hindrances of controller parameters, using a Real Time Digital Simulator (RTDS). In general, the harmonic profile of a fast controller has a wide distribution due to the large bandwidth of the controller. The major contribution of this paper is that the proposed control strategy gives an improved voltage harmonic profile, distributing it more around the switching frequency, along with a fast transient response; filter design thus becomes easier. The implementation of a control strategy with high bandwidth in the small time steps of the RTDS is not straightforward. This paper presents a methodology for practitioners to implement such a control scheme in the RTDS. As part of the industrial process, the controller parameters are optimized using the particle swarm optimization (PSO) technique to improve low voltage ride through (LVRT) performance under network disturbance. The response surface methodology (RSM) is well adapted to build analytical models for recovery time (Rt), maximum percentage overshoot (MPOS), settling time (Ts), and steady-state error (Ess) of the voltage profile immediately after the inverter under disturbance. A systematic approach to controller parameter optimization is detailed. The transient performance of the PSO-based optimization method applied to the proposed sliding-mode-controlled PV inverter is compared with results from the genetic algorithm (GA) based optimization technique. The reported real-time implementation challenges and controller optimization procedure are applicable to other control applications in the field of renewable and distributed generation systems. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
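The PSO step used for controller-parameter tuning can be sketched generically. The quadratic cost with a known minimum at hypothetical gains (Kp, Ki) = (2.0, 0.5) is a stand-in for the RSM response model of recovery time, overshoot, and settling time described above:

```python
import numpy as np

def pso(cost, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard global-best particle swarm: inertia w, cognitive pull c1
    toward each particle's best, social pull c2 toward the swarm's best."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))   # positions
    v = np.zeros_like(x)                                   # velocities
    pbest = x.copy()
    pbest_cost = np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, lo.size))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                         # keep in bounds
        costs = np.array([cost(p) for p in x])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()

# Hypothetical cost surface with its minimum at gains (Kp, Ki) = (2.0, 0.5).
cost = lambda g: (g[0] - 2.0) ** 2 + 4.0 * (g[1] - 0.5) ** 2
best, best_cost = pso(cost, (np.array([0.0, 0.0]), np.array([5.0, 2.0])))
```

In the RTDS workflow, each cost evaluation would instead query the RSM surrogate (or a real-time simulation run) for the candidate gains.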

  19. Type- and Subtype-Specific Influenza Forecast.

    PubMed

    Kandula, Sasikiran; Yang, Wan; Shaman, Jeffrey

    2017-03-01

    Prediction of the growth and decline of infectious disease incidence has advanced considerably in recent years. As these forecasts improve, their public health utility should increase, particularly as interventions are developed that make explicit use of forecast information. It is the task of the research community to increase the content and improve the accuracy of these infectious disease predictions. Presently, operational real-time forecasts of total influenza incidence are produced at the municipal and state level in the United States. These forecasts are generated using ensemble simulations depicting local influenza transmission dynamics, which have been optimized prior to forecast with observations of influenza incidence and data assimilation methods. Here, we explore whether forecasts targeted to predict influenza by type and subtype during 2003-2015 in the United States were more or less accurate than forecasts targeted to predict total influenza incidence. We found that forecasts separated by type/subtype generally produced more accurate predictions and, when summed, produced more accurate predictions of total influenza incidence. These findings indicate that monitoring influenza by type and subtype not only provides more detailed observational content but supports more accurate forecasting. More accurate forecasting can help officials better respond to and plan for current and future influenza activity. © The Author 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  20. PAMPA--critical factors for better predictions of absorption.

    PubMed

    Avdeef, Alex; Bendels, Stefanie; Di, Li; Faller, Bernard; Kansy, Manfred; Sugano, Kiyohiko; Yamauchi, Yukinori

    2007-11-01

    PAMPA, log P(OCT), and Caco-2 are useful tools in drug discovery for the prediction of oral absorption, brain penetration and for the development of structure-permeability relationships. Each approach has its advantages and limitations. Selection criteria for methods are based on many different factors: predictability, throughput, cost and personal preferences (people factor). The PAMPA concerns raised by Galinis-Luciani et al. (Galinis-Luciani et al., 2007, J Pharm Sci, this issue) are answered by experienced PAMPA practitioners, inventors and developers from diverse research organizations. Guidelines on how to use PAMPA are discussed. PAMPA and PAMPA-BBB have much better predictivity for oral absorption and brain penetration than log P(OCT) for real-world drug discovery compounds. PAMPA and Caco-2 have similar predictivity for passive oral absorption. However, it is not advisable to use PAMPA to predict absorption involving transporter-mediated processes, such as active uptake or efflux. Measurement of PAMPA is much more rapid and cost effective than Caco-2 and log P(OCT). PAMPA assay conditions are critical in order to generate high quality and relevant data, including permeation time, assay pH, stirring, use of cosolvents and selection of detection techniques. The success of using PAMPA in drug discovery depends on careful data interpretation, use of optimal assay conditions, implementation and integration strategies, and education of users. Copyright 2007 Wiley-Liss, Inc.
